Schema (column name, dtype, and observed length/value range):

| Column | Dtype | Range / values |
|---|---|---|
| url | string | length 51–54 |
| repository_url | string | 1 value |
| labels_url | string | length 65–68 |
| comments_url | string | length 60–63 |
| events_url | string | length 58–61 |
| html_url | string | length 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | length 18–19 |
| number | int64 | 1–8.69k |
| title | string | length 1–382 |
| user | dict | |
| labels | list | length 0–5 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | length 2–118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | length 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 values |
| is_pull_request | bool | 2 classes |
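Rows that follow this schema can be mirrored as a typed record for downstream processing. A minimal Python sketch, assuming raw viewer values arrive as strings with thousands separators (field names come from the schema; the `IssueRecord` type, `parse_record` helper, and sample values are illustrative, not part of the dataset):

```python
from dataclasses import dataclass

@dataclass
class IssueRecord:
    """Subset of the schema's columns; remaining fields follow the same pattern."""
    url: str
    number: int
    title: str
    state: str            # stringclasses: one of 2 values ("open" / "closed")
    comments: int
    created_at: str       # timestamp[s], kept as ISO-8601 text here
    is_pull_request: bool

def parse_record(raw: dict) -> IssueRecord:
    # Coerce raw viewer strings (e.g. "1,254", "false") into typed values.
    return IssueRecord(
        url=raw["url"],
        number=int(str(raw["number"]).replace(",", "")),
        title=raw["title"],
        state=raw["state"],
        comments=int(raw["comments"]),
        created_at=raw["created_at"],
        is_pull_request=str(raw["is_pull_request"]).lower() == "true",
    )

record = parse_record({
    "url": "https://api.github.com/repos/ollama/ollama/issues/1254",
    "number": "1,254",
    "title": '"Model" not found, try pulling it first',
    "state": "closed",
    "comments": 6,
    "created_at": "2023-11-23T11:17:39",
    "is_pull_request": "false",
})
print(record.number, record.state, record.is_pull_request)  # prints: 1254 closed False
```

In practice one would load such a dataset through a library rather than hand-parse it; the sketch only makes the column types above concrete.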
**Row 1**
- url: https://api.github.com/repos/ollama/ollama/issues/1254
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1254/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1254/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1254/events
- html_url: https://github.com/ollama/ollama/issues/1254
- id: 2,008,016,076
- node_id: I_kwDOJ0Z1Ps53r-TM
- number: 1,254
- title: "Model" not found, try pulling it first
- user: { "login": "rehberim360", "id": 144798027, "node_id": "U_kgDOCKFxSw", "avatar_url": "https://avatars.githubusercontent.com/u/144798027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rehberim360", "html_url": "https://github.com/rehberim360", "followers_url": "https://api.github.com/users/...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 6
- created_at: 2023-11-23T11:17:39
- updated_at: 2024-05-02T07:05:34
- closed_at: 2024-01-03T17:39:45
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Hello everyone. I host Ollama in google VM. All firewall settings etc. have been made. I am connecting remotely via API. ![1](https://github.com/jmorganca/ollama/assets/144798027/7c3ff8ed-aefd-44e9-978a-de48b9e8774d) I pulled my models while in Ollama service start. ![2](https://github.com/jmorganca/ollama/ass...
- closed_by: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1254/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 2**
- url: https://api.github.com/repos/ollama/ollama/issues/6062
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/6062/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/6062/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/6062/events
- html_url: https://github.com/ollama/ollama/pull/6062
- id: 2,436,420,429
- node_id: PR_kwDOJ0Z1Ps52zPwX
- number: 6,062
- title: server: OLLAMA in modelfile and manifests
- user: { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-07-29T21:34:00
- updated_at: 2024-08-07T22:26:25
- closed_at: 2024-08-07T22:26:25
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6062", "html_url": "https://github.com/ollama/ollama/pull/6062", "diff_url": "https://github.com/ollama/ollama/pull/6062.diff", "patch_url": "https://github.com/ollama/ollama/pull/6062.patch", "merged_at": null }
- body: new optional parameter `OLLAMA` in modelfile to specify minimum version of ollama to run this model: `ollama create newmodel` ``` FROM mymodel.gguf OLLAMA 0.2.3 ``` using another model with the `FROM` command defaults to the version if they specify it right now. otherwise, you can set it yourself ``` FROM new...
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/6062/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 3**
- url: https://api.github.com/repos/ollama/ollama/issues/8131
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/8131/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/8131/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/8131/events
- html_url: https://github.com/ollama/ollama/pull/8131
- id: 2,744,160,590
- node_id: PR_kwDOJ0Z1Ps6Fc7_F
- number: 8,131
- title: scripts: sign renamed macOS binary
- user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-12-17T07:41:17
- updated_at: 2024-12-18T02:03:51
- closed_at: 2024-12-18T02:03:49
- author_association: MEMBER
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8131", "html_url": "https://github.com/ollama/ollama/pull/8131", "diff_url": "https://github.com/ollama/ollama/pull/8131.diff", "patch_url": "https://github.com/ollama/ollama/pull/8131.patch", "merged_at": "2024-12-18T02:03:49" }
- body: null
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/8131/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 4**
- url: https://api.github.com/repos/ollama/ollama/issues/7904
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7904/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7904/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7904/events
- html_url: https://github.com/ollama/ollama/issues/7904
- id: 2,710,401,505
- node_id: I_kwDOJ0Z1Ps6hjXHh
- number: 7,904
- title: fatal error: index out of range
- user: { "login": "prubinst", "id": 136655984, "node_id": "U_kgDOCCU0cA", "avatar_url": "https://avatars.githubusercontent.com/u/136655984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prubinst", "html_url": "https://github.com/prubinst", "followers_url": "https://api.github.com/users/prubinst/...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-12-02T04:14:35
- updated_at: 2024-12-02T04:14:35
- closed_at: null
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? I'm running a dspy script that uses model `llama3-instruct:latest`. Script starts properly but after some ~20 minutes I get an exception like this: ``` ... [GIN] 2024/12/02 - 01:05:51 | 404 | 476.079µs | 127.0.0.1 | POST "/api/show" fatal error: index out of range runtime...
- closed_by: null
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7904/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: false
**Row 5**
- url: https://api.github.com/repos/ollama/ollama/issues/5687
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5687/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5687/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5687/events
- html_url: https://github.com/ollama/ollama/issues/5687
- id: 2,407,312,180
- node_id: I_kwDOJ0Z1Ps6PfKs0
- number: 5,687
- title: /api/chat role Enum became case sensitive.
- user: { "login": "wkr1337", "id": 28607631, "node_id": "MDQ6VXNlcjI4NjA3NjMx", "avatar_url": "https://avatars.githubusercontent.com/u/28607631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wkr1337", "html_url": "https://github.com/wkr1337", "followers_url": "https://api.github.com/users/wkr133...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-07-14T07:46:55
- updated_at: 2024-07-15T20:55:58
- closed_at: 2024-07-15T20:55:58
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? I updated from version 0.1.48 to version 0.2.5. After the update, the /api/chat endpoint changed. The `role` object inside the `messages` object became case **sensitive**. Here is an example request that used to work before the update: `{ "model": "llama3", "messages": [ ...
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5687/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5687/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 6**
- url: https://api.github.com/repos/ollama/ollama/issues/4320
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4320/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4320/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4320/events
- html_url: https://github.com/ollama/ollama/pull/4320
- id: 2,290,273,418
- node_id: PR_kwDOJ0Z1Ps5vILX6
- number: 4,320
- title: add phi2 mem
- user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-05-10T19:13:55
- updated_at: 2024-05-10T19:35:09
- closed_at: 2024-05-10T19:35:08
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/4320", "html_url": "https://github.com/ollama/ollama/pull/4320", "diff_url": "https://github.com/ollama/ollama/pull/4320.diff", "patch_url": "https://github.com/ollama/ollama/pull/4320.patch", "merged_at": "2024-05-10T19:35:08" }
- body: null
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4320/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 7**
- url: https://api.github.com/repos/ollama/ollama/issues/1999
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1999/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1999/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1999/events
- html_url: https://github.com/ollama/ollama/pull/1999
- id: 2,081,695,499
- node_id: PR_kwDOJ0Z1Ps5kEtz8
- number: 1,999
- title: Fix CPU-only build under Android Termux enviornment.
- user: { "login": "lainedfles", "id": 126992880, "node_id": "U_kgDOB5HB8A", "avatar_url": "https://avatars.githubusercontent.com/u/126992880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lainedfles", "html_url": "https://github.com/lainedfles", "followers_url": "https://api.github.com/users/lai...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-01-15T10:13:07
- updated_at: 2024-01-28T23:04:20
- closed_at: 2024-01-19T01:16:54
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1999", "html_url": "https://github.com/ollama/ollama/pull/1999", "diff_url": "https://github.com/ollama/ollama/pull/1999.diff", "patch_url": "https://github.com/ollama/ollama/pull/1999.patch", "merged_at": "2024-01-19T01:16:54" }
- body: Update gpu.go initGPUHandles() to declare gpuHandles variable before reading it. This resolves an "invalid memory address or nil pointer dereference" error. Update dyn_ext_server.c to avoid setting the RTLD_DEEPBIND flag under __TERMUX__ (Android).
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1999/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 8**
- url: https://api.github.com/repos/ollama/ollama/issues/8587
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/8587/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/8587/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/8587/events
- html_url: https://github.com/ollama/ollama/pull/8587
- id: 2,811,285,359
- node_id: PR_kwDOJ0Z1Ps6I_qRI
- number: 8,587
- title: llm: update library lookup logic now that there is one runner
- user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2025-01-26T02:49:27
- updated_at: 2025-01-29T05:07:50
- closed_at: 2025-01-29T05:07:49
- author_association: MEMBER
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8587", "html_url": "https://github.com/ollama/ollama/pull/8587", "diff_url": "https://github.com/ollama/ollama/pull/8587.diff", "patch_url": "https://github.com/ollama/ollama/pull/8587.patch", "merged_at": "2025-01-29T05:07:49" }
- body: This removes the `runners` package now that there is only a single runner executable built (the `ollama` binary itself!). It tries to minimize changes to `discover` and `gpu` where possible.
- closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/8587/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 9**
- url: https://api.github.com/repos/ollama/ollama/issues/487
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/487/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/487/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/487/events
- html_url: https://github.com/ollama/ollama/pull/487
- id: 1,886,532,209
- node_id: PR_kwDOJ0Z1Ps5Z0QgK
- number: 487
- title: update dockerignore
- user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2023-09-07T20:36:33
- updated_at: 2023-09-07T21:16:18
- closed_at: 2023-09-07T21:16:17
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/487", "html_url": "https://github.com/ollama/ollama/pull/487", "diff_url": "https://github.com/ollama/ollama/pull/487.diff", "patch_url": "https://github.com/ollama/ollama/pull/487.patch", "merged_at": "2023-09-07T21:16:17" }
- body: null
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/487/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 10**
- url: https://api.github.com/repos/ollama/ollama/issues/7467
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7467/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7467/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7467/events
- html_url: https://github.com/ollama/ollama/pull/7467
- id: 2,629,985,838
- node_id: PR_kwDOJ0Z1Ps6ArXdL
- number: 7,467
- title: Align rocm compiler flags
- user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2024-11-01T22:51:17
- updated_at: 2024-11-07T18:20:53
- closed_at: 2024-11-07T18:20:51
- author_association: COLLABORATOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7467", "html_url": "https://github.com/ollama/ollama/pull/7467", "diff_url": "https://github.com/ollama/ollama/pull/7467.diff", "patch_url": "https://github.com/ollama/ollama/pull/7467.patch", "merged_at": "2024-11-07T18:20:51" }
- body: Bring consistency with the old generate script behavior
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7467/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7467/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 11**
- url: https://api.github.com/repos/ollama/ollama/issues/4086
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/4086/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/4086/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/4086/events
- html_url: https://github.com/ollama/ollama/pull/4086
- id: 2,274,041,399
- node_id: PR_kwDOJ0Z1Ps5uR5_v
- number: 4,086
- title: Add preflight OPTIONS handling and update CORS config
- user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 1
- created_at: 2024-05-01T19:14:45
- updated_at: 2024-05-08T20:14:01
- closed_at: 2024-05-08T20:14:00
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/4086", "html_url": "https://github.com/ollama/ollama/pull/4086", "diff_url": "https://github.com/ollama/ollama/pull/4086.diff", "patch_url": "https://github.com/ollama/ollama/pull/4086.patch", "merged_at": "2024-05-08T20:14:00" }
- body: Couple of tweaks to our CORS configuration and how we handle `OPTIONS` requests. This update is geared towards making our service more compatible with clients originally designed to work with OpenAI, where sending an `Authorization` header is common. #### Details of Changes 1. **Handling OPTIONS Requests**: I added...
- closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4086/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/4086/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
**Row 12**
- url: https://api.github.com/repos/ollama/ollama/issues/6183
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/6183/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/6183/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/6183/events
- html_url: https://github.com/ollama/ollama/issues/6183
- id: 2,449,023,581
- node_id: I_kwDOJ0Z1Ps6R-SJd
- number: 6,183
- title: LINE FEED problems in recent commit
- user: { "login": "FellowTraveler", "id": 339191, "node_id": "MDQ6VXNlcjMzOTE5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FellowTraveler", "html_url": "https://github.com/FellowTraveler", "followers_url": "https://api.github...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 5
- created_at: 2024-08-05T16:55:13
- updated_at: 2024-08-11T06:24:34
- closed_at: 2024-08-11T06:24:34
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? Since I grabbed the latest code, it IMMEDIATELY tells me that I have unstaged changes and won't let me checkout other branches. Also makes it impossible to rebase, etc. Git stash doesn't fix it. There is some kind of line feed issue probably in a very recent merge. **I can't be the only one who...
- closed_by: { "login": "FellowTraveler", "id": 339191, "node_id": "MDQ6VXNlcjMzOTE5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FellowTraveler", "html_url": "https://github.com/FellowTraveler", "followers_url": "https://api.github...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/6183/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 13**
- url: https://api.github.com/repos/ollama/ollama/issues/7452
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/7452/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/7452/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/7452/events
- html_url: https://github.com/ollama/ollama/issues/7452
- id: 2,627,583,362
- node_id: I_kwDOJ0Z1Ps6cnb2C
- number: 7,452
- title: makefiles should verify compiler before trying to build GPU target
- user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7700262114, "node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g...
- state: closed
- locked: false
- assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
- milestone: null
- comments: 0
- created_at: 2024-10-31T18:42:01
- updated_at: 2024-12-10T17:47:21
- closed_at: 2024-12-10T17:47:21
- author_association: COLLABORATOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? If you have the GPU libraries present, but not the compiler, we'll try to build and fail with strange errors from ccache since no compiler command was passed in. ``` /usr/bin/ccache -c -fPIC -D_GNU_SOURCE -fPIC -Wno-unused-function -std=gnu++11 -mavx -parallel-jobs=2 -c -O3 -DGGML_USE_CUDA...
- closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7452/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/7452/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 14**
- url: https://api.github.com/repos/ollama/ollama/issues/1454
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/1454/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/1454/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/1454/events
- html_url: https://github.com/ollama/ollama/issues/1454
- id: 2,034,331,723
- node_id: I_kwDOJ0Z1Ps55QXBL
- number: 1,454
- title: Repeated output during use
- user: { "login": "duyaofei", "id": 6417789, "node_id": "MDQ6VXNlcjY0MTc3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6417789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/duyaofei", "html_url": "https://github.com/duyaofei", "followers_url": "https://api.github.com/users/duyao...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 2
- created_at: 2023-12-10T11:31:10
- updated_at: 2024-03-12T21:18:55
- closed_at: 2024-03-12T21:18:54
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: Today I am running yi:34b chat q4 using Ollama_ K_ When encountering repetitive output from the repeating machine during M, I entered the same issue on the official webpage and the output was normal. It is speculated that the problem arose from the output of invisible control characters. Thank you for your hard work. ...
- closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/1454/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 15**
- url: https://api.github.com/repos/ollama/ollama/issues/5813
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/5813/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/5813/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/5813/events
- html_url: https://github.com/ollama/ollama/issues/5813
- id: 2,420,953,395
- node_id: I_kwDOJ0Z1Ps6QTNEz
- number: 5,813
- title: Bug: ToolCall issue
- user: { "login": "KSemenenko", "id": 4385716, "node_id": "MDQ6VXNlcjQzODU3MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4385716?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KSemenenko", "html_url": "https://github.com/KSemenenko", "followers_url": "https://api.github.com/users...
- labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 2
- created_at: 2024-07-20T15:54:00
- updated_at: 2024-07-20T16:09:04
- closed_at: 2024-07-20T16:09:04
- author_association: NONE
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: null
- pull_request: null
- body: ### What is the issue? **Describe the bug** Tool Calling parameters **To Reproduce** I use Ollama with the OpenAI API. I used this model https://ollama.com/library/llama3-groq-tool-use and I'm doing function calling (for gpt4o this code works perfectly). But I see this as a text answer from the model. ``` <too...
- closed_by: { "login": "KSemenenko", "id": 4385716, "node_id": "MDQ6VXNlcjQzODU3MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4385716?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KSemenenko", "html_url": "https://github.com/KSemenenko", "followers_url": "https://api.github.com/users...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/5813/timeline
- performed_via_github_app: null
- state_reason: completed
- is_pull_request: false
**Row 16**
- url: https://api.github.com/repos/ollama/ollama/issues/463
- repository_url: https://api.github.com/repos/ollama/ollama
- labels_url: https://api.github.com/repos/ollama/ollama/issues/463/labels{/name}
- comments_url: https://api.github.com/repos/ollama/ollama/issues/463/comments
- events_url: https://api.github.com/repos/ollama/ollama/issues/463/events
- html_url: https://github.com/ollama/ollama/pull/463
- id: 1,879,245,066
- node_id: PR_kwDOJ0Z1Ps5ZbhuY
- number: 463
- title: fix not forwarding last token
- user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- labels: []
- state: closed
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: 0
- created_at: 2023-09-03T21:48:36
- updated_at: 2023-09-05T16:01:33
- closed_at: 2023-09-05T16:01:32
- author_association: CONTRIBUTOR
- sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
- active_lock_reason: null
- draft: false
- pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/463", "html_url": "https://github.com/ollama/ollama/pull/463", "diff_url": "https://github.com/ollama/ollama/pull/463.diff", "patch_url": "https://github.com/ollama/ollama/pull/463.patch", "merged_at": "2023-09-05T16:01:32" }
- body: llama.cpp server serves the last token along with `stop: true` also remove unused fields
- closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
- reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- timeline_url: https://api.github.com/repos/ollama/ollama/issues/463/timeline
- performed_via_github_app: null
- state_reason: null
- is_pull_request: true
https://api.github.com/repos/ollama/ollama/issues/4234
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4234/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4234/comments
https://api.github.com/repos/ollama/ollama/issues/4234/events
https://github.com/ollama/ollama/issues/4234
2,283,996,971
I_kwDOJ0Z1Ps6IIwcr
4,234
Customized LLaVA Setup
{ "login": "zhangry868", "id": 6694822, "node_id": "MDQ6VXNlcjY2OTQ4MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/6694822?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangry868", "html_url": "https://github.com/zhangry868", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-07T18:42:46
2024-05-07T23:56:05
2024-05-07T23:56:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I wonder whether there is a guideline on hosting a customized LLaVA model. I have gguf files for both the mmprojector and the base model. Feel free to point me to any related materials/links. Many thanks, Rui
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4234/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2146
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2146/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2146/comments
https://api.github.com/repos/ollama/ollama/issues/2146/events
https://github.com/ollama/ollama/pull/2146
2,094,810,743
PR_kwDOJ0Z1Ps5kxPTJ
2,146
add keep_alive to generate/chat/embedding api endpoints
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
14
2024-01-22T21:47:04
2024-08-11T21:04:54
2024-01-26T22:28:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2146", "html_url": "https://github.com/ollama/ollama/pull/2146", "diff_url": "https://github.com/ollama/ollama/pull/2146.diff", "patch_url": "https://github.com/ollama/ollama/pull/2146.patch", "merged_at": "2024-01-26T22:28:02" }
This change adds a new `keep_alive` parameter to `/api/generate` which controls how long a model is kept loaded in memory. There are three cases: 1. if `keep_alive` is not set, the model will stay loaded for the default duration (5 minutes); 2. if `keep_alive` is set to a positive duration (e.g...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2146/reactions", "total_count": 19, "+1": 19, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2146/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1149
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1149/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1149/comments
https://api.github.com/repos/ollama/ollama/issues/1149/events
https://github.com/ollama/ollama/issues/1149
1,996,069,334
I_kwDOJ0Z1Ps52-ZnW
1,149
No such host no matter what model I pull
{ "login": "chnsh", "id": 7926657, "node_id": "MDQ6VXNlcjc5MjY2NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7926657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chnsh", "html_url": "https://github.com/chnsh", "followers_url": "https://api.github.com/users/chnsh/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
6
2023-11-16T05:06:12
2023-11-27T07:07:52
2023-11-27T07:07:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello 👋 Thank you so much for developing this project. I am excited to use it in my day-to-day work, but when I pull any model - for example `ollama pull codellama:7b-instruct` - I get an error like so. This is true for all models. I am wondering if I am missing any steps. I installed this app from https://olla...
{ "login": "chnsh", "id": 7926657, "node_id": "MDQ6VXNlcjc5MjY2NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7926657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chnsh", "html_url": "https://github.com/chnsh", "followers_url": "https://api.github.com/users/chnsh/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1149/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2966
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2966/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2966/comments
https://api.github.com/repos/ollama/ollama/issues/2966/events
https://github.com/ollama/ollama/pull/2966
2,172,742,801
PR_kwDOJ0Z1Ps5o6ABG
2,966
Add ROCm support to linux install script
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-03-07T01:10:38
2024-03-15T01:00:17
2024-03-15T01:00:16
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2966", "html_url": "https://github.com/ollama/ollama/pull/2966", "diff_url": "https://github.com/ollama/ollama/pull/2966.diff", "patch_url": "https://github.com/ollama/ollama/pull/2966.patch", "merged_at": "2024-03-15T01:00:16" }
Merge after #2885 and after the release is out, to avoid ROCm users failing to install due to the dependency file not being available yet. This depends on corresponding path changes in PR #3008 Prior to merging this, folks who want to install the pre-release on Radeon systems can use the following: ``` curl ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2966/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4601
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4601/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4601/comments
https://api.github.com/repos/ollama/ollama/issues/4601/events
https://github.com/ollama/ollama/issues/4601
2,314,208,616
I_kwDOJ0Z1Ps6J8AVo
4,601
Error: llama runner process has terminated: signal: segmentation fault
{ "login": "guiniao", "id": 44078253, "node_id": "MDQ6VXNlcjQ0MDc4MjUz", "avatar_url": "https://avatars.githubusercontent.com/u/44078253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guiniao", "html_url": "https://github.com/guiniao", "followers_url": "https://api.github.com/users/guinia...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-05-24T02:33:24
2024-08-27T14:03:30
2024-05-24T23:05:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama run codellama:34b error occurred: pulling manifest pulling f36b668ebcd3... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 19 GB pulling 2e0493f67d0c... 100% ▕█████████████████...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4601/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3391
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3391/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3391/comments
https://api.github.com/repos/ollama/ollama/issues/3391/events
https://github.com/ollama/ollama/pull/3391
2,213,932,616
PR_kwDOJ0Z1Ps5rF4p9
3,391
Update troubleshooting link
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-03-28T19:05:29
2024-03-28T20:15:57
2024-03-28T20:15:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3391", "html_url": "https://github.com/ollama/ollama/pull/3391", "diff_url": "https://github.com/ollama/ollama/pull/3391.diff", "patch_url": "https://github.com/ollama/ollama/pull/3391.patch", "merged_at": "2024-03-28T20:15:57" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3391/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3077
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3077/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3077/comments
https://api.github.com/repos/ollama/ollama/issues/3077/events
https://github.com/ollama/ollama/pull/3077
2,181,546,460
PR_kwDOJ0Z1Ps5pX5tX
3,077
fix gpu_info_cuda.c compile warning
{ "login": "mofanke", "id": 54242816, "node_id": "MDQ6VXNlcjU0MjQyODE2", "avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mofanke", "html_url": "https://github.com/mofanke", "followers_url": "https://api.github.com/users/mofank...
[]
closed
false
null
[]
null
0
2024-03-12T12:50:59
2024-03-12T18:08:41
2024-03-12T18:08:40
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3077", "html_url": "https://github.com/ollama/ollama/pull/3077", "diff_url": "https://github.com/ollama/ollama/pull/3077.diff", "patch_url": "https://github.com/ollama/ollama/pull/3077.patch", "merged_at": "2024-03-12T18:08:40" }
fix compile warning `gpu_info_cuda.c: In function ‘cuda_check_vram’: gpu_info_cuda.c:158:20: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 4 has type ‘long long unsigned int’`
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3077/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5104
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5104/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5104/comments
https://api.github.com/repos/ollama/ollama/issues/5104/events
https://github.com/ollama/ollama/issues/5104
2,358,269,569
I_kwDOJ0Z1Ps6MkFaB
5,104
Model request: Tiamat 7B & chronomaid 13B
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.githu...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-06-17T21:13:16
2024-06-17T21:13:16
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Tiamat 7B and chronomaid 13B are two of the best models I have found: both are mostly uncensored and quite good at giving fairly articulate responses on a wide range of topics. Of all the models I have tried, for their size these two are the best and have the widest range and balance. They will do general discussion, roleplayi...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5104/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2420
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2420/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2420/comments
https://api.github.com/repos/ollama/ollama/issues/2420/events
https://github.com/ollama/ollama/issues/2420
2,126,539,503
I_kwDOJ0Z1Ps5-wGrv
2,420
Will you add the "Smaug-72B" model?
{ "login": "konstantin1722", "id": 55327489, "node_id": "MDQ6VXNlcjU1MzI3NDg5", "avatar_url": "https://avatars.githubusercontent.com/u/55327489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konstantin1722", "html_url": "https://github.com/konstantin1722", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
23
2024-02-09T06:13:58
2024-03-12T17:21:12
2024-03-11T19:14:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
They say it outperformed in many ways, GPT-3.5, Mistral Medium and Qwen-72B. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2420/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2420/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7585
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7585/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7585/comments
https://api.github.com/repos/ollama/ollama/issues/7585/events
https://github.com/ollama/ollama/issues/7585
2,645,626,792
I_kwDOJ0Z1Ps6dsQ-o
7,585
why Ollama runs on CPU by default
{ "login": "yhz114514", "id": 119857104, "node_id": "U_kgDOByTf0A", "avatar_url": "https://avatars.githubusercontent.com/u/119857104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yhz114514", "html_url": "https://github.com/yhz114514", "followers_url": "https://api.github.com/users/yhz114...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-11-09T04:48:46
2024-11-09T12:41:18
2024-11-09T12:41:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? My device: NVIDIA RTX 4070 12G, with 10G of video memory remaining. Even when running a 7B model with enough video memory available for it, Ollama is forced to run on the CPU no matter what, even though its performance there is much lower than on the GPU. ### OS Windows ### GPU Nvidia ### CPU Intel ...
{ "login": "yhz114514", "id": 119857104, "node_id": "U_kgDOByTf0A", "avatar_url": "https://avatars.githubusercontent.com/u/119857104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yhz114514", "html_url": "https://github.com/yhz114514", "followers_url": "https://api.github.com/users/yhz114...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7585/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7163
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7163/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7163/comments
https://api.github.com/repos/ollama/ollama/issues/7163/events
https://github.com/ollama/ollama/issues/7163
2,579,319,049
I_kwDOJ0Z1Ps6ZvUkJ
7,163
Ollama does not run
{ "login": "d3tk", "id": 90400076, "node_id": "MDQ6VXNlcjkwNDAwMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/90400076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d3tk", "html_url": "https://github.com/d3tk", "followers_url": "https://api.github.com/users/d3tk/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
45
2024-10-10T16:27:46
2024-11-05T20:03:37
2024-11-05T20:03:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The process never completes when I try to do ollama run or ollama list. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.12
{ "login": "d3tk", "id": 90400076, "node_id": "MDQ6VXNlcjkwNDAwMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/90400076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/d3tk", "html_url": "https://github.com/d3tk", "followers_url": "https://api.github.com/users/d3tk/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7163/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1494
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1494/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1494/comments
https://api.github.com/repos/ollama/ollama/issues/1494/events
https://github.com/ollama/ollama/issues/1494
2,038,852,557
I_kwDOJ0Z1Ps55hmvN
1,494
suggestion: download models to home directory instead of `/usr/share/` on linux?
{ "login": "hualet", "id": 2023967, "node_id": "MDQ6VXNlcjIwMjM5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/2023967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hualet", "html_url": "https://github.com/hualet", "followers_url": "https://api.github.com/users/hualet/foll...
[]
closed
false
null
[]
null
7
2023-12-13T02:43:35
2024-06-25T18:01:25
2023-12-25T14:27:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I suggest that models should be downloaded to a home directory like `~/.ollama/models` instead of `/usr/share/ollama/.ollama/models`, since I think it's a convention that data should be in home, not root. I didn't create root with a capacity big enough and encountered this :joy: ![image](https://github.com/jmorganc...
{ "login": "hualet", "id": 2023967, "node_id": "MDQ6VXNlcjIwMjM5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/2023967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hualet", "html_url": "https://github.com/hualet", "followers_url": "https://api.github.com/users/hualet/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1494/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1494/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3175
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3175/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3175/comments
https://api.github.com/repos/ollama/ollama/issues/3175/events
https://github.com/ollama/ollama/issues/3175
2,189,666,086
I_kwDOJ0Z1Ps6Cg6cm
3,175
Run Mixtral-8x7B on Consumer Hardware with Expert Offloading
{ "login": "arjunkrishna", "id": 5271912, "node_id": "MDQ6VXNlcjUyNzE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arjunkrishna", "html_url": "https://github.com/arjunkrishna", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
1
2024-03-16T00:52:15
2024-03-16T01:24:02
2024-03-16T01:13:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? mixtral:8x7B on an RTX 3090 runs slowly due to its size. ### How should we solve this? This article says we can offload some experts to make it run faster: https://kaitchup.substack.com/p/run-mixtral-8x7b-on-consumer-hardware If you have already implemented this in ollama, t...
{ "login": "arjunkrishna", "id": 5271912, "node_id": "MDQ6VXNlcjUyNzE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arjunkrishna", "html_url": "https://github.com/arjunkrishna", "followers_url": "https://api.github.com...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3175/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2984
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2984/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2984/comments
https://api.github.com/repos/ollama/ollama/issues/2984/events
https://github.com/ollama/ollama/issues/2984
2,174,429,341
I_kwDOJ0Z1Ps6Bmyid
2,984
Examples without code
{ "login": "slovanos", "id": 48527469, "node_id": "MDQ6VXNlcjQ4NTI3NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/48527469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slovanos", "html_url": "https://github.com/slovanos", "followers_url": "https://api.github.com/users/slo...
[]
closed
false
null
[]
null
0
2024-03-07T17:53:01
2024-03-07T18:49:41
2024-03-07T18:49:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Some examples, such as the following, contain no code at all, just a README file: examples/python-chat-app, examples/modelfile-tweetwriter. Is this how it should be?
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2984/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/231
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/231/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/231/comments
https://api.github.com/repos/ollama/ollama/issues/231/events
https://github.com/ollama/ollama/pull/231
1,825,085,139
PR_kwDOJ0Z1Ps5WlXhL
231
Update discord invite link
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
0
2023-07-27T19:43:21
2023-07-27T19:43:53
2023-07-27T19:43:53
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/231", "html_url": "https://github.com/ollama/ollama/pull/231", "diff_url": "https://github.com/ollama/ollama/pull/231.diff", "patch_url": "https://github.com/ollama/ollama/pull/231.patch", "merged_at": "2023-07-27T19:43:53" }
Update discord invite link
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/231/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2456
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2456/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2456/comments
https://api.github.com/repos/ollama/ollama/issues/2456/events
https://github.com/ollama/ollama/issues/2456
2,129,221,795
I_kwDOJ0Z1Ps5-6Vij
2,456
Providing unsupported image formats (e.g. `avif`) results in server error/hang
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
0
2024-02-11T23:26:28
2024-02-12T19:16:21
2024-02-12T19:16:21
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Providing unsupported image formats causes a hang and error
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2456/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8278
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8278/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8278/comments
https://api.github.com/repos/ollama/ollama/issues/8278/events
https://github.com/ollama/ollama/issues/8278
2,764,814,723
I_kwDOJ0Z1Ps6ky7mD
8,278
Ollama v0.5.4 does not respond in stream mode when the tool option is submitted
{ "login": "maminge", "id": 64125498, "node_id": "MDQ6VXNlcjY0MTI1NDk4", "avatar_url": "https://avatars.githubusercontent.com/u/64125498?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maminge", "html_url": "https://github.com/maminge", "followers_url": "https://api.github.com/users/maming...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2025-01-01T03:30:48
2025-01-13T01:50:17
2025-01-13T01:50:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama v0.5.4 does not respond in stream mode when the tool option is submitted. When the content of the reply is not a tool_call, I would like the reply to be streamed. # Thanks a lot for your efforts!!! ------------------------------------------ ### POST DATA to Ollama API: http://localhost:11434/api/chat --...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8278/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5663
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5663/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5663/comments
https://api.github.com/repos/ollama/ollama/issues/5663/events
https://github.com/ollama/ollama/issues/5663
2,406,698,835
I_kwDOJ0Z1Ps6Pc09T
5,663
Error: llama runner process has terminated: signal: abort trap error:vocab size mismatch.
{ "login": "asap-blocky", "id": 147228147, "node_id": "U_kgDOCMaF8w", "avatar_url": "https://avatars.githubusercontent.com/u/147228147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asap-blocky", "html_url": "https://github.com/asap-blocky", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-07-13T04:42:19
2024-08-04T08:46:57
2024-07-13T20:56:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? While attempting to run my fine-tuned model using the Ollama library, I got this error message: "Error: llama runner process has terminated: signal: abort trap error:vocab size mismatch." ### Model and Environment: - The model was fine-tuned using the FastLanguageModel from the unsloth lib...
{ "login": "asap-blocky", "id": 147228147, "node_id": "U_kgDOCMaF8w", "avatar_url": "https://avatars.githubusercontent.com/u/147228147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asap-blocky", "html_url": "https://github.com/asap-blocky", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5663/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1084
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1084/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1084/comments
https://api.github.com/repos/ollama/ollama/issues/1084/events
https://github.com/ollama/ollama/issues/1084
1,988,854,004
I_kwDOJ0Z1Ps52i4D0
1,084
Adding ollama serve to run as a daemon
{ "login": "rutsam", "id": 14162212, "node_id": "MDQ6VXNlcjE0MTYyMjEy", "avatar_url": "https://avatars.githubusercontent.com/u/14162212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rutsam", "html_url": "https://github.com/rutsam", "followers_url": "https://api.github.com/users/rutsam/fo...
[]
closed
false
null
[]
null
3
2023-11-11T09:09:02
2023-12-04T23:45:25
2023-12-04T23:45:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have been experimenting with ollama and noticed it was heavily inspired by docker. However, I run it on a server where I do not use the desktop version, and would thus find it better if an option were added to **run ollama server as a daemon**, in the same fashion as docker compose, symbolized with **a parame...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1084/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6754
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6754/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6754/comments
https://api.github.com/repos/ollama/ollama/issues/6754/events
https://github.com/ollama/ollama/pull/6754
2,519,740,089
PR_kwDOJ0Z1Ps57KYfc
6,754
Added QodeAssist link to README.md
{ "login": "Palm1r", "id": 9195189, "node_id": "MDQ6VXNlcjkxOTUxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/9195189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Palm1r", "html_url": "https://github.com/Palm1r", "followers_url": "https://api.github.com/users/Palm1r/foll...
[]
closed
false
null
[]
null
0
2024-09-11T13:22:04
2024-09-11T20:19:49
2024-09-11T20:19:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6754", "html_url": "https://github.com/ollama/ollama/pull/6754", "diff_url": "https://github.com/ollama/ollama/pull/6754.diff", "patch_url": "https://github.com/ollama/ollama/pull/6754.patch", "merged_at": "2024-09-11T20:19:49" }
QodeAssist is using ollama to provide an AI-powered coding assistant plugin for Qt Creator
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6754/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4124
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4124/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4124/comments
https://api.github.com/repos/ollama/ollama/issues/4124/events
https://github.com/ollama/ollama/issues/4124
2,277,528,591
I_kwDOJ0Z1Ps6HwFQP
4,124
`/api/embeddings` responds with 500 before Ollama is initialized - handle max queued requests failure better
{ "login": "maximiliangugler", "id": 90111898, "node_id": "MDQ6VXNlcjkwMTExODk4", "avatar_url": "https://avatars.githubusercontent.com/u/90111898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maximiliangugler", "html_url": "https://github.com/maximiliangugler", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
5
2024-05-03T11:56:55
2024-05-05T17:53:45
2024-05-05T17:53:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, please forgive the ambiguity of this report. The issue I am encountering is the following: before updating to 0.1.33, I was running version 0.1.32. I was running the server with embedding models for generating embeddings, and I was using the langchain OllamaEmbeddings class ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4124/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6593
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6593/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6593/comments
https://api.github.com/repos/ollama/ollama/issues/6593/events
https://github.com/ollama/ollama/issues/6593
2,501,234,628
I_kwDOJ0Z1Ps6VFc_E
6,593
Get supported models with API
{ "login": "angelozerr", "id": 1932211, "node_id": "MDQ6VXNlcjE5MzIyMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1932211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/angelozerr", "html_url": "https://github.com/angelozerr", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-09-02T15:36:12
2024-09-02T22:02:26
2024-09-02T22:02:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The API provides the capability to get the list of local models, but I have not found an API to get the supported models that we can see on the HTML page at https://ollama.com/library?q=l&sort=featured It would be nice if the API could provide a list of supported models.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6593/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/648
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/648/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/648/comments
https://api.github.com/repos/ollama/ollama/issues/648/events
https://github.com/ollama/ollama/issues/648
1,919,580,844
I_kwDOJ0Z1Ps5yanqs
648
Model Parameters Not Getting Set
{ "login": "fmackenzie", "id": 38498536, "node_id": "MDQ6VXNlcjM4NDk4NTM2", "avatar_url": "https://avatars.githubusercontent.com/u/38498536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fmackenzie", "html_url": "https://github.com/fmackenzie", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
6
2023-09-29T16:28:16
2023-10-02T19:50:10
2023-10-02T19:50:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
From what I can tell, the parameters set in the model file are not getting set properly. Taking the mario Modelfile as an example and adding an EMBED and a few PARAMETERS, the server output suggests that the PARAMETERS are having issues getting set to the appropriate type, and thus are not actually getting set...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/648/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/648/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2447
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2447/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2447/comments
https://api.github.com/repos/ollama/ollama/issues/2447/events
https://github.com/ollama/ollama/pull/2447
2,128,901,871
PR_kwDOJ0Z1Ps5mkovQ
2,447
Add Page Assist to the community integrations
{ "login": "n4ze3m", "id": 39720973, "node_id": "MDQ6VXNlcjM5NzIwOTcz", "avatar_url": "https://avatars.githubusercontent.com/u/39720973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/n4ze3m", "html_url": "https://github.com/n4ze3m", "followers_url": "https://api.github.com/users/n4ze3m/fo...
[]
closed
false
null
[]
null
3
2024-02-11T08:59:21
2024-02-20T19:03:58
2024-02-20T19:03:58
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2447", "html_url": "https://github.com/ollama/ollama/pull/2447", "diff_url": "https://github.com/ollama/ollama/pull/2447.diff", "patch_url": "https://github.com/ollama/ollama/pull/2447.patch", "merged_at": "2024-02-20T19:03:58" }
Hey, I'd like to share my Chrome extension project I've been working on, `Page Assist`, for community integration. It offers a sidebar and web UI for Ollama :). Please review this PR. Thank you.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2447/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/508
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/508/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/508/comments
https://api.github.com/repos/ollama/ollama/issues/508/events
https://github.com/ollama/ollama/pull/508
1,891,322,441
PR_kwDOJ0Z1Ps5aEQFY
508
create the blobs directory correctly
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-09-11T21:53:52
2023-09-11T21:54:52
2023-09-11T21:54:52
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/508", "html_url": "https://github.com/ollama/ollama/pull/508", "diff_url": "https://github.com/ollama/ollama/pull/508.diff", "patch_url": "https://github.com/ollama/ollama/pull/508.patch", "merged_at": "2023-09-11T21:54:52" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/508/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5938
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5938/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5938/comments
https://api.github.com/repos/ollama/ollama/issues/5938/events
https://github.com/ollama/ollama/issues/5938
2,428,849,521
I_kwDOJ0Z1Ps6QxU1x
5,938
Error: could not connect to ollama app, is it running?
{ "login": "wwjCMP", "id": 32979859, "node_id": "MDQ6VXNlcjMyOTc5ODU5", "avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwjCMP", "html_url": "https://github.com/wwjCMP", "followers_url": "https://api.github.com/users/wwjCMP/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
14
2024-07-25T02:49:43
2024-07-26T11:02:04
2024-07-26T11:02:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Environment="OLLAMA_MODELS=/home/try/ollama/models" After changing the environment variable OLLAMA_MODELS, ollama cannot connect. If I cancel it, ollama can run again. What is the reason? ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.8
{ "login": "wwjCMP", "id": 32979859, "node_id": "MDQ6VXNlcjMyOTc5ODU5", "avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwjCMP", "html_url": "https://github.com/wwjCMP", "followers_url": "https://api.github.com/users/wwjCMP/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5938/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6838
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6838/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6838/comments
https://api.github.com/repos/ollama/ollama/issues/6838/events
https://github.com/ollama/ollama/issues/6838
2,531,080,329
I_kwDOJ0Z1Ps6W3TiJ
6,838
Old Context Information fetched
{ "login": "atul-siriusai", "id": 172748914, "node_id": "U_kgDOCkvwcg", "avatar_url": "https://avatars.githubusercontent.com/u/172748914?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atul-siriusai", "html_url": "https://github.com/atul-siriusai", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
14
2024-09-17T12:50:03
2024-11-29T23:55:51
2024-09-18T00:23:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I am currently working on a Retrieval-Augmented Generation (RAG) application using LLaMA 3.1 70B. The workflow involves a set of documents in markdown format and an Excel sheet containing specific information that needs to be extracted from these documents. The process iterates over each row, dynamically gene...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6838/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1914
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1914/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1914/comments
https://api.github.com/repos/ollama/ollama/issues/1914/events
https://github.com/ollama/ollama/pull/1914
2,075,372,743
PR_kwDOJ0Z1Ps5jvQlF
1,914
Smarter GPU Management library detection
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-10T23:07:34
2024-01-11T01:28:42
2024-01-10T23:21:57
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1914", "html_url": "https://github.com/ollama/ollama/pull/1914", "diff_url": "https://github.com/ollama/ollama/pull/1914.diff", "patch_url": "https://github.com/ollama/ollama/pull/1914.patch", "merged_at": "2024-01-10T23:21:57" }
When there are multiple management libraries installed on a system not every one will be compatible with the current driver. This change improves our management library algorithm to build up a set of discovered libraries based on glob patterns, and then try all of them until we're able to load one without error. ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1914/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6712
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6712/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6712/comments
https://api.github.com/repos/ollama/ollama/issues/6712/events
https://github.com/ollama/ollama/issues/6712
2,514,175,809
I_kwDOJ0Z1Ps6V20dB
6,712
400 Bad Request when running behind Nginx Proxy Manager
{ "login": "Joly0", "id": 13993216, "node_id": "MDQ6VXNlcjEzOTkzMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Joly0", "html_url": "https://github.com/Joly0", "followers_url": "https://api.github.com/users/Joly0/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
14
2024-09-09T14:45:23
2024-10-17T09:00:21
2024-10-08T19:27:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hey guys, I have an ollama instance that I would like to make public (of course with basic auth) through Nginx Proxy Manager, but whenever I try to reach the API, even with a simple request like `Invoke-RestMethod -Method Get -Uri https://ollama.mydoamin.com/api/tags`, I get the error `Invoke-Rest...
{ "login": "Joly0", "id": 13993216, "node_id": "MDQ6VXNlcjEzOTkzMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Joly0", "html_url": "https://github.com/Joly0", "followers_url": "https://api.github.com/users/Joly0/follow...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6712/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5339
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5339/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5339/comments
https://api.github.com/repos/ollama/ollama/issues/5339/events
https://github.com/ollama/ollama/issues/5339
2,378,928,059
I_kwDOJ0Z1Ps6Ny4-7
5,339
Deepseek coder v2 is providing gibberish output
{ "login": "Manik04IISER", "id": 120251924, "node_id": "U_kgDOByrmFA", "avatar_url": "https://avatars.githubusercontent.com/u/120251924?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Manik04IISER", "html_url": "https://github.com/Manik04IISER", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw...
closed
false
null
[]
null
4
2024-06-27T19:20:14
2025-01-06T07:04:10
2025-01-06T07:04:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The model is Deepseek Coder v2 16b q5_K_M. I provided a code block to the model and it started to produce gibberish, whereas any other model works fine. ![Screenshot_2024-06-28-00-46-09_1920x1080](https://github.com/ollama/ollama/assets/120251924/fceb7566-cc56-4537-bbb4-6ad431b92db...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5339/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7161
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7161/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7161/comments
https://api.github.com/repos/ollama/ollama/issues/7161/events
https://github.com/ollama/ollama/issues/7161
2,578,850,725
I_kwDOJ0Z1Ps6ZtiOl
7,161
Problem loading an LLM model on the Jetson AGX Orin Developer Kit (64GB)
{ "login": "witold-gren", "id": 2304938, "node_id": "MDQ6VXNlcjIzMDQ5Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/2304938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/witold-gren", "html_url": "https://github.com/witold-gren", "followers_url": "https://api.github.com/us...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-10-10T13:25:42
2024-10-11T23:41:14
2024-10-11T23:40:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hey, thanks for your great contribution to this project. I use it on a normal computer with an RTX 4090 card and everything works very well. However, I have a problem with my Nvidia Jetson AGX Orin. I'm trying to run it the same way, and I just install ollama using the command: ``` curl -fsSL h...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7161/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8598
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8598/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8598/comments
https://api.github.com/repos/ollama/ollama/issues/8598/events
https://github.com/ollama/ollama/issues/8598
2,811,838,719
I_kwDOJ0Z1Ps6nmUD_
8,598
Error Running Mistral Nemo Imported from .safetensors
{ "login": "aallgeier", "id": 121313302, "node_id": "U_kgDOBzsYFg", "avatar_url": "https://avatars.githubusercontent.com/u/121313302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aallgeier", "html_url": "https://github.com/aallgeier", "followers_url": "https://api.github.com/users/aallge...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2025-01-26T23:15:22
2025-01-26T23:27:58
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I encountered an error when attempting to run the Mistral Nemo model imported from `.safetensors`. I intend to run the model on CPU only, even though I have a GPU (see the Modelfile below). - I am able to run the model converted to `.gguf`. - However, I would like to import and run directly from...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8598/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8598/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4535/comments
https://api.github.com/repos/ollama/ollama/issues/4535/events
https://github.com/ollama/ollama/pull/4535
2,305,902,909
PR_kwDOJ0Z1Ps5v9HOx
4,535
Correct typo in error message
{ "login": "likejazz", "id": 1250095, "node_id": "MDQ6VXNlcjEyNTAwOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1250095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/likejazz", "html_url": "https://github.com/likejazz", "followers_url": "https://api.github.com/users/likej...
[]
closed
false
null
[]
null
0
2024-05-20T12:34:40
2024-05-21T23:09:58
2024-05-21T20:39:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4535", "html_url": "https://github.com/ollama/ollama/pull/4535", "diff_url": "https://github.com/ollama/ollama/pull/4535.diff", "patch_url": "https://github.com/ollama/ollama/pull/4535.patch", "merged_at": "2024-05-21T20:39:02" }
The spelling of the term "request" has been corrected, which was previously mistakenly written as "requeset" in the error log message.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4535/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6149
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6149/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6149/comments
https://api.github.com/repos/ollama/ollama/issues/6149/events
https://github.com/ollama/ollama/issues/6149
2,446,249,341
I_kwDOJ0Z1Ps6Rzs19
6,149
Why is the NVidia GPU always going crashing when using ./ollama-linux-amd64 ?
{ "login": "tifDev", "id": 39730484, "node_id": "MDQ6VXNlcjM5NzMwNDg0", "avatar_url": "https://avatars.githubusercontent.com/u/39730484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tifDev", "html_url": "https://github.com/tifDev", "followers_url": "https://api.github.com/users/tifDev/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-08-03T08:50:48
2024-10-24T03:18:01
2024-10-24T03:17:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I've tried the protable edition that doesn't needs root installation (./ollama-linux-amd64). Everything work fine but after a couple minutes the GPU stops working and ollama starts to use CPU only. This is the error faced: ``` log ggml_cuda_init: failed to initialize CUDA: ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6149/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6149/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6991
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6991/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6991/comments
https://api.github.com/repos/ollama/ollama/issues/6991/events
https://github.com/ollama/ollama/pull/6991
2,551,623,293
PR_kwDOJ0Z1Ps582kB9
6,991
llama: wire up builtin runner to main binary
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-09-26T22:24:29
2024-10-08T16:17:34
2024-10-08T15:53:58
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6991", "html_url": "https://github.com/ollama/ollama/pull/6991", "diff_url": "https://github.com/ollama/ollama/pull/6991.diff", "patch_url": "https://github.com/ollama/ollama/pull/6991.patch", "merged_at": null }
Replaced by #7138 on main
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6991/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4761
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4761/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4761/comments
https://api.github.com/repos/ollama/ollama/issues/4761/events
https://github.com/ollama/ollama/pull/4761
2,328,763,972
PR_kwDOJ0Z1Ps5xLT3c
4,761
revert tokenize ffi
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-06-01T00:25:44
2024-06-01T01:54:22
2024-06-01T01:54:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4761", "html_url": "https://github.com/ollama/ollama/pull/4761", "diff_url": "https://github.com/ollama/ollama/pull/4761.diff", "patch_url": "https://github.com/ollama/ollama/pull/4761.patch", "merged_at": "2024-06-01T01:54:21" }
this change reverts the series of changes introduced to call tokenize/detokenize. there's a bug on windows specifically where it'll segfault loading deepseek-llm's pretokenizer regexp. the most likely candidate is unicode support differences in mingw used by cgo and msvc used by the subprocess
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4761/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4010
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4010/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4010/comments
https://api.github.com/repos/ollama/ollama/issues/4010/events
https://github.com/ollama/ollama/issues/4010
2,267,908,860
I_kwDOJ0Z1Ps6HLYr8
4,010
How to set 'verbose' ON by default after a model is loaded?
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[]
closed
false
null
[]
null
1
2024-04-29T00:35:01
2024-04-29T15:38:13
2024-04-29T15:38:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How to set 'verbose' ON by default after a model is loaded? It is annoying to type /set verbose every time.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4010/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3099
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3099/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3099/comments
https://api.github.com/repos/ollama/ollama/issues/3099/events
https://github.com/ollama/ollama/issues/3099
2,183,623,078
I_kwDOJ0Z1Ps6CJ3Gm
3,099
Working with gptscript
{ "login": "prologic", "id": 1290234, "node_id": "MDQ6VXNlcjEyOTAyMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1290234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prologic", "html_url": "https://github.com/prologic", "followers_url": "https://api.github.com/users/prolo...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-03-13T10:20:17
2024-03-13T14:38:14
2024-03-13T14:37:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Just wanted to bring your attention to this nice little project called [gptscript](https://github.com/gptscript-ai/gptscript) that mentions not working natively with Ollama [here](https://github.com/gptscript-ai/gptscript/issues/136#issuecomment-1993903566) due to missing `/models` endpoint. I had a quick look around t...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3099/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2667
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2667/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2667/comments
https://api.github.com/repos/ollama/ollama/issues/2667/events
https://github.com/ollama/ollama/issues/2667
2,148,389,832
I_kwDOJ0Z1Ps6ADdPI
2,667
Trojan:Script/Wacatac.B!ml After Ollama Update Ollama
{ "login": "gargakk", "id": 11261036, "node_id": "MDQ6VXNlcjExMjYxMDM2", "avatar_url": "https://avatars.githubusercontent.com/u/11261036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gargakk", "html_url": "https://github.com/gargakk", "followers_url": "https://api.github.com/users/gargak...
[]
closed
false
null
[]
null
2
2024-02-22T07:20:24
2024-02-22T07:30:08
2024-02-22T07:27:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Today, after an automatic Ollama update on a Windows machine, the system found Trojan:Script/Wacatac.B!ml. Why?? ![Screenshot 2024-02-22 081700](https://github.com/ollama/ollama/assets/11261036/2fe0cad3-c26d-40aa-b979-7a37281d5570)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2667/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2667/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1594
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1594/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1594/comments
https://api.github.com/repos/ollama/ollama/issues/1594/events
https://github.com/ollama/ollama/issues/1594
2,047,830,033
I_kwDOJ0Z1Ps56D2gR
1,594
Wont run on amd or intel gpu's?
{ "login": "srgantmoomoo", "id": 69589624, "node_id": "MDQ6VXNlcjY5NTg5NjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69589624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srgantmoomoo", "html_url": "https://github.com/srgantmoomoo", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
25
2023-12-19T03:02:47
2023-12-19T20:02:55
2023-12-19T19:57:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
it seems that I cannot get this to run on my amd or my intel machine... does it only support nvidia gpu's? keep getting this... ``` 2023/12/18 21:59:15 images.go:737: total blobs: 0 2023/12/18 21:59:15 images.go:744: total unused blobs removed: 0 2023/12/18 21:59:15 routes.go:871: Listening on 127.0.0.1:11434 (v...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1594/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/702
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/702/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/702/comments
https://api.github.com/repos/ollama/ollama/issues/702/events
https://github.com/ollama/ollama/pull/702
1,926,953,741
PR_kwDOJ0Z1Ps5b8Kp2
702
display a message during a long model load in interactive mode
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-10-04T20:46:26
2023-10-20T16:43:54
2023-10-11T16:55:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/702", "html_url": "https://github.com/ollama/ollama/pull/702", "diff_url": "https://github.com/ollama/ollama/pull/702.diff", "patch_url": "https://github.com/ollama/ollama/pull/702.patch", "merged_at": null }
Previous behavior: The user must wait for the model to load while a spinner is displayed. This could take a while for large models. New behavior: After 30 seconds the spinner displays the message "please wait...". This will be removed from the display once there is a response from the generate endpoint.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/702/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1881
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1881/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1881/comments
https://api.github.com/repos/ollama/ollama/issues/1881/events
https://github.com/ollama/ollama/issues/1881
2,073,404,373
I_kwDOJ0Z1Ps57laPV
1,881
Only generate lots of hashes
{ "login": "ZhihaoZhang97", "id": 31653817, "node_id": "MDQ6VXNlcjMxNjUzODE3", "avatar_url": "https://avatars.githubusercontent.com/u/31653817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhihaoZhang97", "html_url": "https://github.com/ZhihaoZhang97", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
9
2024-01-10T00:58:30
2024-01-27T02:47:24
2024-01-27T02:47:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![Screenshot from 2024-01-10 11-52-07](https://github.com/jmorganca/ollama/assets/31653817/30f08c0d-c924-471f-b740-896ba804c2bf) Not sure if I am the first to encounter with this issue, when I installed the ollama and run the llama2 from the Quickstart, it only outputs a lots of '####'. I suspect that might be c...
{ "login": "ZhihaoZhang97", "id": 31653817, "node_id": "MDQ6VXNlcjMxNjUzODE3", "avatar_url": "https://avatars.githubusercontent.com/u/31653817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhihaoZhang97", "html_url": "https://github.com/ZhihaoZhang97", "followers_url": "https://api.githu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1881/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1547
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1547/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1547/comments
https://api.github.com/repos/ollama/ollama/issues/1547/events
https://github.com/ollama/ollama/issues/1547
2,044,089,323
I_kwDOJ0Z1Ps551lPr
1,547
API Llava Image Path
{ "login": "webmastermario", "id": 121729061, "node_id": "U_kgDOB0FwJQ", "avatar_url": "https://avatars.githubusercontent.com/u/121729061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/webmastermario", "html_url": "https://github.com/webmastermario", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
1
2023-12-15T17:19:14
2023-12-15T17:34:55
2023-12-15T17:34:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, how can I use the API with the llava model? How do I add the image in the curl command, like: curl http://localhost:11434/api/generate -d '{ "model": "llava", "prompt":"Whats in the image?" }' "image":"path" ?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1547/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7789
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7789/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7789/comments
https://api.github.com/repos/ollama/ollama/issues/7789/events
https://github.com/ollama/ollama/issues/7789
2,681,972,884
I_kwDOJ0Z1Ps6f26iU
7,789
How to prevent Ollama requests to change the running model on Ollama?
{ "login": "WoodenTiger000", "id": 5031620, "node_id": "MDQ6VXNlcjUwMzE2MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/5031620?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WoodenTiger000", "html_url": "https://github.com/WoodenTiger000", "followers_url": "https://api.gith...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-11-22T06:32:53
2024-12-29T22:15:15
2024-12-29T22:15:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can we prevent Ollama requests from changing the running model on Ollama?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7789/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2622
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2622/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2622/comments
https://api.github.com/repos/ollama/ollama/issues/2622/events
https://github.com/ollama/ollama/issues/2622
2,145,659,261
I_kwDOJ0Z1Ps5_5Cl9
2,622
How to set a crt file or disable the SSL verify in Windows
{ "login": "NeuroWhAI", "id": 1130686, "node_id": "MDQ6VXNlcjExMzA2ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1130686?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NeuroWhAI", "html_url": "https://github.com/NeuroWhAI", "followers_url": "https://api.github.com/users/Ne...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-02-21T02:32:25
2024-05-24T16:44:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello. I am having a problem with 403 response from run command while trying to use the Ollama(Windows Preview) behind company proxy server. There is nothing special left in the log, but it is obvious that it is a proxy problem. The http(s)_proxy environment variable is set and crt certificate is installed. **i rem...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2622/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2622/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6660
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6660/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6660/comments
https://api.github.com/repos/ollama/ollama/issues/6660/events
https://github.com/ollama/ollama/issues/6660
2,508,688,797
I_kwDOJ0Z1Ps6Vh42d
6,660
on ollama.com's profile settings page , email addr shown mangled
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw...
open
false
null
[]
null
0
2024-09-05T20:51:57
2024-09-05T20:53:48
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? On the profile settings page, where you edit name and bio, just after the top header it first shows: username \n email address of the logged-in user, then the edit fields. Anyway, my email (gmail) ends with a 7, but that 7 isn't displayed there on the page, just the address without the ending 7 ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6660/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3318
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3318/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3318/comments
https://api.github.com/repos/ollama/ollama/issues/3318/events
https://github.com/ollama/ollama/pull/3318
2,204,107,381
PR_kwDOJ0Z1Ps5qkpRM
3,318
Update faq.md
{ "login": "ltrivaldi322", "id": 125631184, "node_id": "U_kgDOB3z60A", "avatar_url": "https://avatars.githubusercontent.com/u/125631184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ltrivaldi322", "html_url": "https://github.com/ltrivaldi322", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
0
2024-03-24T00:17:12
2024-03-24T00:25:18
2024-03-24T00:25:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3318", "html_url": "https://github.com/ollama/ollama/pull/3318", "diff_url": "https://github.com/ollama/ollama/pull/3318.diff", "patch_url": "https://github.com/ollama/ollama/pull/3318.patch", "merged_at": null }
Use right config option you fucking idiot
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3318/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5964
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5964/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5964/comments
https://api.github.com/repos/ollama/ollama/issues/5964/events
https://github.com/ollama/ollama/pull/5964
2,431,159,586
PR_kwDOJ0Z1Ps52hhlZ
5,964
Fix typo and improve readability
{ "login": "eust-w", "id": 39115651, "node_id": "MDQ6VXNlcjM5MTE1NjUx", "avatar_url": "https://avatars.githubusercontent.com/u/39115651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eust-w", "html_url": "https://github.com/eust-w", "followers_url": "https://api.github.com/users/eust-w/fo...
[]
closed
false
null
[]
null
2
2024-07-26T00:13:38
2024-08-14T00:54:20
2024-08-14T00:54:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5964", "html_url": "https://github.com/ollama/ollama/pull/5964", "diff_url": "https://github.com/ollama/ollama/pull/5964.diff", "patch_url": "https://github.com/ollama/ollama/pull/5964.patch", "merged_at": "2024-08-14T00:54:20" }
* Rename updatAvailableMenuID to updateAvailableMenuID * Replace unused cmd parameter with _ in RunServer function * Fix typos in comments
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5964/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2618
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2618/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2618/comments
https://api.github.com/repos/ollama/ollama/issues/2618/events
https://github.com/ollama/ollama/pull/2618
2,145,128,986
PR_kwDOJ0Z1Ps5nb4dg
2,618
Update llama.cpp submodule to `66c1968f7`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-02-20T19:35:40
2024-02-20T22:42:32
2024-02-20T22:42:31
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2618", "html_url": "https://github.com/ollama/ollama/pull/2618", "diff_url": "https://github.com/ollama/ollama/pull/2618.diff", "patch_url": "https://github.com/ollama/ollama/pull/2618.patch", "merged_at": "2024-02-20T22:42:31" }
This updates the llama.cpp commit to one that supports the newer embedding models. A few updates: - The previous patch 02 was merged 🎉 - Numa is now an enum
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2618/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3254
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3254/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3254/comments
https://api.github.com/repos/ollama/ollama/issues/3254/events
https://github.com/ollama/ollama/issues/3254
2,195,323,215
I_kwDOJ0Z1Ps6C2flP
3,254
I can't run the llama2 model on my computer
{ "login": "Francois-lenne", "id": 114836746, "node_id": "U_kgDOBthFCg", "avatar_url": "https://avatars.githubusercontent.com/u/114836746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Francois-lenne", "html_url": "https://github.com/Francois-lenne", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-03-19T15:41:14
2024-03-19T23:01:21
2024-03-19T23:00:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I type `ollama run llama2` in my CLI I get this error: `Error: error loading model /Users/francoislenne/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1f` I tried deleting and re-pulling the llama2 LLM but I still have the same error when i ta...
{ "login": "Francois-lenne", "id": 114836746, "node_id": "U_kgDOBthFCg", "avatar_url": "https://avatars.githubusercontent.com/u/114836746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Francois-lenne", "html_url": "https://github.com/Francois-lenne", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3254/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4388
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4388/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4388/comments
https://api.github.com/repos/ollama/ollama/issues/4388/events
https://github.com/ollama/ollama/issues/4388
2,291,709,100
I_kwDOJ0Z1Ps6ImLSs
4,388
Accept or Ignore additional headers in OpenAI compatible endpoints
{ "login": "UdaraJay", "id": 1122227, "node_id": "MDQ6VXNlcjExMjIyMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1122227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UdaraJay", "html_url": "https://github.com/UdaraJay", "followers_url": "https://api.github.com/users/Udara...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
1
2024-05-13T03:10:34
2024-06-06T22:19:05
2024-06-06T22:19:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The OpenAI javascript SDK adds some `x-stainless-*` headers to API calls that cause preflight checks to fail against Ollama's API when switching out the baseUrl for Ollama's `v1/chat/completions` endpoint. ``` Access to fetch at 'http://localhost:11434/v1/chat/completions' from origin 'http://localhost' has been b...
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4388/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6885
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6885/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6885/comments
https://api.github.com/repos/ollama/ollama/issues/6885/events
https://github.com/ollama/ollama/issues/6885
2,537,472,316
I_kwDOJ0Z1Ps6XPsE8
6,885
Please support FreeBSD
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivic...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-09-19T22:32:42
2024-09-20T17:59:14
2024-09-20T17:59:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, We have the FreeBSD port for ollama version 0.3.6: https://cgit.freebsd.org/ports/tree/misc/ollama However, later versions fail to compile because of this extensive patch that one user submitted: https://cgit.freebsd.org/ports/tree/misc/ollama/files/patch-FreeBSD-compatibility FreeBSD is very similar to Linu...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6885/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6885/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/441
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/441/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/441/comments
https://api.github.com/repos/ollama/ollama/issues/441/events
https://github.com/ollama/ollama/pull/441
1,872,558,414
PR_kwDOJ0Z1Ps5ZFM4e
441
GGUF support
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
2
2023-08-29T22:02:08
2023-09-07T17:55:38
2023-09-07T17:55:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/441", "html_url": "https://github.com/ollama/ollama/pull/441", "diff_url": "https://github.com/ollama/ollama/pull/441.diff", "patch_url": "https://github.com/ollama/ollama/pull/441.patch", "merged_at": "2023-09-07T17:55:37" }
This change adds support for running GGUF models, which are currently in beta with llama.cpp. We will continue to run GGML models, and this transition will be seamless to users. - Adds a llama.cpp mainline submodule which runs `GGUF` models - Dynamically select the right runner for the model type - Moved some code...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/441/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5720
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5720/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5720/comments
https://api.github.com/repos/ollama/ollama/issues/5720/events
https://github.com/ollama/ollama/issues/5720
2,410,526,967
I_kwDOJ0Z1Ps6Prbj3
5,720
ollama-docker-app using 100% CPU without reason in idle state
{ "login": "jan-panoch", "id": 34071544, "node_id": "MDQ6VXNlcjM0MDcxNTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/34071544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jan-panoch", "html_url": "https://github.com/jan-panoch", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-07-16T08:20:55
2024-07-23T00:22:45
2024-07-23T00:22:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? We are running ollama with Docker, and the container ollama-docker-app-1 consumes 100% CPU at idle without reason. Not always, but occasionally; a restart helps. The ollama Docker stack is started using the docker compose file https://github.com/valiantlynx/ollama-docker/blob/main/docker-compose-...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5720/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6273
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6273/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6273/comments
https://api.github.com/repos/ollama/ollama/issues/6273/events
https://github.com/ollama/ollama/issues/6273
2,457,079,355
I_kwDOJ0Z1Ps6SdA47
6,273
unsupported content type: unknown
{ "login": "little1d", "id": 115958756, "node_id": "U_kgDOBulj5A", "avatar_url": "https://avatars.githubusercontent.com/u/115958756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/little1d", "html_url": "https://github.com/little1d", "followers_url": "https://api.github.com/users/little1d/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-08-09T04:30:27
2024-08-14T20:47:16
2024-08-14T20:47:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Is ollama able to create a model from a safetensors file? I ran this command, and the error says unsupported content type: unknown. I have tried the llama3.1 model and qwen2-0.5b, with the same outcome. **`command`** ollama create mymodel2 -f ./Modelfile ![image](https://github.com/user-attachments/ass...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6273/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/8044
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8044/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8044/comments
https://api.github.com/repos/ollama/ollama/issues/8044/events
https://github.com/ollama/ollama/issues/8044
2,732,674,276
I_kwDOJ0Z1Ps6i4Uzk
8,044
I can't use llama3.2 after download. Error: llama runner process has terminated: exit status 0xc0000409
{ "login": "Hastersun", "id": 78581699, "node_id": "MDQ6VXNlcjc4NTgxNjk5", "avatar_url": "https://avatars.githubusercontent.com/u/78581699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hastersun", "html_url": "https://github.com/Hastersun", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-12-11T11:27:36
2024-12-14T16:35:35
2024-12-14T16:35:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Python is installed. ![image](https://github.com/user-attachments/assets/dfcba50c-02e8-4287-9699-ac8cbc1903c4) Version is 0.1.48
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8044/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8683/comments
https://api.github.com/repos/ollama/ollama/issues/8683/events
https://github.com/ollama/ollama/issues/8683
2,819,701,999
I_kwDOJ0Z1Ps6oETzv
8,683
Support release build without AVX
{ "login": "yoonsio", "id": 24367477, "node_id": "MDQ6VXNlcjI0MzY3NDc3", "avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoonsio", "html_url": "https://github.com/yoonsio", "followers_url": "https://api.github.com/users/yoonsi...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
0
2025-01-30T01:34:51
2025-01-30T02:13:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Release image fails to detect the GPU when running on a CPU that does not support AVX. Please add a non-AVX release build to the release pipeline. ``` msg="Dynamic LLM libraries" runners="[cpu_avx cpu cpu_avx2]" ``` Custom image can be built by overriding `CUSTOM_CPU_FLAGS`. #### Example: ``` docker build --platform li...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8683/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8683/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3876
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3876/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3876/comments
https://api.github.com/repos/ollama/ollama/issues/3876/events
https://github.com/ollama/ollama/issues/3876
2,261,414,283
I_kwDOJ0Z1Ps6GynGL
3,876
serving llama3 does not work
{ "login": "lambdaofgod", "id": 3647577, "node_id": "MDQ6VXNlcjM2NDc1Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lambdaofgod", "html_url": "https://github.com/lambdaofgod", "followers_url": "https://api.github.com/us...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-04-24T14:15:31
2024-12-26T07:36:09
2024-04-25T09:02:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am able to run llama 3 (`ollama run llama3`) but when I try to run the server I get >{"error":"model 'llama3' not found, try pulling it first"} This is in spite of `ollama list` detecting the model. Specifically I ran ``` curl $LLAMA_URL -d '{ "model": "llama3", "mes...
{ "login": "lambdaofgod", "id": 3647577, "node_id": "MDQ6VXNlcjM2NDc1Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lambdaofgod", "html_url": "https://github.com/lambdaofgod", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3876/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3876/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/990
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/990/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/990/comments
https://api.github.com/repos/ollama/ollama/issues/990/events
https://github.com/ollama/ollama/issues/990
1,976,963,640
I_kwDOJ0Z1Ps511hI4
990
TPU backend support
{ "login": "coolrazor007", "id": 62222426, "node_id": "MDQ6VXNlcjYyMjIyNDI2", "avatar_url": "https://avatars.githubusercontent.com/u/62222426?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coolrazor007", "html_url": "https://github.com/coolrazor007", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
19
2023-11-03T21:39:11
2024-12-23T00:57:37
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would love to see Ollama run on a TPU not just GPU. Has this been done by anyone already?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/990/reactions", "total_count": 26, "+1": 26, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/990/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/580
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/580/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/580/comments
https://api.github.com/repos/ollama/ollama/issues/580/events
https://github.com/ollama/ollama/pull/580
1,909,612,642
PR_kwDOJ0Z1Ps5bB0OW
580
refactor and add other platforms
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-22T23:16:38
2023-09-23T13:42:41
2023-09-23T13:42:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/580", "html_url": "https://github.com/ollama/ollama/pull/580", "diff_url": "https://github.com/ollama/ollama/pull/580.diff", "patch_url": "https://github.com/ollama/ollama/pull/580.patch", "merged_at": "2023-09-23T13:42:41" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/580/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4422
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4422/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4422/comments
https://api.github.com/repos/ollama/ollama/issues/4422/events
https://github.com/ollama/ollama/pull/4422
2,294,659,259
PR_kwDOJ0Z1Ps5vW661
4,422
add yi-1.5 example to model library
{ "login": "Yimi81", "id": 66633207, "node_id": "MDQ6VXNlcjY2NjMzMjA3", "avatar_url": "https://avatars.githubusercontent.com/u/66633207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yimi81", "html_url": "https://github.com/Yimi81", "followers_url": "https://api.github.com/users/Yimi81/fo...
[]
closed
false
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
[ { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https...
null
4
2024-05-14T07:35:23
2024-11-21T08:47:10
2024-11-21T08:47:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4422", "html_url": "https://github.com/ollama/ollama/pull/4422", "diff_url": "https://github.com/ollama/ollama/pull/4422.diff", "patch_url": "https://github.com/ollama/ollama/pull/4422.patch", "merged_at": null }
We hope the open-source community can be promptly informed that ollama supports the yi-1.5 series. We have updated the list of example models in the README.md. Thank you for your time. @jmorganca
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4422/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/902
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/902/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/902/comments
https://api.github.com/repos/ollama/ollama/issues/902/events
https://github.com/ollama/ollama/issues/902
1,960,645,151
I_kwDOJ0Z1Ps503RIf
902
Support more params when ollama run
{ "login": "UICJohn", "id": 4167985, "node_id": "MDQ6VXNlcjQxNjc5ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/4167985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UICJohn", "html_url": "https://github.com/UICJohn", "followers_url": "https://api.github.com/users/UICJohn/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
2
2023-10-25T06:32:04
2024-01-16T22:29:27
2024-01-16T22:29:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi there, Thanks for all you have done. Just wondering if there is any plan to support more params/options when running an ollama model? For example, --rope-freq-scale, so that we can run like this: `ollama run xxxx --rope-freq-scale 0.125` I can see there is an Options map in api.GenerateRequest but it is not used...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/902/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8302
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8302/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8302/comments
https://api.github.com/repos/ollama/ollama/issues/8302/events
https://github.com/ollama/ollama/issues/8302
2,768,584,621
I_kwDOJ0Z1Ps6lBT-t
8,302
no compatible GPUs were discovered
{ "login": "fatebugs", "id": 65278566, "node_id": "MDQ6VXNlcjY1Mjc4NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/65278566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fatebugs", "html_url": "https://github.com/fatebugs", "followers_url": "https://api.github.com/users/fat...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-04T07:37:06
2025-01-24T09:50:20
2025-01-24T09:50:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Due to the limitations of the latest version of macOS, I am unable to use the ollama.app client and can only use Docker as the runtime tool for ollama. When running ollama in Docker on a Mac mini M4, it reports that the GPU cannot be found. In this case, how should I solve this problem ![...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8302/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8586
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8586/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8586/comments
https://api.github.com/repos/ollama/ollama/issues/8586/events
https://github.com/ollama/ollama/issues/8586
2,811,276,191
I_kwDOJ0Z1Ps6nkKuf
8,586
/v2/library/ 404 ollama -v 0.5.5
{ "login": "moofya", "id": 55646892, "node_id": "MDQ6VXNlcjU1NjQ2ODky", "avatar_url": "https://avatars.githubusercontent.com/u/55646892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moofya", "html_url": "https://github.com/moofya", "followers_url": "https://api.github.com/users/moofya/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2025-01-26T02:19:11
2025-01-26T05:38:58
2025-01-26T05:38:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? “https://registry.ollama.ai/v2/library/deepseek-r1/manifests/8b": dial tcp 104.21.75.227:443: i/o timeout ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "moofya", "id": 55646892, "node_id": "MDQ6VXNlcjU1NjQ2ODky", "avatar_url": "https://avatars.githubusercontent.com/u/55646892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moofya", "html_url": "https://github.com/moofya", "followers_url": "https://api.github.com/users/moofya/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8586/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/466
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/466/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/466/comments
https://api.github.com/repos/ollama/ollama/issues/466/events
https://github.com/ollama/ollama/pull/466
1,879,256,450
PR_kwDOJ0Z1Ps5Zbj0D
466
template extra args
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-09-03T22:34:05
2024-04-14T22:45:31
2024-04-14T22:45:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/466", "html_url": "https://github.com/ollama/ollama/pull/466", "diff_url": "https://github.com/ollama/ollama/pull/466.diff", "patch_url": "https://github.com/ollama/ollama/pull/466.patch", "merged_at": null }
User defined arguments to the template, making things like infilling easier: ``` FROM codellama:7b-code TEMPLATE "<PRE> {{ .Args.Prefix }} <SUF> {{- .Args.Suffix }} <MID>" ``` Request: ``` $ curl -s localhost:11434/api/generate -d '{"model":"codellama-infill","args":{"Prefix":"def remove_non_ascii(s: str) ->...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/466/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7074
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7074/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7074/comments
https://api.github.com/repos/ollama/ollama/issues/7074/events
https://github.com/ollama/ollama/issues/7074
2,560,503,446
I_kwDOJ0Z1Ps6Yni6W
7,074
Docker image size is over a GB larger than 0.3.10
{ "login": "codefromthecrypt", "id": 64215, "node_id": "MDQ6VXNlcjY0MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codefromthecrypt", "html_url": "https://github.com/codefromthecrypt", "followers_url": "https://api.github...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVw...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-10-02T01:29:18
2024-10-07T01:13:28
2024-10-02T16:20:01
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I noticed a changelog entry about reducing Docker image size, but when looking at darwin/arm64, it is significantly larger than 0.3.10. This may also apply to other platforms. Can you clarify whether this is accidental or intentional? ``` ollama/ollama 0.3.12 443040bf2568...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7074/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5979
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5979/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5979/comments
https://api.github.com/repos/ollama/ollama/issues/5979/events
https://github.com/ollama/ollama/issues/5979
2,431,842,051
I_kwDOJ0Z1Ps6Q8vcD
5,979
0.2.6-rocm and above cannot be pulled with containerd on fedora
{ "login": "volatilemolotov", "id": 20559691, "node_id": "MDQ6VXNlcjIwNTU5Njkx", "avatar_url": "https://avatars.githubusercontent.com/u/20559691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/volatilemolotov", "html_url": "https://github.com/volatilemolotov", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
8
2024-07-26T09:39:47
2024-08-01T13:25:38
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Pulling the image results in ``` Error: failed to extract layer sha256:00d2c36d84f963d50ac6a568b0be71eea96f3579770ef47c2ac3f94d4d3c346a: exit status 1: unpigz: skipping: <stdin>: corrupted -- crc32 mismatch ``` This happens for 0.2.6-rocm and later versions. Not sure why it fails and what...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5979/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5979/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/475
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/475/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/475/comments
https://api.github.com/repos/ollama/ollama/issues/475/events
https://github.com/ollama/ollama/issues/475
1,883,710,052
I_kwDOJ0Z1Ps5wRyJk
475
Bug: Importing a local model fails on MacOS
{ "login": "tianxiemaochiyu", "id": 16790771, "node_id": "MDQ6VXNlcjE2NzkwNzcx", "avatar_url": "https://avatars.githubusercontent.com/u/16790771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianxiemaochiyu", "html_url": "https://github.com/tianxiemaochiyu", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2023-09-06T10:24:32
2023-12-04T19:23:52
2023-12-04T19:23:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Importing a local model fails on MacOS: ``` Parsing modelfile Looking for model ⠋ Creating model layer Error: Invalid file magic ``` Here is the content of my Modelfile: ``` FROM ./ggml-Llama2-Chinese-13b-Chat-q4_k_m.ggmlv3.Q4_K_M.bin TEMPLATE """ {{- if .First }} <<SYS>> {{ .System }} <</SYS>> {{- end ...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/475/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6091
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6091/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6091/comments
https://api.github.com/repos/ollama/ollama/issues/6091/events
https://github.com/ollama/ollama/issues/6091
2,439,176,404
I_kwDOJ0Z1Ps6RYuDU
6,091
Parallel Bug: Would rather queue than reload on another GPU
{ "login": "txd0213", "id": 62833076, "node_id": "MDQ6VXNlcjYyODMzMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/62833076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/txd0213", "html_url": "https://github.com/txd0213", "followers_url": "https://api.github.com/users/txd021...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-07-31T05:41:51
2024-08-01T22:19:54
2024-08-01T22:19:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? **Experimental environment: 8 x A6000 GPUs** **LLM: qwen2:7b** **Environment variables:** ``` Environment="OLLAMA_NUM_PARALLEL=16" Environment="OLLAMA_MAX_LOADED_MODELS=4" ``` When the concurrency is less than or equal to **4**, parallel processing works as expected. However, once it e...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6091/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7837
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7837/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7837/comments
https://api.github.com/repos/ollama/ollama/issues/7837/events
https://github.com/ollama/ollama/pull/7837
2,692,768,351
PR_kwDOJ0Z1Ps6DHZ8L
7,837
Export ctx, gpu, parallel parameters via /api/ps
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
[]
open
false
null
[]
null
0
2024-11-26T01:39:39
2024-11-26T01:39:39
null
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7837", "html_url": "https://github.com/ollama/ollama/pull/7837", "diff_url": "https://github.com/ollama/ollama/pull/7837.diff", "patch_url": "https://github.com/ollama/ollama/pull/7837.patch", "merged_at": null }
Allow clients to query some model run-time parameters.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7837/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4683/comments
https://api.github.com/repos/ollama/ollama/issues/4683/events
https://github.com/ollama/ollama/pull/4683
2,321,534,896
PR_kwDOJ0Z1Ps5wylbl
4,683
Fix nvidia detection in install script
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-05-28T16:57:49
2024-05-28T16:59:37
2024-05-28T16:59:37
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4683", "html_url": "https://github.com/ollama/ollama/pull/4683", "diff_url": "https://github.com/ollama/ollama/pull/4683.diff", "patch_url": "https://github.com/ollama/ollama/pull/4683.patch", "merged_at": "2024-05-28T16:59:37" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4683/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3394
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3394/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3394/comments
https://api.github.com/repos/ollama/ollama/issues/3394/events
https://github.com/ollama/ollama/issues/3394
2,214,089,998
I_kwDOJ0Z1Ps6D-FUO
3,394
Add support for MobileVLM
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/follower...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-03-28T20:34:09
2024-03-29T02:31:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? MobileVLM v2 is a very promising multimodal model that is already supported by llama.cpp. Here are the 3 versions: 1.7b: https://huggingface.co/mtgv/MobileVLM_V2-1.7B 3b: https://huggingface.co/mtgv/MobileVLM_V2-3B 7b: https://huggingface.co/mtgv/MobileVLM_V2-7B
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3394/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/432
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/432/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/432/comments
https://api.github.com/repos/ollama/ollama/issues/432/events
https://github.com/ollama/ollama/issues/432
1,868,497,921
I_kwDOJ0Z1Ps5vXwQB
432
Which files to copy in order to use model with Ollama on other computer?
{ "login": "ctsrc", "id": 36199671, "node_id": "MDQ6VXNlcjM2MTk5Njcx", "avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ctsrc", "html_url": "https://github.com/ctsrc", "followers_url": "https://api.github.com/users/ctsrc/follow...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
8
2023-08-27T13:31:51
2024-01-07T19:31:58
2023-08-30T00:39:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have two computers with Ollama 0.0.16 installed on both. I downloaded many gigabytes of models on one of them, and then I copied my `~/.ollama/` directory with all of its data from one computer to the other. However, Ollama on the other computer still wants to connect to the internet when I try to run one of th...
{ "login": "ctsrc", "id": 36199671, "node_id": "MDQ6VXNlcjM2MTk5Njcx", "avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ctsrc", "html_url": "https://github.com/ctsrc", "followers_url": "https://api.github.com/users/ctsrc/follow...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/432/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3123
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3123/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3123/comments
https://api.github.com/repos/ollama/ollama/issues/3123/events
https://github.com/ollama/ollama/issues/3123
2,184,772,436
I_kwDOJ0Z1Ps6COPtU
3,123
Windows build script refinements
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-03-13T19:48:31
2024-04-28T19:10:06
2024-04-28T19:10:06
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
- Soften the developer shell requirement if possible so we can build as long as the compiler is in the path - Modularize the generate script using same variables as linux so we can build the various runners discretely - Modularize the outer build script so we can parallelize the overall build
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3123/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3123/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2527
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2527/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2527/comments
https://api.github.com/repos/ollama/ollama/issues/2527/events
https://github.com/ollama/ollama/issues/2527
2,137,562,624
I_kwDOJ0Z1Ps5_aJ4A
2,527
Windows GPU libraries compiled with AVX2 instead of AVX
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-02-15T22:37:13
2024-02-19T21:13:06
2024-02-19T21:13:06
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Even though we're setting: ``` generating config with: cmake -S ../llama.cpp -B ../llama.cpp/build/windows/amd64/cuda_v11.3 -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -A x64 -DCMAKE_VERBOSE_MAKEFILE=on -DLLAMA_SERVER_VERBOSE=on -DLLAMA_CUBLAS=ON -DLLAMA_AVX=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80 ``` The a...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2527/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8184
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8184/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8184/comments
https://api.github.com/repos/ollama/ollama/issues/8184/events
https://github.com/ollama/ollama/issues/8184
2,752,912,841
I_kwDOJ0Z1Ps6kFh3J
8,184
Falcon3 10B in 1.58bit format
{ "login": "thiswillbeyourgithub", "id": 26625900, "node_id": "MDQ6VXNlcjI2NjI1OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/26625900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thiswillbeyourgithub", "html_url": "https://github.com/thiswillbeyourgithub", "followers_url...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-12-20T15:02:17
2025-01-13T01:43:30
2025-01-13T01:43:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm sort of surprised that all variants of Falcon3 have been added very quickly but not the 1.58bit one, and nobody seems to have asked for it. The full 10B model is only 3.99 GB in 1.58bit format according to [their hf repo](https://huggingface.co/tiiuae/Falcon3-10B-Instruct-1.58bit/tree/main), so I think it would be ...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8184/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2240
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2240/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2240/comments
https://api.github.com/repos/ollama/ollama/issues/2240/events
https://github.com/ollama/ollama/issues/2240
2,104,243,882
I_kwDOJ0Z1Ps59bDaq
2,240
How to limit output token generated: Phi model
{ "login": "bm777", "id": 29865600, "node_id": "MDQ6VXNlcjI5ODY1NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bm777", "html_url": "https://github.com/bm777", "followers_url": "https://api.github.com/users/bm777/follow...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
10
2024-01-28T16:28:07
2024-12-20T23:40:28
2024-12-20T23:40:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
From a given context + query, the model generates the answer well, but it is very long -> around `2000 chars`. Is there any way to set `max_output_tokens=200` like the pplx or OpenAI APIs? This is my prompt template: ```js _template = "You are an assistant that delivers short answers to the user inquiry from the provided con...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2240/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2240/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/81
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/81/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/81/comments
https://api.github.com/repos/ollama/ollama/issues/81/events
https://github.com/ollama/ollama/pull/81
1,805,608,793
PR_kwDOJ0Z1Ps5Vjzp7
81
fix race
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-07-14T21:36:02
2023-07-14T22:12:13
2023-07-14T22:12:01
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/81", "html_url": "https://github.com/ollama/ollama/pull/81", "diff_url": "https://github.com/ollama/ollama/pull/81.diff", "patch_url": "https://github.com/ollama/ollama/pull/81.patch", "merged_at": "2023-07-14T22:12:01" }
Block on write, which only returns when the channel is closed. This is contrary to the previous arrangement, where the handler may return before the stream has finished writing. It can lead to the client receiving unexpected responses (since the request has been handled) or, worst case, a nil-pointer dereference as the str...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/81/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/81/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4824
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4824/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4824/comments
https://api.github.com/repos/ollama/ollama/issues/4824/events
https://github.com/ollama/ollama/issues/4824
2,334,777,318
I_kwDOJ0Z1Ps6LKd_m
4,824
Error: llama runner process has terminated: signal: aborted (core dumped)
{ "login": "ignore1999", "id": 64943360, "node_id": "MDQ6VXNlcjY0OTQzMzYw", "avatar_url": "https://avatars.githubusercontent.com/u/64943360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ignore1999", "html_url": "https://github.com/ignore1999", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-06-05T02:36:37
2024-08-07T10:51:28
2024-06-09T17:12:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I run MiniCPM-Llama3-V-2_5, I get an error: "Error: llama runner process has terminated: signal: aborted (core dumped)". This is the case for both versions 0.1.39 and 0.1.41 ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.41
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4824/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4824/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7008
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7008/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7008/comments
https://api.github.com/repos/ollama/ollama/issues/7008/events
https://github.com/ollama/ollama/issues/7008
2,553,626,482
I_kwDOJ0Z1Ps6YNT9y
7,008
/api/embed uses 512 token context window even though model was configured with 8192
{ "login": "khromov", "id": 1207507, "node_id": "MDQ6VXNlcjEyMDc1MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/1207507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khromov", "html_url": "https://github.com/khromov", "followers_url": "https://api.github.com/users/khromov/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-09-27T19:45:54
2024-10-01T23:49:50
2024-10-01T23:49:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm using Continue.dev and have configured the following to generate embeddings: ```json "embeddingsProvider": { "provider": "ollama", "model": "mxbai-embed-large:latest" }, ``` When inspecting the model, we see context is 8192: ``` ollama show --modelfile nomic-embed-te...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7008/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1259
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1259/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1259/comments
https://api.github.com/repos/ollama/ollama/issues/1259/events
https://github.com/ollama/ollama/issues/1259
2,009,070,995
I_kwDOJ0Z1Ps53v_2T
1,259
Missing logprob
{ "login": "ex3ndr", "id": 400659, "node_id": "MDQ6VXNlcjQwMDY1OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/400659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ex3ndr", "html_url": "https://github.com/ex3ndr", "followers_url": "https://api.github.com/users/ex3ndr/follow...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
2
2023-11-24T04:18:38
2025-01-07T19:25:15
2025-01-07T19:25:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
For some reason, there is no way to get the logprob of a completion to measure or visualise network performance; it would be nice to have this in order to build advanced tools for network debugging.
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1259/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1259/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8592
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8592/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8592/comments
https://api.github.com/repos/ollama/ollama/issues/8592/events
https://github.com/ollama/ollama/issues/8592
2,811,574,204
I_kwDOJ0Z1Ps6nlTe8
8,592
ollama fails to detect old models after update
{ "login": "nevakrien", "id": 101988414, "node_id": "U_kgDOBhQ4Pg", "avatar_url": "https://avatars.githubusercontent.com/u/101988414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nevakrien", "html_url": "https://github.com/nevakrien", "followers_url": "https://api.github.com/users/nevakr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2025-01-26T13:53:38
2025-01-26T14:03:02
2025-01-26T14:03:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? So my setup has a symlink for running Ollama models, and I think I have over a terabyte of model weights, so if there is a way to make it so I don't need to download the entire thing again I would be very happy ### OS Linux ### GPU _No response_ ### CPU _No response_ ### Ollama version 0...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8592/timeline
null
duplicate
false
https://api.github.com/repos/ollama/ollama/issues/2237
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2237/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2237/comments
https://api.github.com/repos/ollama/ollama/issues/2237/events
https://github.com/ollama/ollama/issues/2237
2,103,893,583
I_kwDOJ0Z1Ps59Zt5P
2,237
:lady_beetle: Missing model description on `ifioravanti/bagel-hermes`
{ "login": "adriens", "id": 5235127, "node_id": "MDQ6VXNlcjUyMzUxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adriens", "html_url": "https://github.com/adriens", "followers_url": "https://api.github.com/users/adriens/...
[]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-01-28T00:57:04
2024-03-12T18:37:21
2024-03-12T18:37:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
# :grey_question: About [`ifioravanti/bagel-hermes`](https://ollama.ai/ifioravanti/bagel-hermes) is currently missing its description: ![image](https://github.com/ollama/ollama/assets/5235127/96655c3b-8a78-43f2-99af-19420e7c884f) # :pray: Action :point_right: Please : - [ ] Put a short description like f...
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2237/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5770
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5770/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5770/comments
https://api.github.com/repos/ollama/ollama/issues/5770/events
https://github.com/ollama/ollama/issues/5770
2,416,474,514
I_kwDOJ0Z1Ps6QCHmS
5,770
Can we add the new smollm models
{ "login": "psikosen", "id": 5045515, "node_id": "MDQ6VXNlcjUwNDU1MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5045515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psikosen", "html_url": "https://github.com/psikosen", "followers_url": "https://api.github.com/users/psiko...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-07-18T13:57:13
2024-07-23T18:16:38
2024-07-23T18:16:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
These small models would be a valuable addition to ollama. https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5770/timeline
null
completed
false