url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
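Each row below follows the column schema given in the header. As an illustration only, here is a minimal sketch of modeling and filtering such records in Python; the `Issue` dataclass and the choice of fields are assumptions for this example (a small subset of the header's 33 columns), while the sample values are copied from rows in this table:

```python
from dataclasses import dataclass

# Illustrative subset of the columns listed in the header above.
# The dataclass itself is hypothetical, not part of the dataset.
@dataclass
class Issue:
    number: int
    title: str
    state: str            # stringclasses 2: "open" or "closed"
    comments: int
    is_pull_request: bool

# Sample values taken from rows in this table.
rows = [
    Issue(6071, "llama3.1:70b <|eom_id|> missing as stop in template", "closed", 2, False),
    Issue(5059, "Add Vulkan support to ollama", "open", 121, True),
    Issue(4220, "modify template, system,or params on webpage", "open", 5, False),
]

# Filter to open pull requests, as one might when exploring the dataset.
open_prs = [r for r in rows if r.state == "open" and r.is_pull_request]
print([r.number for r in open_prs])  # [5059]
```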
https://api.github.com/repos/ollama/ollama/issues/6071 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6071/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6071/comments | https://api.github.com/repos/ollama/ollama/issues/6071/events | https://github.com/ollama/ollama/issues/6071 | 2,437,582,747 | I_kwDOJ0Z1Ps6RSo-b | 6,071 | llama3.1:70b <|eom_id|> missing as stop in template | {
"login": "SinanAkkoyun",
"id": 43215895,
"node_id": "MDQ6VXNlcjQzMjE1ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/43215895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinanAkkoyun",
"html_url": "https://github.com/SinanAkkoyun",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-30T11:17:27 | 2024-09-04T00:53:12 | 2024-09-04T00:53:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llama3.1:70b outputs <|eom_id|> as a string rather than an input. To test this, make a function call to the model without a 'tool' response message multiple times. 8B hallucinates, 70B wants to refuse, returns an <|eom_id|> but ollama doesn't recognize it.
### OS
Linux
### GPU
Nvidia
### CP... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6071/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3706 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3706/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3706/comments | https://api.github.com/repos/ollama/ollama/issues/3706/events | https://github.com/ollama/ollama/pull/3706 | 2,248,949,001 | PR_kwDOJ0Z1Ps5s9VVW | 3,706 | account for all non-repeating layers | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-17T18:21:39 | 2024-04-17T18:58:21 | 2024-04-17T18:58:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3706",
"html_url": "https://github.com/ollama/ollama/pull/3706",
"diff_url": "https://github.com/ollama/ollama/pull/3706.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3706.patch",
"merged_at": "2024-04-17T18:58:21"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3706/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2116 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2116/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2116/comments | https://api.github.com/repos/ollama/ollama/issues/2116/events | https://github.com/ollama/ollama/pull/2116 | 2,092,261,248 | PR_kwDOJ0Z1Ps5kosQT | 2,116 | Add support for CUDA 5.0 cards | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 6 | 2024-01-20T22:28:09 | 2024-01-31T07:20:36 | 2024-01-27T18:28:38 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2116",
"html_url": "https://github.com/ollama/ollama/pull/2116",
"diff_url": "https://github.com/ollama/ollama/pull/2116.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2116.patch",
"merged_at": "2024-01-27T18:28:38"
} | Building on #2112, this expands back to 5.0 cards, and also adds a few newer targets which theoretically should help performance on the more modern cards. The resulting binary grows a little in size but not significantly
* 0.1.21 => 263M
* #2112 => 264M
* This PR: => 266M
Fixes #1865
I'll keep this draft un... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2116/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2116/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5059 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5059/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5059/comments | https://api.github.com/repos/ollama/ollama/issues/5059/events | https://github.com/ollama/ollama/pull/5059 | 2,354,720,415 | PR_kwDOJ0Z1Ps5yjgmM | 5,059 | Add Vulkan support to ollama | {
"login": "pufferffish",
"id": 74378430,
"node_id": "MDQ6VXNlcjc0Mzc4NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/74378430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pufferffish",
"html_url": "https://github.com/pufferffish",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 121 | 2024-06-15T10:13:25 | 2025-01-29T09:29:17 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5059",
"html_url": "https://github.com/ollama/ollama/pull/5059",
"diff_url": "https://github.com/ollama/ollama/pull/5059.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5059.patch",
"merged_at": null
} | # Edit: (2025/01/19)
It's been around 7 months and ollama devs don't seem to be interested in merging this PR. I'll maintain this fork as a separate project from now on. If you have any issues please raise them in the fork's repo so I can keep track of them.
#
This PR adds vulkan support to ollama with a proper... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5059/reactions",
"total_count": 140,
"+1": 53,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 59,
"rocket": 28,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5059/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/331 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/331/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/331/comments | https://api.github.com/repos/ollama/ollama/issues/331/events | https://github.com/ollama/ollama/issues/331 | 1,847,107,551 | I_kwDOJ0Z1Ps5uGJ_f | 331 | do not regenerate embeddings layers when not changed | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-08-11T16:25:23 | 2023-08-15T19:10:25 | 2023-08-15T19:10:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/331/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1147 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1147/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1147/comments | https://api.github.com/repos/ollama/ollama/issues/1147/events | https://github.com/ollama/ollama/pull/1147 | 1,995,855,604 | PR_kwDOJ0Z1Ps5fk54z | 1,147 | add faq for proxies | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-11-16T01:16:03 | 2024-02-12T13:58:14 | 2023-11-16T16:43:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1147",
"html_url": "https://github.com/ollama/ollama/pull/1147",
"diff_url": "https://github.com/ollama/ollama/pull/1147.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1147.patch",
"merged_at": "2023-11-16T16:43:37"
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1147/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6132/comments | https://api.github.com/repos/ollama/ollama/issues/6132/events | https://github.com/ollama/ollama/pull/6132 | 2,443,639,168 | PR_kwDOJ0Z1Ps53L2hI | 6,132 | Fix OpenAI models/{model} URL | {
"login": "thinkverse",
"id": 2221746,
"node_id": "MDQ6VXNlcjIyMjE3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2221746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thinkverse",
"html_url": "https://github.com/thinkverse",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2024-08-01T23:05:20 | 2024-08-01T23:31:48 | 2024-08-01T23:31:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6132",
"html_url": "https://github.com/ollama/ollama/pull/6132",
"diff_url": "https://github.com/ollama/ollama/pull/6132.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6132.patch",
"merged_at": "2024-08-01T23:31:47"
} | I added a note about this after the PR was merged @royjhan. | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6132/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4220 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4220/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4220/comments | https://api.github.com/repos/ollama/ollama/issues/4220/events | https://github.com/ollama/ollama/issues/4220 | 2,282,254,626 | I_kwDOJ0Z1Ps6ICHEi | 4,220 | modify template, system,or params on webpage | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-05-07T04:36:28 | 2024-05-09T02:32:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | after pushing a model, sometimes I hope to modify template, system, or params on webpage. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4220/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7845 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7845/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7845/comments | https://api.github.com/repos/ollama/ollama/issues/7845/events | https://github.com/ollama/ollama/issues/7845 | 2,695,932,637 | I_kwDOJ0Z1Ps6gsKrd | 7,845 | Support for rx 580 | {
"login": "alibahal5",
"id": 112767677,
"node_id": "U_kgDOBriyvQ",
"avatar_url": "https://avatars.githubusercontent.com/u/112767677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alibahal5",
"html_url": "https://github.com/alibahal5",
"followers_url": "https://api.github.com/users/alibah... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-26T20:00:06 | 2024-12-02T15:39:27 | 2024-12-02T15:39:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | hi
please add support for rx580 or if there is a way to use it tell me.
thanks. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7845/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/522 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/522/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/522/comments | https://api.github.com/repos/ollama/ollama/issues/522/events | https://github.com/ollama/ollama/pull/522 | 1,893,532,942 | PR_kwDOJ0Z1Ps5aLrOP | 522 | add a simple python client to access ollama | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-13T00:38:52 | 2023-09-15T04:25:10 | 2023-09-14T23:37:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/522",
"html_url": "https://github.com/ollama/ollama/pull/522",
"diff_url": "https://github.com/ollama/ollama/pull/522.diff",
"patch_url": "https://github.com/ollama/ollama/pull/522.patch",
"merged_at": "2023-09-14T23:37:38"
These are some simple python bindings for interacting with the local Ollama server. Most of the functions should be pretty straightforward, and each of the streaming endpoints has a default way of handling the output but can be passed in a "callback" function to override the default.
The callback functions can be a... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/522/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/336 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/336/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/336/comments | https://api.github.com/repos/ollama/ollama/issues/336/events | https://github.com/ollama/ollama/issues/336 | 1,847,340,064 | I_kwDOJ0Z1Ps5uHCwg | 336 | model names should be case insensitive | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 10 | 2023-08-11T19:46:59 | 2024-12-23T02:18:21 | 2024-12-23T00:52:24 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
ollama pull wizard-math:13b-q3_k_m
```
should pull the same image as
```
ollama pull wizard-math:13b-q3_K_M
``` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/336/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4552 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4552/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4552/comments | https://api.github.com/repos/ollama/ollama/issues/4552/events | https://github.com/ollama/ollama/issues/4552 | 2,307,398,787 | I_kwDOJ0Z1Ps6JiByD | 4,552 | Please support LoongArch ISA | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 6 | 2024-05-21T06:09:40 | 2024-05-29T22:52:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/ggerganov/llama.cpp/pull/6454#issuecomment-2119832472 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4552/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1692 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1692/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1692/comments | https://api.github.com/repos/ollama/ollama/issues/1692/events | https://github.com/ollama/ollama/issues/1692 | 2,054,944,902 | I_kwDOJ0Z1Ps56e_iG | 1,692 | Mac OS Sonoma crashes completely when loading LLM | {
"login": "sanctimon",
"id": 49374753,
"node_id": "MDQ6VXNlcjQ5Mzc0NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/49374753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanctimon",
"html_url": "https://github.com/sanctimon",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2023-12-24T00:30:55 | 2024-01-25T22:50:56 | 2024-01-25T22:50:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have pulled the model (dolphin-mixtral:latest) and when I attempt to run, the entire machine freezes. A few minutes later it restarts.
Specs: MacBook Pro M1 Pro. 16GB RAM.
With Activity Monitor on, it seems to be filling up the RAM quite quickly before the crash. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1692/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6992/comments | https://api.github.com/repos/ollama/ollama/issues/6992/events | https://github.com/ollama/ollama/issues/6992 | 2,551,818,342 | I_kwDOJ0Z1Ps6YGahm | 6,992 | /chat endpoint is not coherent | {
"login": "alchymia-sys",
"id": 176962419,
"node_id": "U_kgDOCow7cw",
"avatar_url": "https://avatars.githubusercontent.com/u/176962419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alchymia-sys",
"html_url": "https://github.com/alchymia-sys",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-09-27T01:42:39 | 2024-10-02T00:00:27 | 2024-10-02T00:00:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I observed something with the /chat API endpoint: the model is always not coherent when having a conversation. However, when chatting with the model via the terminal using ollama run llama3.2 for instance, I find that the model is coherent.
/chat endpoint
User: Hello there
Model: It looks l... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6992/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/790 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/790/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/790/comments | https://api.github.com/repos/ollama/ollama/issues/790/events | https://github.com/ollama/ollama/issues/790 | 1,943,037,061 | I_kwDOJ0Z1Ps5z0GSF | 790 | "out of memory" when using CUDA | {
"login": "konstantin1722",
"id": 55327489,
"node_id": "MDQ6VXNlcjU1MzI3NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/55327489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstantin1722",
"html_url": "https://github.com/konstantin1722",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 11 | 2023-10-14T07:12:18 | 2024-03-25T09:53:31 | 2023-10-27T00:49:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I reinstalled ollama, after merge #724, now the error is gone on startup. At startup, it automatically calculates the number of layers that will be loaded into VRAM, but it does so incorrectly, which ultimately results in VRAM not being used at all.
I run the model `nous-hermes:13b-llama2`, after that I get this log... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/790/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/790/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3442 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3442/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3442/comments | https://api.github.com/repos/ollama/ollama/issues/3442/events | https://github.com/ollama/ollama/pull/3442 | 2,219,035,743 | PR_kwDOJ0Z1Ps5rW9A4 | 3,442 | fix generate output | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-01T20:47:47 | 2024-04-01T20:56:10 | 2024-04-01T20:56:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3442",
"html_url": "https://github.com/ollama/ollama/pull/3442",
"diff_url": "https://github.com/ollama/ollama/pull/3442.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3442.patch",
"merged_at": "2024-04-01T20:56:09"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3442/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3432 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3432/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3432/comments | https://api.github.com/repos/ollama/ollama/issues/3432/events | https://github.com/ollama/ollama/issues/3432 | 2,217,416,828 | I_kwDOJ0Z1Ps6EKxh8 | 3,432 | Partial Deletion of Chinese Characters in Ollama Project | {
"login": "TanyaShue",
"id": 152378303,
"node_id": "U_kgDOCRUbvw",
"avatar_url": "https://avatars.githubusercontent.com/u/152378303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TanyaShue",
"html_url": "https://github.com/TanyaShue",
"followers_url": "https://api.github.com/users/TanyaS... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6960960225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnufS4Q... | closed | false | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.... | null | 2 | 2024-04-01T03:03:45 | 2024-05-28T19:04:04 | 2024-05-28T19:04:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Description:
When running the Ollama project (version 0.1.30) on Windows 11 using PowerShell or Windows Terminal, I’ve encountered an issue where only half of each Chinese character is deleted when attempting to remove it. The steps to reproduce the issue are as follows:
Launch Ollama wi... | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3432/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6530 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6530/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6530/comments | https://api.github.com/repos/ollama/ollama/issues/6530/events | https://github.com/ollama/ollama/pull/6530 | 2,490,133,550 | PR_kwDOJ0Z1Ps55nUGk | 6,530 | fix: comment typo | {
"login": "seankhatiri",
"id": 39913795,
"node_id": "MDQ6VXNlcjM5OTEzNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/39913795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seankhatiri",
"html_url": "https://github.com/seankhatiri",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-08-27T19:06:21 | 2024-08-27T21:43:43 | 2024-08-27T20:28:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6530",
"html_url": "https://github.com/ollama/ollama/pull/6530",
"diff_url": "https://github.com/ollama/ollama/pull/6530.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6530.patch",
"merged_at": "2024-08-27T20:28:30"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6530/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/388 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/388/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/388/comments | https://api.github.com/repos/ollama/ollama/issues/388/events | https://github.com/ollama/ollama/issues/388 | 1,858,040,708 | I_kwDOJ0Z1Ps5uv3OE | 388 | Chat with documents | {
"login": "mohdyasa",
"id": 29009636,
"node_id": "MDQ6VXNlcjI5MDA5NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/29009636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohdyasa",
"html_url": "https://github.com/mohdyasa",
"followers_url": "https://api.github.com/users/moh... | [] | closed | false | null | [] | null | 3 | 2023-08-20T08:35:27 | 2023-08-21T16:55:31 | 2023-08-21T16:55:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is it possible to chat with documents (pdf, doc, etc.) using this solution? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/388/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7493 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7493/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7493/comments | https://api.github.com/repos/ollama/ollama/issues/7493/events | https://github.com/ollama/ollama/pull/7493 | 2,633,353,310 | PR_kwDOJ0Z1Ps6A08xj | 7,493 | Sign windows arm64 official binaries | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-11-04T17:03:32 | 2024-11-04T17:15:17 | 2024-11-04T17:15:14 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7493",
"html_url": "https://github.com/ollama/ollama/pull/7493",
"diff_url": "https://github.com/ollama/ollama/pull/7493.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7493.patch",
"merged_at": "2024-11-04T17:15:14"
} | This will ensure the zip file artifact contains signed binaries. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7493/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7875 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7875/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7875/comments | https://api.github.com/repos/ollama/ollama/issues/7875/events | https://github.com/ollama/ollama/pull/7875 | 2,703,097,006 | PR_kwDOJ0Z1Ps6DgSMo | 7,875 | llama: update llama.cpp and ggml to `40c6d79f` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-11-28T19:28:08 | 2024-12-11T03:21:36 | 2024-12-11T03:21:34 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7875",
"html_url": "https://github.com/ollama/ollama/pull/7875",
"diff_url": "https://github.com/ollama/ollama/pull/7875.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7875.patch",
"merged_at": "2024-12-11T03:21:34"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7875/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7875/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8273 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8273/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8273/comments | https://api.github.com/repos/ollama/ollama/issues/8273/events | https://github.com/ollama/ollama/issues/8273 | 2,764,131,849 | I_kwDOJ0Z1Ps6kwU4J | 8,273 | Token counting | {
"login": "mjaniec2013",
"id": 5925782,
"node_id": "MDQ6VXNlcjU5MjU3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5925782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjaniec2013",
"html_url": "https://github.com/mjaniec2013",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-12-31T08:58:10 | 2025-01-01T00:21:28 | 2025-01-01T00:21:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would it be possible to add model-dependent token-counting functionality to Ollama?
Currently, counting tokens requires using the HF AutoTokenizer and a duplicate installation of the models.
| {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8273/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/6849 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6849/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6849/comments | https://api.github.com/repos/ollama/ollama/issues/6849/events | https://github.com/ollama/ollama/pull/6849 | 2,532,322,555 | PR_kwDOJ0Z1Ps57019R | 6,849 | server: add tool parsing support for nemotron-mini | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-09-17T22:57:11 | 2024-09-18T01:06:18 | 2024-09-18T01:06:16 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6849",
"html_url": "https://github.com/ollama/ollama/pull/6849",
"diff_url": "https://github.com/ollama/ollama/pull/6849.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6849.patch",
"merged_at": "2024-09-18T01:06:16"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6849/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8242 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8242/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8242/comments | https://api.github.com/repos/ollama/ollama/issues/8242/events | https://github.com/ollama/ollama/pull/8242 | 2,759,087,358 | PR_kwDOJ0Z1Ps6GOD6Z | 8,242 | README: add neollama to terminal section of community integrations | {
"login": "paradoxical-dev",
"id": 150468425,
"node_id": "U_kgDOCPf3SQ",
"avatar_url": "https://avatars.githubusercontent.com/u/150468425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paradoxical-dev",
"html_url": "https://github.com/paradoxical-dev",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 1 | 2024-12-25T20:47:22 | 2024-12-25T22:16:18 | 2024-12-25T22:16:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8242",
"html_url": "https://github.com/ollama/ollama/pull/8242",
"diff_url": "https://github.com/ollama/ollama/pull/8242.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8242.patch",
"merged_at": "2024-12-25T22:16:12"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8242/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3989 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3989/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3989/comments | https://api.github.com/repos/ollama/ollama/issues/3989/events | https://github.com/ollama/ollama/issues/3989 | 2,267,312,478 | I_kwDOJ0Z1Ps6HJHFe | 3,989 | Please add this model | {
"login": "OE-LUCIFER",
"id": 158988478,
"node_id": "U_kgDOCXn4vg",
"avatar_url": "https://avatars.githubusercontent.com/u/158988478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OE-LUCIFER",
"html_url": "https://github.com/OE-LUCIFER",
"followers_url": "https://api.github.com/users/OE-... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-04-28T03:08:04 | 2024-04-28T13:13:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/mradermacher/DevsDoCode-LLama-3-8b-Uncensored-GGUF | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3989/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3989/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4764 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4764/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4764/comments | https://api.github.com/repos/ollama/ollama/issues/4764/events | https://github.com/ollama/ollama/issues/4764 | 2,329,064,674 | I_kwDOJ0Z1Ps6K0rTi | 4,764 | ollama stop [id of running model] | {
"login": "mrdev023",
"id": 11292703,
"node_id": "MDQ6VXNlcjExMjkyNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11292703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrdev023",
"html_url": "https://github.com/mrdev023",
"followers_url": "https://api.github.com/users/mrd... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-06-01T10:30:29 | 2024-09-11T23:37:01 | 2024-09-11T23:37:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Sometimes, when I use external tools with Ollama, some models continue to run after the application exits.
It would be useful to have a command like this:
```bash
ollama stop [id of running model]
```
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4764/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4764/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8508 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8508/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8508/comments | https://api.github.com/repos/ollama/ollama/issues/8508/events | https://github.com/ollama/ollama/pull/8508 | 2,800,355,360 | PR_kwDOJ0Z1Ps6IahSH | 8,508 | Set up CI with Azure Pipelines | {
"login": "enzoxic",
"id": 157711992,
"node_id": "U_kgDOCWZ-eA",
"avatar_url": "https://avatars.githubusercontent.com/u/157711992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoxic",
"html_url": "https://github.com/enzoxic",
"followers_url": "https://api.github.com/users/enzoxic/foll... | [] | closed | false | null | [] | null | 1 | 2025-01-20T22:27:08 | 2025-01-27T17:57:09 | 2025-01-27T17:57:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8508",
"html_url": "https://github.com/ollama/ollama/pull/8508",
"diff_url": "https://github.com/ollama/ollama/pull/8508.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8508.patch",
"merged_at": null
} | First pipeline [skip ci] | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8508/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6710 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6710/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6710/comments | https://api.github.com/repos/ollama/ollama/issues/6710/events | https://github.com/ollama/ollama/issues/6710 | 2,513,421,265 | I_kwDOJ0Z1Ps6Vz8PR | 6,710 | Docker: P8 State Power Usage double with 0.3.8+ | {
"login": "t3chn0m4g3",
"id": 4318452,
"node_id": "MDQ6VXNlcjQzMTg0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4318452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t3chn0m4g3",
"html_url": "https://github.com/t3chn0m4g3",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-09-09T09:24:44 | 2024-09-10T09:10:01 | 2024-09-09T18:05:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Versions of `ollama/ollama` < 0.3.8 had a P8 state power usage of about 4-5W. With 0.3.8+ this usage doubled to 9-10W.
Running two Nvidia RTX 4060 Ti cards, the issue is 100% reproducible. Resetting the driver with `nvidia-smi -r` and then starting `ollama/ollama:0.3.7` shows the expected 4-5W... | {
"login": "t3chn0m4g3",
"id": 4318452,
"node_id": "MDQ6VXNlcjQzMTg0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4318452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t3chn0m4g3",
"html_url": "https://github.com/t3chn0m4g3",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6710/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6710/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7282 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7282/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7282/comments | https://api.github.com/repos/ollama/ollama/issues/7282/events | https://github.com/ollama/ollama/pull/7282 | 2,601,082,609 | PR_kwDOJ0Z1Ps5_P5zH | 7,282 | Next Version | {
"login": "cabelo",
"id": 675645,
"node_id": "MDQ6VXNlcjY3NTY0NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/675645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cabelo",
"html_url": "https://github.com/cabelo",
"followers_url": "https://api.github.com/users/cabelo/follow... | [] | closed | false | null | [] | null | 1 | 2024-10-21T01:49:18 | 2024-10-22T00:45:56 | 2024-10-22T00:45:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7282",
"html_url": "https://github.com/ollama/ollama/pull/7282",
"diff_url": "https://github.com/ollama/ollama/pull/7282.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7282.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7282/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6723 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6723/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6723/comments | https://api.github.com/repos/ollama/ollama/issues/6723/events | https://github.com/ollama/ollama/issues/6723 | 2,515,859,359 | I_kwDOJ0Z1Ps6V9Pef | 6,723 | How to change the system memory folder ? | {
"login": "mdabir1203",
"id": 66947064,
"node_id": "MDQ6VXNlcjY2OTQ3MDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/66947064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdabir1203",
"html_url": "https://github.com/mdabir1203",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-09-10T09:07:18 | 2024-09-12T01:04:25 | 2024-09-12T01:04:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried to run Llama 3.1 with Ollama and I am getting this:

I have enough disk space, but what are the reasons behind this?
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama ver... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6723/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5992/comments | https://api.github.com/repos/ollama/ollama/issues/5992/events | https://github.com/ollama/ollama/pull/5992 | 2,432,932,950 | PR_kwDOJ0Z1Ps52niPk | 5,992 | fix: model save | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-26T20:26:10 | 2024-07-29T16:53:21 | 2024-07-29T16:53:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5992",
"html_url": "https://github.com/ollama/ollama/pull/5992",
"diff_url": "https://github.com/ollama/ollama/pull/5992.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5992.patch",
"merged_at": "2024-07-29T16:53:19"
} | stop parameter is saved as a slice which is incompatible with modelfile parsing. this change saves stop parameters as individual parameters
e.g.
```
/set parameter stop word1 word2
/save new-model
```
produces `PARAMETER stop [word1 word2]`
Instead, it should produce:
```
PARAMETER stop word1
PARAMETE... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5992/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3009 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3009/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3009/comments | https://api.github.com/repos/ollama/ollama/issues/3009/events | https://github.com/ollama/ollama/issues/3009 | 2,176,547,638 | I_kwDOJ0Z1Ps6Bu3s2 | 3,009 | feat: add "unload model" command/endpoint | {
"login": "knoopx",
"id": 100993,
"node_id": "MDQ6VXNlcjEwMDk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/100993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knoopx",
"html_url": "https://github.com/knoopx",
"followers_url": "https://api.github.com/users/knoopx/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 7 | 2024-03-08T18:22:12 | 2024-03-14T12:45:00 | 2024-03-12T21:05:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There's no way to unload a model from VRAM other than killing/restarting ollama and that requires local system access and privileges. Given ollama is mostly used on limited devices, a command/api endpoint would be fantastic. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3009/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7755 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7755/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7755/comments | https://api.github.com/repos/ollama/ollama/issues/7755/events | https://github.com/ollama/ollama/issues/7755 | 2,674,384,297 | I_kwDOJ0Z1Ps6fZ92p | 7,755 | Proper way to train model on my data and load into Ollama? | {
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/roboto... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-11-20T04:48:00 | 2025-01-13T01:28:42 | 2025-01-13T01:28:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As I've mentioned in the title, I have some huge text-based documents which exceed typical context windows, even on large machines with large models (e.g. 405B). Is there a way I could train llama 3.1:8B (for example) on these docs and then load it into Ollama and ask the model about them? Thank you! | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7755/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2641 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2641/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2641/comments | https://api.github.com/repos/ollama/ollama/issues/2641/events | https://github.com/ollama/ollama/issues/2641 | 2,147,182,989 | I_kwDOJ0Z1Ps5_-2mN | 2,641 | GPU sometimes detected, sometimes not (Windows beta) | {
"login": "CrispStrobe",
"id": 154636388,
"node_id": "U_kgDOCTeQZA",
"avatar_url": "https://avatars.githubusercontent.com/u/154636388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CrispStrobe",
"html_url": "https://github.com/CrispStrobe",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-02-21T16:25:11 | 2024-04-16T16:13:32 | 2024-04-16T16:13:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Using ollama 0.1.25 under Windows, sometimes my GPU (A1000) is detected:
From server.log:
time=2024-02-21T17:04:44.912+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-21T17:04:44.912+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.6"
time=2024-02-21T17:04:44.912+... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2641/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2868 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2868/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2868/comments | https://api.github.com/repos/ollama/ollama/issues/2868/events | https://github.com/ollama/ollama/pull/2868 | 2,164,274,800 | PR_kwDOJ0Z1Ps5odNj6 | 2,868 | Update llama.cpp submodule to `c29af7e` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-01T22:14:27 | 2024-03-01T23:26:05 | 2024-03-01T23:26:04 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2868",
"html_url": "https://github.com/ollama/ollama/pull/2868",
"diff_url": "https://github.com/ollama/ollama/pull/2868.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2868.patch",
"merged_at": "2024-03-01T23:26:04"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2868/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8229 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8229/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8229/comments | https://api.github.com/repos/ollama/ollama/issues/8229/events | https://github.com/ollama/ollama/issues/8229 | 2,757,475,666 | I_kwDOJ0Z1Ps6kW71S | 8,229 | Have a Happy few days... at least... thank you! all, stay Great. | {
"login": "YonTracks",
"id": 93984913,
"node_id": "U_kgDOBZoYkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93984913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YonTracks",
"html_url": "https://github.com/YonTracks",
"followers_url": "https://api.github.com/users/YonTrack... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-24T09:20:45 | 2024-12-25T22:26:55 | 2024-12-25T22:26:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Have a happy few days, whatever you're doing. "Thank you all for your incredible support and kindness! You're doing great things; the year has been epic but is not over yet." `ollama` love! You got this. | {
"login": "YonTracks",
"id": 93984913,
"node_id": "U_kgDOBZoYkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93984913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YonTracks",
"html_url": "https://github.com/YonTracks",
"followers_url": "https://api.github.com/users/YonTrack... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8229/reactions",
"total_count": 5,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8229/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3603 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3603/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3603/comments | https://api.github.com/repos/ollama/ollama/issues/3603/events | https://github.com/ollama/ollama/pull/3603 | 2,238,416,151 | PR_kwDOJ0Z1Ps5sZdUy | 3,603 | updates | {
"login": "ozalperen",
"id": 36607745,
"node_id": "MDQ6VXNlcjM2NjA3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/36607745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozalperen",
"html_url": "https://github.com/ozalperen",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2024-04-11T19:29:57 | 2024-04-11T19:33:36 | 2024-04-11T19:33:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3603",
"html_url": "https://github.com/ollama/ollama/pull/3603",
"diff_url": "https://github.com/ollama/ollama/pull/3603.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3603.patch",
"merged_at": null
} | null | {
"login": "ozalperen",
"id": 36607745,
"node_id": "MDQ6VXNlcjM2NjA3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/36607745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozalperen",
"html_url": "https://github.com/ozalperen",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3603/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/774 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/774/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/774/comments | https://api.github.com/repos/ollama/ollama/issues/774/events | https://github.com/ollama/ollama/pull/774 | 1,940,976,282 | PR_kwDOJ0Z1Ps5cr_x3 | 774 | add version api and show server version in cli | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-10-12T23:54:56 | 2023-12-06T21:22:57 | 2023-12-06T21:22:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/774",
"html_url": "https://github.com/ollama/ollama/pull/774",
"diff_url": "https://github.com/ollama/ollama/pull/774.diff",
"patch_url": "https://github.com/ollama/ollama/pull/774.patch",
"merged_at": "2023-12-06T21:22:56"
} | some minor refactor of the cmd package
Example: server and client are the same version
```
$ ollama --version
Your ollama version 0.0.0
```
Example: server and client have different versions
```
$ ollama --version
Your ollama version 99.99.99999
Warning: Your client version is 0.0.0
```
Example: serve... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/774/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/774/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6348 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6348/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6348/comments | https://api.github.com/repos/ollama/ollama/issues/6348/events | https://github.com/ollama/ollama/issues/6348 | 2,464,469,762 | I_kwDOJ0Z1Ps6S5NMC | 6,348 | Mistral 7B, running on CPU only - can't fix it | {
"login": "openSourcerer9000",
"id": 61931826,
"node_id": "MDQ6VXNlcjYxOTMxODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/61931826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/openSourcerer9000",
"html_url": "https://github.com/openSourcerer9000",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-13T23:32:18 | 2024-08-15T13:57:18 | 2024-08-13T23:33:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running Mistral 7B Instruct, simple prompts take tens of minutes. Task Manager shows the CPU under heavy use while the GPU is doing nothing. I can run the model with quantization normally outside of ollama. How can I force ollama to use the GPU?
This is my code:
```
from langchain_ollama import ChatOllama
misty ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6348/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7915 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7915/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7915/comments | https://api.github.com/repos/ollama/ollama/issues/7915/events | https://github.com/ollama/ollama/pull/7915 | 2,714,241,746 | PR_kwDOJ0Z1Ps6D2d-a | 7,915 | llama/llama.go: Remove some duplicates | {
"login": "MeiK2333",
"id": 20951666,
"node_id": "MDQ6VXNlcjIwOTUxNjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20951666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MeiK2333",
"html_url": "https://github.com/MeiK2333",
"followers_url": "https://api.github.com/users/Mei... | [] | closed | false | null | [] | null | 2 | 2024-12-03T07:44:18 | 2024-12-10T21:47:05 | 2024-12-10T21:47:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7915",
"html_url": "https://github.com/ollama/ollama/pull/7915",
"diff_url": "https://github.com/ollama/ollama/pull/7915.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7915.patch",
"merged_at": null
} | Remove some duplicates | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7915/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7526 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7526/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7526/comments | https://api.github.com/repos/ollama/ollama/issues/7526/events | https://github.com/ollama/ollama/issues/7526 | 2,638,310,037 | I_kwDOJ0Z1Ps6dQWqV | 7,526 | 500 Error in Ollama API After LLM Computation Exceeds 2 Minutes | {
"login": "ALLMI78",
"id": 184701677,
"node_id": "U_kgDOCwJS7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/184701677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALLMI78",
"html_url": "https://github.com/ALLMI78",
"followers_url": "https://api.github.com/users/ALLMI78/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 40 | 2024-11-06T14:48:14 | 2024-11-10T23:41:13 | 2024-11-07T18:55:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Sorry, I'm new to GitHub, but I ran into this problem and found no solution...
When processing requests with any model in Ollama, a 500 Internal Server Error consistently occurs whenever the LLM computation exceeds exactly 2 minutes. This happens regardless of the model size or GPU/CPU usage, indicating ... | {
"login": "ALLMI78",
"id": 184701677,
"node_id": "U_kgDOCwJS7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/184701677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALLMI78",
"html_url": "https://github.com/ALLMI78",
"followers_url": "https://api.github.com/users/ALLMI78/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7526/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3389 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3389/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3389/comments | https://api.github.com/repos/ollama/ollama/issues/3389/events | https://github.com/ollama/ollama/issues/3389 | 2,213,572,510 | I_kwDOJ0Z1Ps6D8G-e | 3,389 | Custom models in API | {
"login": "CHesketh76",
"id": 38713764,
"node_id": "MDQ6VXNlcjM4NzEzNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/38713764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CHesketh76",
"html_url": "https://github.com/CHesketh76",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-03-28T15:53:34 | 2024-05-15T00:47:03 | 2024-05-15T00:47:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
After creating a Modelfile for a custom model, please add the ability to use curl to query the model.
```
curl http://localhost:11434/api/chat -d '{
"model": "customModel.gguf",
"messages": [
{ "role": "user", "content": "why is the sky blue?" }
]
}'
```
If this is alre... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3389/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3389/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6561 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6561/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6561/comments | https://api.github.com/repos/ollama/ollama/issues/6561/events | https://github.com/ollama/ollama/issues/6561 | 2,495,532,046 | I_kwDOJ0Z1Ps6UvswO | 6,561 | Inconsistent API Behavior | {
"login": "negaralizadeh",
"id": 28723076,
"node_id": "MDQ6VXNlcjI4NzIzMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/28723076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/negaralizadeh",
"html_url": "https://github.com/negaralizadeh",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-08-29T20:36:57 | 2024-08-30T08:39:33 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm calling the generate API as follows:
```
url = 'http://localhost:11434/api/generate'
data = {
"model": model_name,
"stream": False,
"options": {
"temperature": 0.2,
"top_p": 0.8,
"seed": 42,
"num_predict": 300,
},
"system": set_role()
}
r... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6561/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1775 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1775/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1775/comments | https://api.github.com/repos/ollama/ollama/issues/1775/events | https://github.com/ollama/ollama/pull/1775 | 2,064,633,517 | PR_kwDOJ0Z1Ps5jK5tA | 1,775 | docs: add Msty app in readme | {
"login": "nikeshparajuli",
"id": 47485043,
"node_id": "MDQ6VXNlcjQ3NDg1MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/47485043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikeshparajuli",
"html_url": "https://github.com/nikeshparajuli",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 3 | 2024-01-03T20:53:26 | 2024-02-20T19:03:34 | 2024-02-20T19:03:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1775",
"html_url": "https://github.com/ollama/ollama/pull/1775",
"diff_url": "https://github.com/ollama/ollama/pull/1775.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1775.patch",
"merged_at": "2024-02-20T19:03:33"
} | ERROR: type should be string, got "\r\nhttps://github.com/jmorganca/ollama/assets/47485043/d402e724-5aa4-4d60-92b0-fecc30143c9f\r\n" | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1775/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3107 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3107/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3107/comments | https://api.github.com/repos/ollama/ollama/issues/3107/events | https://github.com/ollama/ollama/issues/3107 | 2,184,210,063 | I_kwDOJ0Z1Ps6CMGaP | 3,107 | Windows Rocm: HSA_OVERRIDE_GFX_VERSION doesn't work | {
"login": "Detlev1",
"id": 71934197,
"node_id": "MDQ6VXNlcjcxOTM0MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71934197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Detlev1",
"html_url": "https://github.com/Detlev1",
"followers_url": "https://api.github.com/users/Detlev... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 30 | 2024-03-13T14:51:31 | 2024-09-03T19:52:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm eager to explore the new Windows ROCm compatibility feature, but I'm encountering an issue with forcing the GFX version. Currently, I'm using the 0.1.29 pre-release.
My setup includes an RX 6600 XT (GFX1032), which isn't fully supported in the ROCm library. According to the troubleshooting guide available at htt... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3107/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3016 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3016/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3016/comments | https://api.github.com/repos/ollama/ollama/issues/3016/events | https://github.com/ollama/ollama/issues/3016 | 2,177,001,843 | I_kwDOJ0Z1Ps6Bwmlz | 3,016 | Request to http api from chrome extension not allowed in window | {
"login": "DominguitoLamo",
"id": 32754897,
"node_id": "MDQ6VXNlcjMyNzU0ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/32754897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DominguitoLamo",
"html_url": "https://github.com/DominguitoLamo",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 1 | 2024-03-09T01:53:49 | 2024-03-11T22:15:38 | 2024-03-11T22:15:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I got a 403 while requesting the ollama API on localhost from the background.js of a Chrome extension. The problem seems to be caused by a CORS restriction. However, the ollama source code indicates that the default config does allow browser extensions to request ollama on localhost. Is this a problem with the Windows version? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3016/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1183 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1183/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1183/comments | https://api.github.com/repos/ollama/ollama/issues/1183/events | https://github.com/ollama/ollama/pull/1183 | 2,000,005,215 | PR_kwDOJ0Z1Ps5fzDh1 | 1,183 | Add Rivet to Community Integrations | {
"login": "abrenneke",
"id": 342540,
"node_id": "MDQ6VXNlcjM0MjU0MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/342540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abrenneke",
"html_url": "https://github.com/abrenneke",
"followers_url": "https://api.github.com/users/abre... | [] | closed | false | null | [] | null | 0 | 2023-11-17T22:09:04 | 2023-11-20T15:36:47 | 2023-11-20T15:36:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1183",
"html_url": "https://github.com/ollama/ollama/pull/1183",
"diff_url": "https://github.com/ollama/ollama/pull/1183.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1183.patch",
"merged_at": "2023-11-20T15:36:47"
} | Adds the [Ollama plugin](https://github.com/abrenneke/rivet-plugin-ollama) for [Rivet](https://rivet.ironcladapp.com/) to the community integrations list (Extensions & Plugins section) | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1183/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4525 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4525/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4525/comments | https://api.github.com/repos/ollama/ollama/issues/4525/events | https://github.com/ollama/ollama/pull/4525 | 2,304,764,022 | PR_kwDOJ0Z1Ps5v5Qez | 4,525 | Exposing grammar as a request parameter in completion/chat with go-side grammar validation | {
"login": "richardanaya",
"id": 294042,
"node_id": "MDQ6VXNlcjI5NDA0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/294042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardanaya",
"html_url": "https://github.com/richardanaya",
"followers_url": "https://api.github.com/u... | [] | closed | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 24 | 2024-05-19T20:11:16 | 2024-12-05T00:40:26 | 2024-12-05T00:40:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4525",
"html_url": "https://github.com/ollama/ollama/pull/4525",
"diff_url": "https://github.com/ollama/ollama/pull/4525.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4525.patch",
"merged_at": null
} | **Why is passing down grammars needed?**
Relying on the context of a prompt to dictate structure can be unreliable (because it's dependent on the model and generation randomness) and takes up context space. Grammar is a well-proven way to constrain generated output, and in fact `format="JSON"` even depends ... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4525/reactions",
"total_count": 16,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4525/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3461 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3461/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3461/comments | https://api.github.com/repos/ollama/ollama/issues/3461/events | https://github.com/ollama/ollama/pull/3461 | 2,221,048,410 | PR_kwDOJ0Z1Ps5rd1pA | 3,461 | feat: add OLLAMA_DEBUG in ollama serve help message | {
"login": "cesto93",
"id": 56308444,
"node_id": "MDQ6VXNlcjU2MzA4NDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/56308444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cesto93",
"html_url": "https://github.com/cesto93",
"followers_url": "https://api.github.com/users/cesto9... | [] | closed | false | null | [] | null | 0 | 2024-04-02T17:22:52 | 2024-04-03T01:20:03 | 2024-04-03T01:20:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3461",
"html_url": "https://github.com/ollama/ollama/pull/3461",
"diff_url": "https://github.com/ollama/ollama/pull/3461.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3461.patch",
"merged_at": "2024-04-03T01:20:03"
} | Hello,
I added the OLLAMA_DEBUG flag to the ollama serve help message.
This solves the following issue: https://github.com/ollama/ollama/issues/3401 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3461/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3461/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8224 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8224/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8224/comments | https://api.github.com/repos/ollama/ollama/issues/8224/events | https://github.com/ollama/ollama/pull/8224 | 2,757,079,139 | PR_kwDOJ0Z1Ps6GIgGH | 8,224 | Add back fallback CPU runner | {
"login": "hack3ric",
"id": 18899791,
"node_id": "MDQ6VXNlcjE4ODk5Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/18899791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hack3ric",
"html_url": "https://github.com/hack3ric",
"followers_url": "https://api.github.com/users/hac... | [] | open | false | null | [] | null | 0 | 2024-12-24T02:56:41 | 2024-12-27T06:02:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8224",
"html_url": "https://github.com/ollama/ollama/pull/8224",
"diff_url": "https://github.com/ollama/ollama/pull/8224.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8224.patch",
"merged_at": null
} | Previous versions of Ollama could build CPU runners without any optimizations, but now (0.5.x) it can only build AVX and AVX2 variants. Add back a `cpu` runner without any `TARGET_CPU_FLAGS` to enable this. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8224/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/967 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/967/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/967/comments | https://api.github.com/repos/ollama/ollama/issues/967/events | https://github.com/ollama/ollama/issues/967 | 1,973,442,047 | I_kwDOJ0Z1Ps51oFX_ | 967 | how to view the ollama server log please? (to solve the "Invalid request" error) | {
"login": "Xingeqwd",
"id": 46107662,
"node_id": "MDQ6VXNlcjQ2MTA3NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/46107662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xingeqwd",
"html_url": "https://github.com/Xingeqwd",
"followers_url": "https://api.github.com/users/Xin... | [] | closed | false | null | [] | null | 12 | 2023-11-02T03:33:45 | 2025-01-20T12:47:45 | 2023-11-08T19:55:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, masters
I have an Ollama API server and a continuedev-server on the same Linux server.
When I use the continuedev-server to send a request to the Ollama API, the Ollama API returns "Invalid request to Ollama".
I am not sure what request was sent to Ollama, causing the "Invalid request to Ollama" problem.
I also know t... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/967/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/967/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1625 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1625/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1625/comments | https://api.github.com/repos/ollama/ollama/issues/1625/events | https://github.com/ollama/ollama/issues/1625 | 2,050,233,842 | I_kwDOJ0Z1Ps56NBXy | 1,625 | Is there a way to install ollama and download models on a external SSD on mac M1 ? | {
"login": "pramitsawant",
"id": 18174055,
"node_id": "MDQ6VXNlcjE4MTc0MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/18174055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramitsawant",
"html_url": "https://github.com/pramitsawant",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 6 | 2023-12-20T10:06:19 | 2024-02-23T23:31:51 | 2024-01-03T19:48:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On MacBook Pro M1
The models are downloaded to this path: /Users/{username}/.ollama/models/manifests/registry.ollama.ai/library
I use an external SSD where I download and keep all less recently used apps and other files.
Can we have an option to set the model download path? | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1625/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1625/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3437 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3437/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3437/comments | https://api.github.com/repos/ollama/ollama/issues/3437/events | https://github.com/ollama/ollama/pull/3437 | 2,218,202,591 | PR_kwDOJ0Z1Ps5rUD1f | 3,437 | Add chromem-go to community integrations | {
"login": "philippgille",
"id": 170670,
"node_id": "MDQ6VXNlcjE3MDY3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/170670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philippgille",
"html_url": "https://github.com/philippgille",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 0 | 2024-04-01T12:54:08 | 2024-04-01T15:17:38 | 2024-04-01T15:17:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3437",
"html_url": "https://github.com/ollama/ollama/pull/3437",
"diff_url": "https://github.com/ollama/ollama/pull/3437.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3437.patch",
"merged_at": "2024-04-01T15:17:37"
} | Hello :wave: , I've been a happy Ollama user for a while (both on macOS and Linux), thank you so much for creating and maintaining this project! I recommend it when talking to other people about running LLMs locally.
I recently worked on a Go library for a simple embedded vector DB, to be able to write RAG applicati... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3437/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3437/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8646 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8646/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8646/comments | https://api.github.com/repos/ollama/ollama/issues/8646/events | https://github.com/ollama/ollama/issues/8646 | 2,817,021,493 | I_kwDOJ0Z1Ps6n6FY1 | 8,646 | Deepseek R1 distilled models dropping tokens from context (deepseek-r1:32b) | {
"login": "brianthompson-sarcat",
"id": 92367346,
"node_id": "U_kgDOBYFp8g",
"avatar_url": "https://avatars.githubusercontent.com/u/92367346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brianthompson-sarcat",
"html_url": "https://github.com/brianthompson-sarcat",
"followers_url": "http... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-29T01:13:08 | 2025-01-29T11:54:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Example: I am running cline and the system prompt includes tool commands. The responses from the model are dropping word boundary tokens for file names and for tool names.
e.g., snake_game.py becomes snake_ame.py | replace_in_file becomes replace_in_ile.
Seems to mostly happen when using under... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8646/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8091 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8091/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8091/comments | https://api.github.com/repos/ollama/ollama/issues/8091/events | https://github.com/ollama/ollama/pull/8091 | 2,739,234,731 | PR_kwDOJ0Z1Ps6FMOst | 8,091 | runner: switch logging back to stderr | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-12-13T21:26:12 | 2024-12-13T22:37:00 | 2024-12-13T22:36:51 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8091",
"html_url": "https://github.com/ollama/ollama/pull/8091",
"diff_url": "https://github.com/ollama/ollama/pull/8091.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8091.patch",
"merged_at": "2024-12-13T22:36:51"
} | This puts the low-level runner logging back on stderr for consistency with prior releases | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8091/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4085 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4085/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4085/comments | https://api.github.com/repos/ollama/ollama/issues/4085/events | https://github.com/ollama/ollama/pull/4085 | 2,273,992,653 | PR_kwDOJ0Z1Ps5uRvL- | 4,085 | Use LF for line endings | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-01T18:42:28 | 2024-05-01T19:02:46 | 2024-05-01T19:02:45 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4085",
"html_url": "https://github.com/ollama/ollama/pull/4085",
"diff_url": "https://github.com/ollama/ollama/pull/4085.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4085.patch",
"merged_at": "2024-05-01T19:02:45"
} | This fixes builds using Docker for Windows | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4085/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2476 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2476/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2476/comments | https://api.github.com/repos/ollama/ollama/issues/2476/events | https://github.com/ollama/ollama/pull/2476 | 2,132,526,299 | PR_kwDOJ0Z1Ps5mw2Is | 2,476 | Add OpenAI /v1/models API support | {
"login": "keriati",
"id": 439392,
"node_id": "MDQ6VXNlcjQzOTM5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/439392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keriati",
"html_url": "https://github.com/keriati",
"followers_url": "https://api.github.com/users/keriati/fo... | [] | closed | false | null | [] | null | 10 | 2024-02-13T15:00:57 | 2024-07-02T18:50:57 | 2024-07-02T18:50:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2476",
"html_url": "https://github.com/ollama/ollama/pull/2476",
"diff_url": "https://github.com/ollama/ollama/pull/2476.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2476.patch",
"merged_at": null
} | Add OpenAI API **v1/models** endpoint compatibility.
See spec at: https://platform.openai.com/docs/api-reference/models/list
Personally, I am not so sure about putting the ListModelsHandlerOpenAI method into the router file; however, the original Ollama ListModelsHandler function is also there.
I generally don'... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2476/reactions",
"total_count": 30,
"+1": 20,
"-1": 0,
"laugh": 0,
"hooray": 9,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2476/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7506 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7506/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7506/comments | https://api.github.com/repos/ollama/ollama/issues/7506/events | https://github.com/ollama/ollama/issues/7506 | 2,635,184,628 | I_kwDOJ0Z1Ps6dEbn0 | 7,506 | docker image not using GPU | {
"login": "781574155",
"id": 12928809,
"node_id": "MDQ6VXNlcjEyOTI4ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12928809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/781574155",
"html_url": "https://github.com/781574155",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-11-05T11:24:56 | 2024-11-05T12:44:41 | 2024-11-05T12:44:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

In the Docker image, `ollama ps` shows 100% GPU, but in fact it uses 0% GPU, and inference is really slow.
The GPU usage (nvidia-smi) is always 0%. Is it a bug? My NVIDIA Container Toolkit CLI version 1.17... | {
"login": "781574155",
"id": 12928809,
"node_id": "MDQ6VXNlcjEyOTI4ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12928809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/781574155",
"html_url": "https://github.com/781574155",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7506/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7577 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7577/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7577/comments | https://api.github.com/repos/ollama/ollama/issues/7577/events | https://github.com/ollama/ollama/issues/7577 | 2,644,585,989 | I_kwDOJ0Z1Ps6doS4F | 7,577 | Build fails and wipes parent folder if the path to the source contains a directory name with a space | {
"login": "gautamz",
"id": 18205052,
"node_id": "MDQ6VXNlcjE4MjA1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/18205052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gautamz",
"html_url": "https://github.com/gautamz",
"followers_url": "https://api.github.com/users/gautam... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7700262114,
"node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-11-08T16:51:38 | 2024-12-10T17:47:23 | 2024-12-10T17:47:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I cloned the repo and tried to build Ollama on my device running Linux with an Intel CPU, an Nvidia GPU, and CUDA installed.
The path to the source was: /home/user/workspace/sourcebuild/ollama stuff/ollama
When I ran make, the "clean payload" target from the Makefile in the "llama" sub-directory wiped ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7577/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7680 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7680/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7680/comments | https://api.github.com/repos/ollama/ollama/issues/7680/events | https://github.com/ollama/ollama/issues/7680 | 2,660,932,636 | I_kwDOJ0Z1Ps6empwc | 7,680 | diffusion model support | {
"login": "ww-nightly",
"id": 86903380,
"node_id": "MDQ6VXNlcjg2OTAzMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/86903380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ww-nightly",
"html_url": "https://github.com/ww-nightly",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-11-15T06:28:09 | 2024-11-15T08:08:42 | 2024-11-15T08:08:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Will Ollama support diffusion models like SDXL and Flux? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7680/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6313 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6313/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6313/comments | https://api.github.com/repos/ollama/ollama/issues/6313/events | https://github.com/ollama/ollama/issues/6313 | 2,459,673,669 | I_kwDOJ0Z1Ps6Sm6RF | 6,313 | openbmb / MiniCPM-Llama3-V-2_5 | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-08-11T16:51:59 | 2024-08-12T00:20:19 | 2024-08-12T00:20:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It looks like llama.cpp now [supports openbmb/MiniCPM-Llama3-V-2_5.](https://github.com/ggerganov/llama.cpp/pull/7599)
Here's the [official gguf.](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf)
### [Model Summary from their Huggingface:](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)
MiniCPM-L... | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6313/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/6313/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6805 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6805/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6805/comments | https://api.github.com/repos/ollama/ollama/issues/6805/events | https://github.com/ollama/ollama/issues/6805 | 2,526,562,467 | I_kwDOJ0Z1Ps6WmEij | 6,805 | Slow setup download from Pakistan | {
"login": "hashirnouman",
"id": 72395400,
"node_id": "MDQ6VXNlcjcyMzk1NDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/72395400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hashirnouman",
"html_url": "https://github.com/hashirnouman",
"followers_url": "https://api.github.c... | [
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"... | open | false | null | [] | null | 6 | 2024-09-14T18:51:56 | 2024-10-03T16:02:02 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The Ollama exe and curl are both downloading super slowly. My internet is fine, because I downloaded a 2 GB file in parallel and it downloaded very fast.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6805/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6303 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6303/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6303/comments | https://api.github.com/repos/ollama/ollama/issues/6303/events | https://github.com/ollama/ollama/issues/6303 | 2,459,240,748 | I_kwDOJ0Z1Ps6SlQks | 6,303 | Llama 3.1 405B fix-update | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-08-10T18:09:15 | 2024-08-12T02:22:29 | 2024-08-12T02:22:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The update reduces memory usage, maintaining the same quality as the previous version. 🎉
<img width="597" alt="Screenshot 2024-08-10 at 20 04 09" src="https://github.com/user-attachments/assets/a4c9f9c6-7598-43a7-9967-b7f32044b269">
Link to updated model:
https://huggingface.co/meta-llama/Meta-Llama-3.1-405B
... | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6303/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1558 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1558/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1558/comments | https://api.github.com/repos/ollama/ollama/issues/1558/events | https://github.com/ollama/ollama/pull/1558 | 2,044,561,712 | PR_kwDOJ0Z1Ps5iJ8qP | 1,558 | add magic header for unit tests | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-16T02:01:01 | 2023-12-21T08:03:27 | 2023-12-18T18:41:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1558",
"html_url": "https://github.com/ollama/ollama/pull/1558",
"diff_url": "https://github.com/ollama/ollama/pull/1558.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1558.patch",
"merged_at": "2023-12-18T18:41:02"
} | This change adds in the magic GGUF header for the temporary model image layer. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1558/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8648/comments | https://api.github.com/repos/ollama/ollama/issues/8648/events | https://github.com/ollama/ollama/issues/8648 | 2,817,169,473 | I_kwDOJ0Z1Ps6n6phB | 8,648 | olama installer should ask in drive user wants to install it | {
"login": "VikramNagwal",
"id": 123088024,
"node_id": "U_kgDOB1YsmA",
"avatar_url": "https://avatars.githubusercontent.com/u/123088024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VikramNagwal",
"html_url": "https://github.com/VikramNagwal",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2025-01-29T03:40:28 | 2025-01-29T17:20:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, Ollama Desktop is being installed on the C drive. However, if users prefer not to have it stored there, the system should offer an option to choose a different installation location during the setup process.
**Feature Request:**
The Ollama Desktop installation wizard should prompt users to choose their pr... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8648/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/682 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/682/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/682/comments | https://api.github.com/repos/ollama/ollama/issues/682/events | https://github.com/ollama/ollama/issues/682 | 1,922,778,216 | I_kwDOJ0Z1Ps5ym0Ro | 682 | System messages are not respected | {
"login": "OgulcanCelik",
"id": 7064011,
"node_id": "MDQ6VXNlcjcwNjQwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7064011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OgulcanCelik",
"html_url": "https://github.com/OgulcanCelik",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2023-10-02T21:43:14 | 2024-11-14T23:09:47 | 2023-10-30T22:22:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Created Mario example, used mistral. Also tried with llama2-uncensored and without temperature parameter.
```
FROM mistral
PARAMETER temperature 0.9
SYSTEM """
You are Mario from super mario bros, acting as an assistant.
"""
```
run:
```
>>> who are you
I am Mistral, a Large Language Model trained by the... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/682/timeline | null | completed | false |
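Issue 682 above reports a Modelfile `SYSTEM` prompt being ignored. A common workaround is to pass the persona explicitly as a system message on each request to Ollama's documented `/api/chat` endpoint. The sketch below only builds the request body (model name and prompts taken from the issue); it makes no claim about why the Modelfile prompt was dropped.

```python
import json

def build_chat_payload(model: str, system: str, user: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint that carries
    the persona as an explicit system message instead of relying on the
    Modelfile's SYSTEM block."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "stream": False,
    }

payload = build_chat_payload(
    "mistral",
    "You are Mario from super mario bros, acting as an assistant.",
    "who are you",
)
print(json.dumps(payload, indent=2))
```

Sending this payload with any HTTP client to `http://localhost:11434/api/chat` should yield a Mario-flavored reply if the system message is respected.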
https://api.github.com/repos/ollama/ollama/issues/6357 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6357/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6357/comments | https://api.github.com/repos/ollama/ollama/issues/6357/events | https://github.com/ollama/ollama/issues/6357 | 2,465,355,557 | I_kwDOJ0Z1Ps6S8lcl | 6,357 | Error: unknown data type: U8 | {
"login": "YaBoyBigPat",
"id": 88011040,
"node_id": "MDQ6VXNlcjg4MDExMDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/88011040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YaBoyBigPat",
"html_url": "https://github.com/YaBoyBigPat",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 28 | 2024-08-14T09:34:43 | 2024-11-28T11:59:28 | 2024-08-28T00:54:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm having trouble converting my Llama 3.1 model to Ollama; here's the error I get:
PS C:\Users\ljjx> ollama create -q Q4_K_M llama3.1q4 -f "C:\Users\ljjx\HFModels\Modelfile"
transferring model data
converting model
Error: unknown data type: U8
here's how I set up the Modelfile:
FROM C:\Users\ljjx\HFModels\Met... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6357/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2336 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2336/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2336/comments | https://api.github.com/repos/ollama/ollama/issues/2336/events | https://github.com/ollama/ollama/issues/2336 | 2,116,335,248 | I_kwDOJ0Z1Ps5-JLaQ | 2,336 | [Feature Request] Add api/stream endpoint useful for visualization | {
"login": "tgraupmann",
"id": 931421,
"node_id": "MDQ6VXNlcjkzMTQyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/931421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tgraupmann",
"html_url": "https://github.com/tgraupmann",
"followers_url": "https://api.github.com/users/t... | [] | closed | false | null | [] | null | 1 | 2024-02-03T07:46:44 | 2024-03-11T23:54:59 | 2024-03-11T23:54:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was looking at old projects, specifically Winamp visualizers from a decade past.
Example: [Winamp Visualizations](https://www.youtube.com/watch?v=RBkhUg1oVIE)
The old school visualizers work on audio data. And I was thinking what if we used sentence encoding for inputs and outputs from Ollama for the data being us... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2336/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3906 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3906/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3906/comments | https://api.github.com/repos/ollama/ollama/issues/3906/events | https://github.com/ollama/ollama/issues/3906 | 2,263,257,900 | I_kwDOJ0Z1Ps6G5pMs | 3,906 | asr models? | {
"login": "yuanjie-ai",
"id": 20265321,
"node_id": "MDQ6VXNlcjIwMjY1MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/20265321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanjie-ai",
"html_url": "https://github.com/yuanjie-ai",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-04-25T10:39:55 | 2024-04-25T10:39:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | asr models? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3906/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1172 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1172/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1172/comments | https://api.github.com/repos/ollama/ollama/issues/1172/events | https://github.com/ollama/ollama/issues/1172 | 1,999,229,511 | I_kwDOJ0Z1Ps53KdJH | 1,172 | The server stops working after long context window is passed with zephyr model deployed with docker container on A10 GPU | {
"login": "ajasingh",
"id": 15189049,
"node_id": "MDQ6VXNlcjE1MTg5MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15189049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajasingh",
"html_url": "https://github.com/ajasingh",
"followers_url": "https://api.github.com/users/aja... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2023-11-17T14:14:02 | 2024-03-11T18:19:43 | 2024-03-11T18:19:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have deployed Ollama on an Nvidia A10 using a Docker container. It works fine for smaller context windows, but the server stops working and gives the error below:
2023/11/17 14:08:33 llama.go:385: signal: killed
2023/11/17 14:08:33 llama.go:459: llama runner stopped successfully
Is there any guide on why this might be ha... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1172/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/597 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/597/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/597/comments | https://api.github.com/repos/ollama/ollama/issues/597/events | https://github.com/ollama/ollama/pull/597 | 1,912,416,561 | PR_kwDOJ0Z1Ps5bLGbV | 597 | build slim, GPU-less docker image | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2023-09-25T23:15:46 | 2024-05-09T22:18:00 | 2024-04-14T22:46:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/597",
"html_url": "https://github.com/ollama/ollama/pull/597",
"diff_url": "https://github.com/ollama/ollama/pull/597.diff",
"patch_url": "https://github.com/ollama/ollama/pull/597.patch",
"merged_at": null
} | Build a CPU-only Docker image, which is significantly smaller than the GPU image.
```
ollama cuda dfdbcb88bc3d 4 minutes ago 754MB
ollama slim fb2e67c26718 7 minutes ago 148MB
```
Related #516 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/597/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/597/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8204 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8204/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8204/comments | https://api.github.com/repos/ollama/ollama/issues/8204/events | https://github.com/ollama/ollama/pull/8204 | 2,754,294,023 | PR_kwDOJ0Z1Ps6F_0NW | 8,204 | add aria2c support for faster downloads | {
"login": "A-Akhil",
"id": 50855133,
"node_id": "MDQ6VXNlcjUwODU1MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/50855133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/A-Akhil",
"html_url": "https://github.com/A-Akhil",
"followers_url": "https://api.github.com/users/A-Akhi... | [] | closed | false | null | [] | null | 1 | 2024-12-21T17:58:06 | 2024-12-23T15:04:30 | 2024-12-23T15:04:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8204",
"html_url": "https://github.com/ollama/ollama/pull/8204",
"diff_url": "https://github.com/ollama/ollama/pull/8204.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8204.patch",
"merged_at": null
} | This PR adds optimized aria2c support to the installation script, improving download speeds and reliability while maintaining curl as fallback.
## Changes
- Create unified `download_and_extract` function to handle both aria2c and curl
- Maintain backward compatibility with existing curl implementation
## Veri... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8204/timeline | null | null | true |
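PR 8204 above adds aria2c support to the install script with curl kept as the fallback. As a rough Python illustration of that selection logic (the function names and flag choices here are my own, not the PR's actual `download_and_extract` shell code):

```python
import shutil

def build_download_cmd(url: str, dest: str, have_aria2c: bool) -> list:
    """Return the command used to fetch url to dest: aria2c with parallel
    connections when available, plain curl otherwise."""
    if have_aria2c:
        # -x / -s split the download across multiple connections
        return ["aria2c", "-x", "16", "-s", "16", "-o", dest, url]
    return ["curl", "-fsSL", "-o", dest, url]

def download_cmd(url: str, dest: str) -> list:
    """Detect aria2c on PATH and pick the appropriate command."""
    return build_download_cmd(url, dest, shutil.which("aria2c") is not None)

print(build_download_cmd("https://example.com/ollama.tgz", "ollama.tgz", False))
```

The point of the unified helper is that callers never need to know which downloader was used, preserving backward compatibility with curl-only systems.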
https://api.github.com/repos/ollama/ollama/issues/725 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/725/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/725/comments | https://api.github.com/repos/ollama/ollama/issues/725/events | https://github.com/ollama/ollama/issues/725 | 1,930,971,909 | I_kwDOJ0Z1Ps5zGEsF | 725 | GPU allocation on Linux | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-10-06T21:49:06 | 2023-10-10T20:16:10 | 2023-10-10T20:16:10 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, too much GPU is allocated on Linux when loading a model | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/725/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/725/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2327 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2327/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2327/comments | https://api.github.com/repos/ollama/ollama/issues/2327/events | https://github.com/ollama/ollama/issues/2327 | 2,114,912,540 | I_kwDOJ0Z1Ps5-DwEc | 2,327 | AMD EOF Model Error | {
"login": "jusgrave",
"id": 528103,
"node_id": "MDQ6VXNlcjUyODEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/528103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jusgrave",
"html_url": "https://github.com/jusgrave",
"followers_url": "https://api.github.com/users/jusgrav... | [] | closed | false | null | [] | null | 13 | 2024-02-02T13:16:55 | 2024-02-02T19:23:12 | 2024-02-02T19:13:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have been having issues trying to get Ollama running; when I run a model I get an EOF error and an invalid pointer:
```
Feb 02 07:53:39 justin-ArchEndos ollama[48332]: 2024/02/02 07:53:39 payload_common.go:106: INFO Extracting d>
Feb 02 07:53:41 justin-ArchEndos ollama[48332]: 2024/02/02 07:53:41 payload_common.go:145: IN... | {
"login": "jusgrave",
"id": 528103,
"node_id": "MDQ6VXNlcjUyODEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/528103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jusgrave",
"html_url": "https://github.com/jusgrave",
"followers_url": "https://api.github.com/users/jusgrav... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2327/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8163 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8163/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8163/comments | https://api.github.com/repos/ollama/ollama/issues/8163/events | https://github.com/ollama/ollama/pull/8163 | 2,748,813,877 | PR_kwDOJ0Z1Ps6Fs9s_ | 8,163 | Reuse type InvalidModelNameErrMsg, unify the const parameters. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2024-12-18T21:52:07 | 2024-12-23T21:55:18 | 2024-12-23T15:38:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8163",
"html_url": "https://github.com/ollama/ollama/pull/8163",
"diff_url": "https://github.com/ollama/ollama/pull/8163.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8163.patch",
"merged_at": "2024-12-23T15:38:34"
} | Reuse type InvalidModelNameErrMsg, unify the const parameters. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8163/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3026 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3026/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3026/comments | https://api.github.com/repos/ollama/ollama/issues/3026/events | https://github.com/ollama/ollama/pull/3026 | 2,177,369,551 | PR_kwDOJ0Z1Ps5pJvcI | 3,026 | Doc how to set up ROCm builds on windows | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-09T19:30:02 | 2024-03-09T22:17:23 | 2024-03-09T22:17:20 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3026",
"html_url": "https://github.com/ollama/ollama/pull/3026",
"diff_url": "https://github.com/ollama/ollama/pull/3026.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3026.patch",
"merged_at": "2024-03-09T22:17:20"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3026/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7220 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7220/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7220/comments | https://api.github.com/repos/ollama/ollama/issues/7220/events | https://github.com/ollama/ollama/issues/7220 | 2,590,509,344 | I_kwDOJ0Z1Ps6aaAkg | 7,220 | Reference failure for user uploaded models in OpenAI compatible API | {
"login": "aretrace",
"id": 83451189,
"node_id": "MDQ6VXNlcjgzNDUxMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/83451189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aretrace",
"html_url": "https://github.com/aretrace",
"followers_url": "https://api.github.com/users/are... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-10-16T03:43:35 | 2024-10-16T23:34:52 | 2024-10-16T23:34:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am attempting to call `adrienbrault/nous-hermes2theta-llama3-8b` via the OpenAI Node SDK, but I get a 500 with a `model ... not found, try pulling it first` error (I already have the model and have chatted with it).
I did a simple curl test:
```zsh
curl -X POST http://localhost:11434/api/generate -d '{
... | {
"login": "aretrace",
"id": 83451189,
"node_id": "MDQ6VXNlcjgzNDUxMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/83451189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aretrace",
"html_url": "https://github.com/aretrace",
"followers_url": "https://api.github.com/users/are... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7220/timeline | null | completed | false |
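For issue 7220 above, one way to narrow down the mismatch is to send the same fully qualified model name to both the native endpoint (which the reporter's curl test exercised) and the OpenAI-compatible route. The sketch below only assembles the two request bodies; the URLs follow Ollama's documented API and its OpenAI compatibility layer, and no network call is made.

```python
import json

MODEL = "adrienbrault/nous-hermes2theta-llama3-8b"

# Native Ollama endpoint, mirroring the curl test from the issue.
native = {
    "url": "http://localhost:11434/api/generate",
    "body": {"model": MODEL, "prompt": "Hello", "stream": False},
}

# OpenAI-compatible endpoint: the namespaced name goes into `model` unchanged.
openai_compat = {
    "url": "http://localhost:11434/v1/chat/completions",
    "body": {
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hello"}],
    },
}

print(json.dumps(openai_compat["body"], indent=2))
```

If the native call succeeds while the OpenAI-compatible one returns "model not found", the discrepancy is in how the compatibility layer resolves the namespaced name rather than in the local model store.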
https://api.github.com/repos/ollama/ollama/issues/4798 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4798/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4798/comments | https://api.github.com/repos/ollama/ollama/issues/4798/events | https://github.com/ollama/ollama/issues/4798 | 2,331,505,749 | I_kwDOJ0Z1Ps6K9_RV | 4,798 | The rocm driver rx7900xtx has been installed but cannot be used normally. | {
"login": "HaoZhang66",
"id": 166794154,
"node_id": "U_kgDOCfETqg",
"avatar_url": "https://avatars.githubusercontent.com/u/166794154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaoZhang66",
"html_url": "https://github.com/HaoZhang66",
"followers_url": "https://api.github.com/users/Hao... | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1... | closed | false | null | [] | null | 5 | 2024-06-03T15:50:54 | 2024-09-24T18:29:47 | 2024-06-18T21:51:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Error: llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4798/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1422 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1422/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1422/comments | https://api.github.com/repos/ollama/ollama/issues/1422/events | https://github.com/ollama/ollama/issues/1422 | 2,031,614,653 | I_kwDOJ0Z1Ps55F_q9 | 1,422 | original Unraid container gone? | {
"login": "ktfcaptain",
"id": 66204113,
"node_id": "MDQ6VXNlcjY2MjA0MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/66204113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktfcaptain",
"html_url": "https://github.com/ktfcaptain",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2023-12-07T22:14:21 | 2023-12-07T23:16:05 | 2023-12-07T23:16:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed a version (for lack of a better word) of Ollama on an Unraid server a couple of weeks ago, and now it's gone from the community store. There was a replacement titled "Chatbot-Ollama", but I have not been able to get that version running correctly on Unraid.
So on one of my Unraid servers I have this (I think,...
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1422/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/8202 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8202/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8202/comments | https://api.github.com/repos/ollama/ollama/issues/8202/events | https://github.com/ollama/ollama/issues/8202 | 2,754,266,775 | I_kwDOJ0Z1Ps6kKsaX | 8,202 | Request to support whisper and fast-whisper | {
"login": "jwnder",
"id": 24688121,
"node_id": "MDQ6VXNlcjI0Njg4MTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/24688121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwnder",
"html_url": "https://github.com/jwnder",
"followers_url": "https://api.github.com/users/jwnder/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-21T17:20:20 | 2024-12-23T01:11:25 | 2024-12-23T01:11:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Request to support whisper models, especially fast-whisper, like Groq:
API request example used by Groq, where `audio_buffer` is the audio buffer:
url = "https://api.groq.com/openai/v1/audio/transcriptions"
files = {
"file": ("recorded_audio.wav", audio_buffer, "audio/wav")
}
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8202/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5993 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5993/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5993/comments | https://api.github.com/repos/ollama/ollama/issues/5993/events | https://github.com/ollama/ollama/issues/5993 | 2,432,950,689 | I_kwDOJ0Z1Ps6RA-Gh | 5,993 | Tool calls streaming | {
"login": "jacob-ebey",
"id": 12063586,
"node_id": "MDQ6VXNlcjEyMDYzNTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/12063586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacob-ebey",
"html_url": "https://github.com/jacob-ebey",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-07-26T20:41:57 | 2024-09-04T04:23:03 | 2024-09-04T04:23:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Looking through PR's and issues around tool calls + streaming, it seems we are overlooking that when a tool call is encountered, the body of the tool-call should be discriminated from the "chat response".
We need to be able to implement the following high level logic:
```js
for await (con... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5993/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5993/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6672 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6672/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6672/comments | https://api.github.com/repos/ollama/ollama/issues/6672/events | https://github.com/ollama/ollama/issues/6672 | 2,509,896,125 | I_kwDOJ0Z1Ps6Vmfm9 | 6,672 | Inconsistent `prompt_eval_count` for Large Prompts in Ollama Python Library | {
"login": "surajyadav91",
"id": 86972819,
"node_id": "MDQ6VXNlcjg2OTcyODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/86972819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surajyadav91",
"html_url": "https://github.com/surajyadav91",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-06T08:57:48 | 2024-09-06T09:25:36 | 2024-09-06T09:25:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Inconsistent `prompt_eval_count` for Large Prompts in Ollama Python Library
For larger prompts, when using the Ollama Python library with the `llama3.1:8b-instruct-fp16` model, the `prompt_eval_count` remains constant at fixed value (1026) tokens, even when the input prompt size varies ... | {
"login": "surajyadav91",
"id": 86972819,
"node_id": "MDQ6VXNlcjg2OTcyODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/86972819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surajyadav91",
"html_url": "https://github.com/surajyadav91",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6672/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3064 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3064/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3064/comments | https://api.github.com/repos/ollama/ollama/issues/3064/events | https://github.com/ollama/ollama/pull/3064 | 2,180,034,277 | PR_kwDOJ0Z1Ps5pSufm | 3,064 | update llama.cpp submodule to `ceca1ae` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-11T19:24:57 | 2024-03-11T19:57:49 | 2024-03-11T19:57:48 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3064",
"html_url": "https://github.com/ollama/ollama/pull/3064",
"diff_url": "https://github.com/ollama/ollama/pull/3064.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3064.patch",
"merged_at": "2024-03-11T19:57:48"
} | Fixes https://github.com/ollama/ollama/issues/3058 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3064/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7980 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7980/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7980/comments | https://api.github.com/repos/ollama/ollama/issues/7980/events | https://github.com/ollama/ollama/pull/7980 | 2,724,120,612 | PR_kwDOJ0Z1Ps6EYnJJ | 7,980 | prompt: Don't trim whitespace from prompts | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-12-06T23:24:01 | 2024-12-09T19:02:57 | 2024-12-09T19:02:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7980",
"html_url": "https://github.com/ollama/ollama/pull/7980",
"diff_url": "https://github.com/ollama/ollama/pull/7980.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7980.patch",
"merged_at": "2024-12-09T19:02:55"
} | New lines can be an important part of a user's prompt and trimming it can alter the results. We previously only trimmed prompts with images but refactoring brought this behavior to all prompts, where it became more noticeable.
The /generate endpoint adds less whitespace and therefore doesn't need to trim it out - th... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7980/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7980/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7758 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7758/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7758/comments | https://api.github.com/repos/ollama/ollama/issues/7758/events | https://github.com/ollama/ollama/issues/7758 | 2,674,714,085 | I_kwDOJ0Z1Ps6fbOXl | 7,758 | OLLAMA_MAX_QUEUE does not limit requests to the same model | {
"login": "yyx1111",
"id": 20674290,
"node_id": "MDQ6VXNlcjIwNjc0Mjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20674290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyx1111",
"html_url": "https://github.com/yyx1111",
"followers_url": "https://api.github.com/users/yyx111... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-11-20T07:13:20 | 2024-11-21T17:17:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It seems that OLLAMA_MAX_QUEUE is not taking effect. My environment is Windows 11, and I have set OLLAMA_NUM_PARALLEL=1,
set OLLAMA_MAX_QUEUE=1, but excessive requests are still queuing up instead of returning an error.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7758/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5738 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5738/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5738/comments | https://api.github.com/repos/ollama/ollama/issues/5738/events | https://github.com/ollama/ollama/issues/5738 | 2,412,425,509 | I_kwDOJ0Z1Ps6PyrEl | 5,738 | How can I make the model produce consistent and stable results for the same prompt? | {
"login": "aitd0506",
"id": 166348996,
"node_id": "U_kgDOCepIxA",
"avatar_url": "https://avatars.githubusercontent.com/u/166348996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aitd0506",
"html_url": "https://github.com/aitd0506",
"followers_url": "https://api.github.com/users/aitd0506/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-07-17T01:49:35 | 2024-07-24T08:49:17 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As the title suggests, my goal is to ensure that the model produces consistent and stable output for the same prompt.
I call the native gemma2:9b docker using the ollama-python lib API. I have explicitly modified the temperature to 0 and set a fixed seed in the Options, but I found that it did ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5738/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5738/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6573 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6573/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6573/comments | https://api.github.com/repos/ollama/ollama/issues/6573/events | https://github.com/ollama/ollama/issues/6573 | 2,498,182,463 | I_kwDOJ0Z1Ps6U5z0_ | 6,573 | Getting Error: llama runner process has terminated: exit status 127 | {
"login": "Blasserman",
"id": 45080630,
"node_id": "MDQ6VXNlcjQ1MDgwNjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/45080630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blasserman",
"html_url": "https://github.com/Blasserman",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-30T21:15:43 | 2024-08-30T21:51:04 | 2024-08-30T21:51:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On two different aarch64 SBC's running Debian. One running Bookworm, one running bullseye. Both get the error when loading the model llama3.1 into memory with Ollama 0.3.8. The previous version ollama worked fine.
dave@ai:~$ ollama -v
ollama version is 0.3.8
dave@ai:~$ ollama run llama3.1
... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6573/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4801 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4801/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4801/comments | https://api.github.com/repos/ollama/ollama/issues/4801/events | https://github.com/ollama/ollama/issues/4801 | 2,332,190,974 | I_kwDOJ0Z1Ps6LAmj- | 4,801 | Surface the embedding dimension of a model in docs/api | {
"login": "Kuinox",
"id": 18743295,
"node_id": "MDQ6VXNlcjE4NzQzMjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/18743295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kuinox",
"html_url": "https://github.com/Kuinox",
"followers_url": "https://api.github.com/users/Kuinox/fo... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
... | open | false | null | [] | null | 0 | 2024-06-03T22:44:53 | 2024-11-06T17:26:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, the embedding dimension isn't listed on the ollama docs or on the model card from ollama. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4801/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3737 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3737/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3737/comments | https://api.github.com/repos/ollama/ollama/issues/3737/events | https://github.com/ollama/ollama/pull/3737 | 2,251,712,102 | PR_kwDOJ0Z1Ps5tGu_m | 3,737 | update create handler to use model.Name | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2024-04-18T22:48:01 | 2024-06-05T19:05:06 | 2024-06-05T19:05:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3737",
"html_url": "https://github.com/ollama/ollama/pull/3737",
"diff_url": "https://github.com/ollama/ollama/pull/3737.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3737.patch",
"merged_at": "2024-06-05T19:05:05"
} | follow up to #3718 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3737/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2962 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2962/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2962/comments | https://api.github.com/repos/ollama/ollama/issues/2962/events | https://github.com/ollama/ollama/pull/2962 | 2,172,503,301 | PR_kwDOJ0Z1Ps5o5LJp | 2,962 | cmd: add usage for `ollama run` environment variables | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 2 | 2024-03-06T21:56:32 | 2024-03-07T21:57:08 | 2024-03-07T21:57:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2962",
"html_url": "https://github.com/ollama/ollama/pull/2962",
"diff_url": "https://github.com/ollama/ollama/pull/2962.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2962.patch",
"merged_at": "2024-03-07T21:57:08"
} | Also, fix the indentation of usage for `ollama serve` environment variables.
Fixes #2944 | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2962/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4952/comments | https://api.github.com/repos/ollama/ollama/issues/4952/events | https://github.com/ollama/ollama/issues/4952 | 2,342,272,579 | I_kwDOJ0Z1Ps6LnD5D | 4,952 | Modify .dockerignore to optimize Docker image build | {
"login": "Gabrielfernandes7",
"id": 78227127,
"node_id": "MDQ6VXNlcjc4MjI3MTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/78227127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gabrielfernandes7",
"html_url": "https://github.com/Gabrielfernandes7",
"followers_url": "https... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-06-09T13:57:22 | 2024-06-09T17:05:12 | 2024-06-09T17:05:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | To optimize the Docker image build and ensure unnecessary files are not included, it is important to add a `.dockerignore` file to the repository. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4952/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6989 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6989/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6989/comments | https://api.github.com/repos/ollama/ollama/issues/6989/events | https://github.com/ollama/ollama/pull/6989 | 2,551,529,764 | PR_kwDOJ0Z1Ps582Pk6 | 6,989 | llama: Exercise the new build in CI | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-09-26T21:05:00 | 2024-09-27T21:49:59 | 2024-09-27T21:49:56 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6989",
"html_url": "https://github.com/ollama/ollama/pull/6989",
"diff_url": "https://github.com/ollama/ollama/pull/6989.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6989.patch",
"merged_at": "2024-09-27T21:49:55"
} | Conditionally build the new runners if any code in that tree is touched. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6989/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3672 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3672/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3672/comments | https://api.github.com/repos/ollama/ollama/issues/3672/events | https://github.com/ollama/ollama/pull/3672 | 2,245,930,967 | PR_kwDOJ0Z1Ps5szCJx | 3,672 | Update README.md | {
"login": "ovovovovisov",
"id": 110344798,
"node_id": "U_kgDOBpO6Xg",
"avatar_url": "https://avatars.githubusercontent.com/u/110344798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ovovovovisov",
"html_url": "https://github.com/ovovovovisov",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2024-04-16T12:33:39 | 2024-04-16T21:00:48 | 2024-04-16T21:00:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3672",
"html_url": "https://github.com/ollama/ollama/pull/3672",
"diff_url": "https://github.com/ollama/ollama/pull/3672.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3672.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3672/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8448 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8448/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8448/comments | https://api.github.com/repos/ollama/ollama/issues/8448/events | https://github.com/ollama/ollama/issues/8448 | 2,791,776,340 | I_kwDOJ0Z1Ps6mZyBU | 8,448 | ollama create 0.5.5+ fails for models with multiple blobs | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 1 | 2025-01-16T06:00:24 | 2025-01-16T08:14:08 | 2025-01-16T08:14:08 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```console
$ ollama list llama3.2-vision
NAME ID SIZE MODIFIED
llama3.2-vision:latest 085a1fdae525 7.9 GB 2 weeks ago
$ ollama show --modelfile llama3.2-vision > Modelfile
$ head Modelfile
# Modelfile generated by "ollama show"
# To build a... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8448/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3994 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3994/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3994/comments | https://api.github.com/repos/ollama/ollama/issues/3994/events | https://github.com/ollama/ollama/issues/3994 | 2,267,374,664 | I_kwDOJ0Z1Ps6HJWRI | 3,994 | pulling manifest Error | {
"login": "xinewzhou",
"id": 168318989,
"node_id": "U_kgDOCghYDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168318989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinewzhou",
"html_url": "https://github.com/xinewzhou",
"followers_url": "https://api.github.com/users/xinewz... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-28T06:23:30 | 2024-05-02T18:09:35 | 2024-05-02T18:09:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=KuhzAPbr727aW4zzdmUWPA&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1714285350": read tcp 192.168.2.33:60544->34.120.132.20:443: read: connection reset by peer
#... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3994/timeline | null | completed | false |
Issue #1884: Running on Windows Docker vs WSL versions
url: https://github.com/ollama/ollama/issues/1884
id: 2073583263 | node_id: I_kwDOJ0Z1Ps57mF6f
author: ewebgh33 (author_association: NONE)
labels: (none)
state: closed (state_reason: completed) | is_pull_request: false
assignee: dhiltgen | comments: 10
created: 2024-01-10T04:53:07 | updated: 2024-03-13T15:10:43 | closed: 2024-03-12T18:01:45
closed_by: dhiltgen
reactions: 0
body:
> Hi
> Not really an issue, unless you include "more explanation" as an issue. But you guys don't have a discussion section here on Github.
> **Is there any benefit (or con) to running in WSL vs Docker for Windows? (as still no sign of a Win version coming).**
> I am leaning towards WSL simply because I've had issues in…
Issue #6603: Add Phi-3.5-mini-instruct
url: https://github.com/ollama/ollama/issues/6603
id: 2502156165 | node_id: I_kwDOJ0Z1Ps6VI9-F
author: oscarwang22 (author_association: NONE)
labels: model request
state: closed (state_reason: completed) | is_pull_request: false
assignee: (none) | comments: 3
created: 2024-09-03T07:44:00 | updated: 2024-09-05T17:04:17 | closed: 2024-09-05T17:04:17
closed_by: jmorganca
reactions: 0
body:
> Add Phi-3.5-mini-instruct.
Issue #8678: Missing support for name field
url: https://github.com/ollama/ollama/issues/8678
id: 2819618742 | node_id: I_kwDOJ0Z1Ps6oD_e2
author: gagb (author_association: NONE)
labels: compatibility, … (second label name truncated in the source)
state: open | is_pull_request: false
assignee: ParthSareen | comments: 3
created: 2025-01-30T00:20:26 | updated: 2025-01-30T08:40:28 | closed: (still open)
reactions: 0
body:
> ### What is the issue?
> For many models, phi-4, deepseek-r1, Ollama support OpenAI chat completion format, but it seems like it does not support the name field in the message history. It only supports the role and content field. Is there a plan to fix this?
> ### OS
> _No response_
> ### GPU
> _No response_
> ### CPU
> _No r…
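Issue #8678 is about the optional `name` field that the OpenAI chat-completion message format allows alongside `role` and `content`. As an illustrative sketch (the message values are made up, not from the issue), dropping `name` loses the ability to tell two same-role speakers apart:

```python
# Sketch of OpenAI-style chat messages with the optional "name" field the
# issue is about; values here are illustrative, not taken from the report.

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "name": "alice", "content": "What is the capital of France?"},
    {"role": "user", "name": "bob", "content": "And of Germany?"},
]

def strip_unsupported_fields(messages: list) -> list:
    """Keep only the fields the report says are honored (role, content)."""
    return [{k: v for k, v in m.items() if k in ("role", "content")} for m in messages]

# After stripping, the two user turns can no longer be attributed to a
# speaker, which is exactly the information loss the issue points out.
stripped = strip_unsupported_fields(messages)
assert all("name" not in m for m in stripped)
```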
Issue #7903: Ollama missing support for tool calling stream
url: https://github.com/ollama/ollama/issues/7903
id: 2710268254 | node_id: I_kwDOJ0Z1Ps6hi2le
author: Teeeeg (author_association: NONE)
labels: bug
state: closed (state_reason: completed) | is_pull_request: false
assignee: (none) | comments: 1
created: 2024-12-02T02:54:10 | updated: 2024-12-02T02:57:43 | closed: 2024-12-02T02:57:43
closed_by: ParthSareen
reactions: 0
body:
> ### What is the issue?
> `async for event in agent.astream_events(**kwargs, version="v2")`
> If there is a ToolNode, there is no onToolStart kind of event. Causing no further updates on the response.
> I am not sure it is a bug with LangGraph.
> ### OS
> macOS
> ### GPU
> Apple
> ### CPU
> Apple
> ### Ollama version
> 0.4.6
Issue #6040: [Model request] Llama3.1 `text` model (not `instruct`)
url: https://github.com/ollama/ollama/issues/6040
id: 2434768786 | node_id: I_kwDOJ0Z1Ps6RH5-S
author: d-kleine (author_association: NONE)
labels: model request
state: closed (state_reason: completed) | is_pull_request: false
assignee: (none) | comments: 3
created: 2024-07-29T08:20:19 | updated: 2024-08-03T22:24:23 | closed: 2024-07-30T01:23:07
closed_by: d-kleine
reactions: 0
body:
> As per comment: https://github.com/ollama/ollama/issues/5919#issuecomment-2248604007
> Could you please add the Meta-Llama-3.1 models to Ollama models please?
> https://ollama.com/library/llama3.1
> Currently, there are only the instruction-finetuned models (`...-instruct-...`), but not the vanilla models (`...-text-…
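Rows like the ones above can be filtered by label name once loaded. A minimal sketch, using a hypothetical subset of the fields (only `number`, `title`, and `labels` are modeled), collecting the "model request" issues:

```python
# Sketch: filtering issue rows by label name. The rows below model only a
# hypothetical subset of the dataset's columns (number, title, labels).

rows = [
    {"number": 6603, "title": "Add Phi-3.5-mini-instruct",
     "labels": [{"name": "model request"}]},
    {"number": 7903, "title": "Ollama missing support for tool calling stream",
     "labels": [{"name": "bug"}]},
    {"number": 6040, "title": "[Model request] Llama3.1 `text` model (not `instruct`)",
     "labels": [{"name": "model request"}]},
]

def with_label(rows: list, label_name: str) -> list:
    """Return the rows whose labels include the given label name."""
    return [r for r in rows if any(l["name"] == label_name for l in r["labels"])]

model_requests = with_label(rows, "model request")
assert [r["number"] for r in model_requests] == [6603, 6040]
```

The same predicate works unchanged on full API rows, since GitHub's issue objects carry labels as a list of objects with a `name` key, as the records above show.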