url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/8238 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8238/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8238/comments | https://api.github.com/repos/ollama/ollama/issues/8238/events | https://github.com/ollama/ollama/issues/8238 | 2,758,668,018 | I_kwDOJ0Z1Ps6kbe7y | 8,238 | vocabulary is larger than expected | {
"login": "lx687",
"id": 192780267,
"node_id": "U_kgDOC32X6w",
"avatar_url": "https://avatars.githubusercontent.com/u/192780267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lx687",
"html_url": "https://github.com/lx687",
"followers_url": "https://api.github.com/users/lx687/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-12-25T09:26:30 | 2024-12-27T02:05:34 | 2024-12-27T01:45:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running `ollama create model -f ./Modelfile` reports the error: `Error: vocabulary is larger than expected '128257' instead of '128256'`
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.5.4 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8238/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2729 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2729/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2729/comments | https://api.github.com/repos/ollama/ollama/issues/2729/events | https://github.com/ollama/ollama/issues/2729 | 2,152,283,323 | I_kwDOJ0Z1Ps6ASTy7 | 2,729 | Poor performance: running large models through Ollama on a local laptop | {
"login": "GeYingzhen01",
"id": 155865563,
"node_id": "U_kgDOCUpR2w",
"avatar_url": "https://avatars.githubusercontent.com/u/155865563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeYingzhen01",
"html_url": "https://github.com/GeYingzhen01",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-02-24T12:14:58 | 2024-03-05T09:00:59 | 2024-02-25T06:06:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |

Running large models through Ollama on a local laptop results in significant lag, and the computer's performance is not fully utilized.
 to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple designs:
Strong Vision Encoder: we explored... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4257/reactions",
"total_count": 26,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4257/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6469 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6469/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6469/comments | https://api.github.com/repos/ollama/ollama/issues/6469/events | https://github.com/ollama/ollama/pull/6469 | 2,482,118,192 | PR_kwDOJ0Z1Ps55Ma7I | 6,469 | Link Time Optimization - cabelo@opensuse.org | {
"login": "cabelo",
"id": 675645,
"node_id": "MDQ6VXNlcjY3NTY0NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/675645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cabelo",
"html_url": "https://github.com/cabelo",
"followers_url": "https://api.github.com/users/cabelo/follow... | [] | closed | false | null | [] | null | 2 | 2024-08-23T02:30:02 | 2024-09-02T19:39:16 | 2024-09-02T19:39:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6469",
"html_url": "https://github.com/ollama/ollama/pull/6469",
"diff_url": "https://github.com/ollama/ollama/pull/6469.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6469.patch",
"merged_at": null
} | Tested on Debian, Ubuntu, Fedora, Red Hat, openSUSE, and SUSE: Link Time Optimization enables interprocedural optimization performed at the time of linking the application code. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6469/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6469/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5035 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5035/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5035/comments | https://api.github.com/repos/ollama/ollama/issues/5035/events | https://github.com/ollama/ollama/issues/5035 | 2,352,132,207 | I_kwDOJ0Z1Ps6MMrBv | 5,035 | Ollama not use GPU | {
"login": "Mina4ever",
"id": 92083902,
"node_id": "U_kgDOBX0Wvg",
"avatar_url": "https://avatars.githubusercontent.com/u/92083902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mina4ever",
"html_url": "https://github.com/Mina4ever",
"followers_url": "https://api.github.com/users/Mina4eve... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2024-06-13T21:58:12 | 2024-07-09T15:19:45 | 2024-06-14T17:00:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am using Ollama; it uses the CPU only and not the GPU, although I installed CUDA v12.5 and cuDNN v9.2.0, and I can confirm that Python uses the GPU in libraries like PyTorch (the result of the command `>>> print(torch.backends.cudnn.is_available())` is
**True**). I have an Nvidia 1050 Ti and I am trying to runn... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5035/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5035/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3708 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3708/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3708/comments | https://api.github.com/repos/ollama/ollama/issues/3708/events | https://github.com/ollama/ollama/pull/3708 | 2,248,998,182 | PR_kwDOJ0Z1Ps5s9gKv | 3,708 | move Ollama static build to its own flag | {
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/foll... | [] | closed | false | null | [] | null | 2 | 2024-04-17T18:45:17 | 2024-04-18T23:04:12 | 2024-04-18T23:04:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3708",
"html_url": "https://github.com/ollama/ollama/pull/3708",
"diff_url": "https://github.com/ollama/ollama/pull/3708.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3708.patch",
"merged_at": "2024-04-18T23:04:12"
} | `static` builds by default, allows skipping, forces build if OLLAMA_CPU_TARGET="static" | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3708/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/894 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/894/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/894/comments | https://api.github.com/repos/ollama/ollama/issues/894/events | https://github.com/ollama/ollama/pull/894 | 1,959,795,310 | PR_kwDOJ0Z1Ps5dq1Pj | 894 | Linux uninstall instructions | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-24T18:04:51 | 2023-10-24T18:07:06 | 2023-10-24T18:07:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/894",
"html_url": "https://github.com/ollama/ollama/pull/894",
"diff_url": "https://github.com/ollama/ollama/pull/894.diff",
"patch_url": "https://github.com/ollama/ollama/pull/894.patch",
"merged_at": "2023-10-24T18:07:05"
} | Document how to clean up the standard Linux installation. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/894/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/934 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/934/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/934/comments | https://api.github.com/repos/ollama/ollama/issues/934/events | https://github.com/ollama/ollama/pull/934 | 1,965,851,513 | PR_kwDOJ0Z1Ps5d_ZJe | 934 | catch insufficient permissions nvidia err | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-27T16:36:50 | 2023-10-27T16:42:41 | 2023-10-27T16:42:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/934",
"html_url": "https://github.com/ollama/ollama/pull/934",
"diff_url": "https://github.com/ollama/ollama/pull/934.diff",
"patch_url": "https://github.com/ollama/ollama/pull/934.patch",
"merged_at": "2023-10-27T16:42:40"
} | If there is an insufficient permissions error on `nvidia-smi` execution, it would be logged as a parsing error. Catch the error before this happens.
#932 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/934/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/972 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/972/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/972/comments | https://api.github.com/repos/ollama/ollama/issues/972/events | https://github.com/ollama/ollama/pull/972 | 1,974,503,930 | PR_kwDOJ0Z1Ps5ecmAG | 972 | reformat api docs for more examples | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-11-02T15:27:40 | 2023-11-03T14:57:01 | 2023-11-03T14:57:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/972",
"html_url": "https://github.com/ollama/ollama/pull/972",
"diff_url": "https://github.com/ollama/ollama/pull/972.diff",
"patch_url": "https://github.com/ollama/ollama/pull/972.patch",
"merged_at": "2023-11-03T14:57:00"
} | I'd like to add an example for raw requests in #952 to the docs, but that requires formatting them in a way that is friendlier to multiple requests/responses. This change moves request/response under an "examples" header. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/972/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5245 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5245/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5245/comments | https://api.github.com/repos/ollama/ollama/issues/5245/events | https://github.com/ollama/ollama/issues/5245 | 2,368,902,095 | I_kwDOJ0Z1Ps6NMpPP | 5,245 | Allow importing multi-file GGUF models | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 27 | 2024-06-23T21:45:41 | 2025-01-30T04:57:12 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Currently, Ollama can [import GGUF files](https://github.com/ollama/ollama/blob/main/docs/import.md). However, larger models are sometimes split into separate files. Ollama should support loading multiple GGUF files, similar to how it loads safetensors files.
### OS
_No response_
### GPU
_No respo... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5245/reactions",
"total_count": 19,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
} | https://api.github.com/repos/ollama/ollama/issues/5245/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4757 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4757/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4757/comments | https://api.github.com/repos/ollama/ollama/issues/4757/events | https://github.com/ollama/ollama/issues/4757 | 2,328,634,129 | I_kwDOJ0Z1Ps6KzCMR | 4,757 | please add support for AMD RX 580 | {
"login": "eliabexp",
"id": 74092305,
"node_id": "MDQ6VXNlcjc0MDkyMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/74092305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliabexp",
"html_url": "https://github.com/eliabexp",
"followers_url": "https://api.github.com/users/eli... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-31T21:51:22 | 2024-05-31T22:08:24 | 2024-05-31T22:08:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | please continue upgrading AMD gpu support adding RX 580 to the supported GPUs, I think this will expand the ollama accessibility | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4757/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4757/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3843 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3843/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3843/comments | https://api.github.com/repos/ollama/ollama/issues/3843/events | https://github.com/ollama/ollama/pull/3843 | 2,258,824,089 | PR_kwDOJ0Z1Ps5teTZk | 3,843 | Correct the kubernetes terminology | {
"login": "cloudmelon",
"id": 4621560,
"node_id": "MDQ6VXNlcjQ2MjE1NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4621560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudmelon",
"html_url": "https://github.com/cloudmelon",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-04-23T13:01:41 | 2024-05-07T16:53:09 | 2024-05-07T16:53:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3843",
"html_url": "https://github.com/ollama/ollama/pull/3843",
"diff_url": "https://github.com/ollama/ollama/pull/3843.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3843.patch",
"merged_at": "2024-05-07T16:53:09"
} | Correct the kubernetes terminology and explain the steps for testing. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3843/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/539 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/539/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/539/comments | https://api.github.com/repos/ollama/ollama/issues/539/events | https://github.com/ollama/ollama/issues/539 | 1,899,491,164 | I_kwDOJ0Z1Ps5xN-9c | 539 | docs need updated for langchainjs example | {
"login": "dprosper",
"id": 11874942,
"node_id": "MDQ6VXNlcjExODc0OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11874942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dprosper",
"html_url": "https://github.com/dprosper",
"followers_url": "https://api.github.com/users/dpr... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396210,
... | closed | false | null | [] | null | 3 | 2023-09-16T16:28:16 | 2023-12-24T21:40:42 | 2023-12-24T21:40:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | `docs/tutorials/langchainjs.md`
1. Missing an `await` in
```js
const loader = new CheerioWebBaseLoader("https://en.wikipedia.org/wiki/2023_Hawaii_wildfires");
const data = await loader.load();
```
Without it, you reach `.splitDocuments` without any data.
2. The cheerio module needs to be installed, i.e. ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/539/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1618 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1618/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1618/comments | https://api.github.com/repos/ollama/ollama/issues/1618/events | https://github.com/ollama/ollama/issues/1618 | 2,049,515,334 | I_kwDOJ0Z1Ps56KR9G | 1,618 | WSL: Error: timed out waiting for llama runner to start | {
"login": "otavio-silva",
"id": 22914610,
"node_id": "MDQ6VXNlcjIyOTE0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22914610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/otavio-silva",
"html_url": "https://github.com/otavio-silva",
"followers_url": "https://api.github.c... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 10 | 2023-12-19T22:11:12 | 2024-01-27T19:33:01 | 2024-01-27T19:33:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Description
When trying to run the [dolphin-mixtral](https://ollama.ai/library/dolphin-mixtral) model in a container, I get an `Error: timed out waiting for llama runner to start` response.
# Steps to reproduce
```cmd
> podman run --device nvidia.com/gpu=all --security-opt label=disable --detach --volume .ollam... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1618/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1618/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/534/comments | https://api.github.com/repos/ollama/ollama/issues/534/events | https://github.com/ollama/ollama/pull/534 | 1,898,765,299 | PR_kwDOJ0Z1Ps5adYKQ | 534 | linux installer script | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-15T16:42:29 | 2023-09-22T16:01:04 | 2023-09-22T16:01:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/534",
"html_url": "https://github.com/ollama/ollama/pull/534",
"diff_url": "https://github.com/ollama/ollama/pull/534.diff",
"patch_url": "https://github.com/ollama/ollama/pull/534.patch",
"merged_at": "2023-09-22T16:01:03"
} | Add an install script to the website which downloads the appropriate Linux package, unpacks it, adds it to the /usr/local/bin directory, and adds ollama as a start-up service.
Before our next release we will:
- Do linux amd64 and aarch64 builds with CUDA enabled.
- Add them to the pre-release of the jmorgan/oll... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/534/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3654 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3654/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3654/comments | https://api.github.com/repos/ollama/ollama/issues/3654/events | https://github.com/ollama/ollama/pull/3654 | 2,243,971,528 | PR_kwDOJ0Z1Ps5ssT55 | 3,654 | chore: add dependabot | {
"login": "hutchic",
"id": 697188,
"node_id": "MDQ6VXNlcjY5NzE4OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/697188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hutchic",
"html_url": "https://github.com/hutchic",
"followers_url": "https://api.github.com/users/hutchic/fo... | [] | closed | false | null | [] | null | 1 | 2024-04-15T15:26:39 | 2024-11-21T10:05:44 | 2024-11-21T10:05:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3654",
"html_url": "https://github.com/ollama/ollama/pull/3654",
"diff_url": "https://github.com/ollama/ollama/pull/3654.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3654.patch",
"merged_at": null
} | related to https://github.com/ollama/ollama/pull/3627, though I don't recall if dependabot will catch a submodule if it's a directory down? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3654/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4581 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4581/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4581/comments | https://api.github.com/repos/ollama/ollama/issues/4581/events | https://github.com/ollama/ollama/pull/4581 | 2,311,674,262 | PR_kwDOJ0Z1Ps5wQ_Y- | 4,581 | DO NOT MERGE - testing CI | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-05-22T23:50:21 | 2024-05-22T23:59:27 | 2024-05-22T23:59:25 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4581",
"html_url": "https://github.com/ollama/ollama/pull/4581",
"diff_url": "https://github.com/ollama/ollama/pull/4581.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4581.patch",
"merged_at": null
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4581/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2897/comments | https://api.github.com/repos/ollama/ollama/issues/2897/events | https://github.com/ollama/ollama/issues/2897 | 2,165,438,352 | I_kwDOJ0Z1Ps6BEfeQ | 2,897 | Windows preview CUDA 5.2 support | {
"login": "lyczak",
"id": 4741907,
"node_id": "MDQ6VXNlcjQ3NDE5MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4741907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lyczak",
"html_url": "https://github.com/lyczak",
"followers_url": "https://api.github.com/users/lyczak/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-03-03T16:24:17 | 2024-03-21T17:36:20 | 2024-03-21T11:38:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello folks,
I've been trying to get started with the Windows preview version of ollama. However, I'm currently encountering an issue where my GTX 970 is not detected by the software. I've tried updating drivers and updating Windows to no avail. I assume this is related to the old CUDA version (CUDA 5.2) as mentioned in...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2897/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/68 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/68/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/68/comments | https://api.github.com/repos/ollama/ollama/issues/68/events | https://github.com/ollama/ollama/issues/68 | 1,799,725,660 | I_kwDOJ0Z1Ps5rRaJc | 68 | add an `/api/tokens` endpoint which returns the amount of token in a given input | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2023-07-11T20:44:24 | 2023-09-10T03:38:06 | 2023-09-10T03:38:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is useful for determining where to trim an input in the client | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/68/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/68/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1570 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1570/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1570/comments | https://api.github.com/repos/ollama/ollama/issues/1570/events | https://github.com/ollama/ollama/pull/1570 | 2,045,221,967 | PR_kwDOJ0Z1Ps5iMCWn | 1,570 | Fix omitempty typo | {
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2023-12-17T13:35:11 | 2023-12-18T22:01:05 | 2023-12-18T22:01:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1570",
"html_url": "https://github.com/ollama/ollama/pull/1570",
"diff_url": "https://github.com/ollama/ollama/pull/1570.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1570.patch",
"merged_at": null
} | - Removes space typo before omitempty | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1570/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7095 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7095/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7095/comments | https://api.github.com/repos/ollama/ollama/issues/7095/events | https://github.com/ollama/ollama/issues/7095 | 2,565,155,539 | I_kwDOJ0Z1Ps6Y5SrT | 7,095 | how to show the tray menu in dev mode? | {
"login": "hichemfantar",
"id": 34947993,
"node_id": "MDQ6VXNlcjM0OTQ3OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/34947993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hichemfantar",
"html_url": "https://github.com/hichemfantar",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-10-03T23:34:44 | 2024-10-03T23:41:31 | 2024-10-03T23:41:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm trying to work on the tray menu, but it doesn't seem to show in dev mode.
Is there a special flag or other steps I need to take to get the dev build to register a tray icon? | {
"login": "hichemfantar",
"id": 34947993,
"node_id": "MDQ6VXNlcjM0OTQ3OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/34947993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hichemfantar",
"html_url": "https://github.com/hichemfantar",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7095/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3735 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3735/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3735/comments | https://api.github.com/repos/ollama/ollama/issues/3735/events | https://github.com/ollama/ollama/issues/3735 | 2,251,067,134 | I_kwDOJ0Z1Ps6GLI7- | 3,735 | Can you support llama3? | {
"login": "ICLXL",
"id": 30027321,
"node_id": "MDQ6VXNlcjMwMDI3MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/30027321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ICLXL",
"html_url": "https://github.com/ICLXL",
"followers_url": "https://api.github.com/users/ICLXL/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 27 | 2024-04-18T16:06:36 | 2024-04-19T21:40:53 | 2024-04-19T21:40:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://llama.meta.com/llama3/ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3735/reactions",
"total_count": 20,
"+1": 20,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3735/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4857 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4857/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4857/comments | https://api.github.com/repos/ollama/ollama/issues/4857/events | https://github.com/ollama/ollama/pull/4857 | 2,338,328,567 | PR_kwDOJ0Z1Ps5xr3F2 | 4,857 | feat: initial steps allow image embeddings | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/fo... | [] | open | false | null | [] | null | 1 | 2024-06-06T13:53:37 | 2024-10-07T19:20:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4857",
"html_url": "https://github.com/ollama/ollama/pull/4857",
"diff_url": "https://github.com/ollama/ollama/pull/4857.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4857.patch",
"merged_at": null
} | I would like to start the discussion about the possibility of adding an image as an input to the Embedding route.
I guess it would also need the llama.cpp runner to be able to handle it, which I still need to look into.
Also, I would like to know if `batch` embeddings is something you would consider adding.
This P... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4857/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5262 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5262/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5262/comments | https://api.github.com/repos/ollama/ollama/issues/5262/events | https://github.com/ollama/ollama/issues/5262 | 2,371,429,144 | I_kwDOJ0Z1Ps6NWSMY | 5,262 | api 404 403 | {
"login": "vc815",
"id": 31056384,
"node_id": "MDQ6VXNlcjMxMDU2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31056384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vc815",
"html_url": "https://github.com/vc815",
"followers_url": "https://api.github.com/users/vc815/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-06-25T01:37:16 | 2024-06-25T01:56:02 | 2024-06-25T01:56:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama run sqlcoder` runs successfully, but the API endpoints http://localhost:11434/api/chat and http://localhost:11434/api/generate cannot be accessed and return 403/404 errors on Windows 11. What is the reason for this? Port 11434 has been added to the firewall
### OS
... | {
"login": "vc815",
"id": 31056384,
"node_id": "MDQ6VXNlcjMxMDU2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31056384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vc815",
"html_url": "https://github.com/vc815",
"followers_url": "https://api.github.com/users/vc815/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5262/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1511 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1511/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1511/comments | https://api.github.com/repos/ollama/ollama/issues/1511/events | https://github.com/ollama/ollama/issues/1511 | 2,040,462,951 | I_kwDOJ0Z1Ps55nv5n | 1,511 | Submitting an image as the first argument in a prompt to a LLaVA model results in `Unknown command` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-12-13T21:02:08 | 2023-12-21T18:21:01 | 2023-12-21T18:21:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
ollama run llava
>>> /Users/bruce/Downloads/Ollama_christmas_background.png hi
Unknown command '/Users/bruce/Downloads/Ollama_christmas_background.png'. Type /? for help
>>>
``` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1511/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7212/comments | https://api.github.com/repos/ollama/ollama/issues/7212/events | https://github.com/ollama/ollama/pull/7212 | 2,589,551,127 | PR_kwDOJ0Z1Ps5-txh5 | 7,212 | Better support for AMD multi-GPU on linux | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-10-15T18:28:12 | 2024-10-26T21:04:17 | 2024-10-26T21:04:14 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7212",
"html_url": "https://github.com/ollama/ollama/pull/7212",
"diff_url": "https://github.com/ollama/ollama/pull/7212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7212.patch",
"merged_at": "2024-10-26T21:04:14"
} | This resolves a number of problems related to AMD multi-GPU setups on linux.
The numeric IDs used by rocm are not the same as the numeric IDs exposed in sysfs although the ordering is consistent. We have to count up from the first valid gfx (major/minor/patch with non-zero values) we find starting at zero.
There... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7212/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7212/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7566 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7566/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7566/comments | https://api.github.com/repos/ollama/ollama/issues/7566/events | https://github.com/ollama/ollama/issues/7566 | 2,642,721,062 | I_kwDOJ0Z1Ps6dhLkm | 7,566 | Having trouble with vram using priority | {
"login": "morika546",
"id": 187546431,
"node_id": "U_kgDOCy27Pw",
"avatar_url": "https://avatars.githubusercontent.com/u/187546431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morika546",
"html_url": "https://github.com/morika546",
"followers_url": "https://api.github.com/users/morika... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-11-08T03:22:42 | 2024-11-08T09:19:34 | 2024-11-08T09:19:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My GPU is a 6800 XT with 16 GB of VRAM, and my RAM is 64 GB. When I run a model, it always uses a lot of shared VRAM while there is still plenty of free dedicated VRAM. For example, when I run a 13B model, it needs 12 GB of VRAM, but only 4 GB runs in dedicated VRAM and the remaining 8 GB runs in shared VRA...
"login": "morika546",
"id": 187546431,
"node_id": "U_kgDOCy27Pw",
"avatar_url": "https://avatars.githubusercontent.com/u/187546431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morika546",
"html_url": "https://github.com/morika546",
"followers_url": "https://api.github.com/users/morika... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7566/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1134 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1134/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1134/comments | https://api.github.com/repos/ollama/ollama/issues/1134/events | https://github.com/ollama/ollama/pull/1134 | 1,993,837,800 | PR_kwDOJ0Z1Ps5feCUJ | 1,134 | progress bar | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 9 | 2023-11-15T01:06:58 | 2023-11-17T22:03:36 | 2023-11-17T22:03:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1134",
"html_url": "https://github.com/ollama/ollama/pull/1134",
"diff_url": "https://github.com/ollama/ollama/pull/1134.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1134.patch",
"merged_at": "2023-11-17T22:03:35"
} | Example:
```
$ ollama pull mistral
pulling manifest (1s)
downloading 6ae280299950 100.0% [===============================... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1134/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6306 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6306/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6306/comments | https://api.github.com/repos/ollama/ollama/issues/6306/events | https://github.com/ollama/ollama/issues/6306 | 2,459,393,297 | I_kwDOJ0Z1Ps6Sl10R | 6,306 | Running ollama on island device with no Internet connection | {
"login": "whatdhack",
"id": 12969966,
"node_id": "MDQ6VXNlcjEyOTY5OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/12969966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whatdhack",
"html_url": "https://github.com/whatdhack",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 11 | 2024-08-11T03:01:43 | 2024-08-11T19:26:47 | 2024-08-11T17:33:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Trying to run ollama on island devices with no Internet connection. Getting the following error message.
`Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest":` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6306/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6982 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6982/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6982/comments | https://api.github.com/repos/ollama/ollama/issues/6982/events | https://github.com/ollama/ollama/issues/6982 | 2,551,230,311 | I_kwDOJ0Z1Ps6YEK9n | 6,982 | Mistral-NeMo-Minitron-8B-Base/Chat | {
"login": "Axenide",
"id": 66109459,
"node_id": "MDQ6VXNlcjY2MTA5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/66109459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Axenide",
"html_url": "https://github.com/Axenide",
"followers_url": "https://api.github.com/users/Axenid... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 4 | 2024-09-26T18:05:35 | 2024-11-17T15:33:13 | 2024-11-17T15:33:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Mistral-NeMo-12B has great capabilities, but it doesn't fit in my GPU so I have to offload part of it to the CPU and RAM, which makes it really slow. 8B models work great though, so I think it would be a great adition to have this model.
Here is the base model:
https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6982/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6982/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3596 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3596/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3596/comments | https://api.github.com/repos/ollama/ollama/issues/3596/events | https://github.com/ollama/ollama/pull/3596 | 2,237,770,624 | PR_kwDOJ0Z1Ps5sXOTG | 3,596 | api: fill up API documentation | {
"login": "eliben",
"id": 1130906,
"node_id": "MDQ6VXNlcjExMzA5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliben",
"html_url": "https://github.com/eliben",
"followers_url": "https://api.github.com/users/eliben/foll... | [] | closed | false | null | [] | null | 3 | 2024-04-11T13:47:32 | 2024-05-07T23:27:47 | 2024-05-07T23:27:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3596",
"html_url": "https://github.com/ollama/ollama/pull/3596",
"diff_url": "https://github.com/ollama/ollama/pull/3596.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3596.patch",
"merged_at": "2024-05-07T23:27:47"
} | Followup for #2878
Now that the documentation is more complete, mention it in the README. Once a new version of ollama is tagged, pkg.go.dev will pick up the documentation comments and display everything on the linked page.
Updates #2840 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3596/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3596/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8222 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8222/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8222/comments | https://api.github.com/repos/ollama/ollama/issues/8222/events | https://github.com/ollama/ollama/issues/8222 | 2,756,724,761 | I_kwDOJ0Z1Ps6kUEgZ | 8,222 | Change ToolFunction->Parameters to json.RawMessage like in the Format property | {
"login": "jerbob92",
"id": 1312921,
"node_id": "MDQ6VXNlcjEzMTI5MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1312921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerbob92",
"html_url": "https://github.com/jerbob92",
"followers_url": "https://api.github.com/users/jerbo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 0 | 2024-12-23T21:15:28 | 2024-12-24T19:22:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to use Tools in the `ChatRequest`, but the `Parameters` property in `ToolFunction` does not allow me to put my full JSON schema in there, while the `Format` property does.
I would suggest changing the type of `Parameters` to `json.RawMessage`, just like `Format`.
I'm current... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8222/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2852 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2852/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2852/comments | https://api.github.com/repos/ollama/ollama/issues/2852/events | https://github.com/ollama/ollama/issues/2852 | 2,162,517,774 | I_kwDOJ0Z1Ps6A5WcO | 2,852 | Missing example files in examples/python-chat-app. | {
"login": "caol64",
"id": 6183265,
"node_id": "MDQ6VXNlcjYxODMyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6183265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caol64",
"html_url": "https://github.com/caol64",
"followers_url": "https://api.github.com/users/caol64/foll... | [] | closed | false | null | [] | null | 2 | 2024-03-01T03:24:46 | 2024-03-12T00:17:10 | 2024-03-12T00:17:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2852/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4471 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4471/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4471/comments | https://api.github.com/repos/ollama/ollama/issues/4471/events | https://github.com/ollama/ollama/issues/4471 | 2,299,859,960 | I_kwDOJ0Z1Ps6JFRP4 | 4,471 | Warning: client version is different than Ollama version in Linux | {
"login": "sohang3112",
"id": 31966963,
"node_id": "MDQ6VXNlcjMxOTY2OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/31966963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sohang3112",
"html_url": "https://github.com/sohang3112",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-16T09:31:20 | 2025-01-30T08:31:35 | 2024-05-16T19:52:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I had installed Ollama version 0.1.31 previously in Linux. Now after upgrading, ollama says its version is different than client version. How to fix this so that both are upgraded to 0.1.38?
```console
$ curl -fsSL https://ollama.com/install.sh | sh # command to upgrade ollama
$ ollama ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4471/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/138 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/138/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/138/comments | https://api.github.com/repos/ollama/ollama/issues/138/events | https://github.com/ollama/ollama/issues/138 | 1,814,256,339 | I_kwDOJ0Z1Ps5sI1rT | 138 | Progress spinner not quite right on WSL | {
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.gith... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-07-20T15:23:10 | 2023-08-30T21:35:44 | 2023-08-30T21:35:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/138/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3589 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3589/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3589/comments | https://api.github.com/repos/ollama/ollama/issues/3589/events | https://github.com/ollama/ollama/pull/3589 | 2,237,107,619 | PR_kwDOJ0Z1Ps5sU8bI | 3,589 | types/model: remove (*Digest).Scan and Digest.Value | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-04-11T07:37:18 | 2024-04-11T07:37:27 | 2024-04-11T07:37:27 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3589",
"html_url": "https://github.com/ollama/ollama/pull/3589",
"diff_url": "https://github.com/ollama/ollama/pull/3589.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3589.patch",
"merged_at": "2024-04-11T07:37:27"
} | null | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3589/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1337/comments | https://api.github.com/repos/ollama/ollama/issues/1337/events | https://github.com/ollama/ollama/issues/1337 | 2,019,796,865 | I_kwDOJ0Z1Ps54Y6eB | 1,337 | API interface works fine, CLI returns non-descriptive error presumably due to proxy with Docker install | {
"login": "mlewis1973",
"id": 2373703,
"node_id": "MDQ6VXNlcjIzNzM3MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2373703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mlewis1973",
"html_url": "https://github.com/mlewis1973",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 8 | 2023-12-01T00:41:20 | 2024-08-23T21:05:09 | 2024-08-23T21:05:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Docker image installed on multiple Linux and Mac systems, both with and without GPUs.
Local proxy settings set in daemon.json as well as passed to docker with -e and --env
API interface works fine, but CLI generates error for
'ollama run llama2'
'ollama list'
'ollama pull mistral'
ollama --version and --help do ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1337/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/174 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/174/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/174/comments | https://api.github.com/repos/ollama/ollama/issues/174/events | https://github.com/ollama/ollama/pull/174 | 1,816,639,351 | PR_kwDOJ0Z1Ps5WJFmW | 174 | allocate a large enough tokens slice | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-22T06:05:38 | 2023-07-24T15:49:52 | 2023-07-24T15:22:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/174",
"html_url": "https://github.com/ollama/ollama/pull/174",
"diff_url": "https://github.com/ollama/ollama/pull/174.diff",
"patch_url": "https://github.com/ollama/ollama/pull/174.patch",
"merged_at": "2023-07-24T15:22:51"
} | cherry picked from #102 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/174/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7321 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7321/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7321/comments | https://api.github.com/repos/ollama/ollama/issues/7321/events | https://github.com/ollama/ollama/issues/7321 | 2,606,044,576 | I_kwDOJ0Z1Ps6bVRWg | 7,321 | Support loading the same model more than once | {
"login": "jfwreinhardt",
"id": 185949500,
"node_id": "U_kgDOCxVdPA",
"avatar_url": "https://avatars.githubusercontent.com/u/185949500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jfwreinhardt",
"html_url": "https://github.com/jfwreinhardt",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-10-22T17:20:53 | 2024-10-22T17:36:45 | 2024-10-22T17:36:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Are there any plans to support loading the same model more than once?
On a CUDA based system with multiple GPUs, I have observed that performance decreases for each new concurrent prompt against the same model. To put it another way, we see higher tokens/s for sending a prompt to four different models concurrentl... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7321/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5662 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5662/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5662/comments | https://api.github.com/repos/ollama/ollama/issues/5662/events | https://github.com/ollama/ollama/pull/5662 | 2,406,659,729 | PR_kwDOJ0Z1Ps51R2xs | 5,662 | fix system prompt | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-13T03:22:40 | 2024-07-13T04:04:46 | 2024-07-13T04:04:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5662",
"html_url": "https://github.com/ollama/ollama/pull/5662",
"diff_url": "https://github.com/ollama/ollama/pull/5662.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5662.patch",
"merged_at": "2024-07-13T04:04:44"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5662/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7764 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7764/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7764/comments | https://api.github.com/repos/ollama/ollama/issues/7764/events | https://github.com/ollama/ollama/pull/7764 | 2,676,671,410 | PR_kwDOJ0Z1Ps6CkDRu | 7,764 | Fix minor typo in import.md | {
"login": "iamrohitanshu",
"id": 85547195,
"node_id": "MDQ6VXNlcjg1NTQ3MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/85547195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamrohitanshu",
"html_url": "https://github.com/iamrohitanshu",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2024-11-20T17:36:08 | 2024-11-20T17:57:32 | 2024-11-20T17:57:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7764",
"html_url": "https://github.com/ollama/ollama/pull/7764",
"diff_url": "https://github.com/ollama/ollama/pull/7764.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7764.patch",
"merged_at": "2024-11-20T17:57:32"
} | changed 'containg' to 'containing'. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7764/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5386 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5386/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5386/comments | https://api.github.com/repos/ollama/ollama/issues/5386/events | https://github.com/ollama/ollama/issues/5386 | 2,381,960,350 | I_kwDOJ0Z1Ps6N-dSe | 5,386 | Add environment variable for "read only" mode | {
"login": "steren",
"id": 360895,
"node_id": "MDQ6VXNlcjM2MDg5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/360895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steren",
"html_url": "https://github.com/steren",
"followers_url": "https://api.github.com/users/steren/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2024-06-29T21:07:53 | 2024-06-29T21:43:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running as a server, Ollama currently exposes all Ollama features as [an API](https://github.com/ollama/ollama/blob/main/docs/api.md)
If run as a public API, API maintainers might want to only expose Ollama's generation and model listing capabilities, so that their endpoint is "read only", and not other endpoin... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5386/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8663 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8663/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8663/comments | https://api.github.com/repos/ollama/ollama/issues/8663/events | https://github.com/ollama/ollama/pull/8663 | 2,818,399,336 | PR_kwDOJ0Z1Ps6JXytP | 8,663 | Update README.md Adding DeepSeek to the table of models | {
"login": "teymuur",
"id": 64795612,
"node_id": "MDQ6VXNlcjY0Nzk1NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/64795612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teymuur",
"html_url": "https://github.com/teymuur",
"followers_url": "https://api.github.com/users/teymuu... | [] | open | false | null | [] | null | 0 | 2025-01-29T14:34:27 | 2025-01-30T05:12:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8663",
"html_url": "https://github.com/ollama/ollama/pull/8663",
"diff_url": "https://github.com/ollama/ollama/pull/8663.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8663.patch",
"merged_at": null
} | This is just a minor change, I added DeepSeek R1 to the model library table. Only changed `README.md`. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8663/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8039 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8039/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8039/comments | https://api.github.com/repos/ollama/ollama/issues/8039/events | https://github.com/ollama/ollama/pull/8039 | 2,731,883,281 | PR_kwDOJ0Z1Ps6EzP5A | 8,039 | win: builtin arm runner | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-12-11T05:43:29 | 2024-12-11T16:32:17 | 2024-12-11T16:32:13 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8039",
"html_url": "https://github.com/ollama/ollama/pull/8039",
"diff_url": "https://github.com/ollama/ollama/pull/8039.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8039.patch",
"merged_at": "2024-12-11T16:32:13"
} | The new build embeds the arm runner in the
main binary, so there is no longer a lib/ollama directory.
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8039/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1875 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1875/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1875/comments | https://api.github.com/repos/ollama/ollama/issues/1875/events | https://github.com/ollama/ollama/pull/1875 | 2,073,103,979 | PR_kwDOJ0Z1Ps5jng0P | 1,875 | Calculate overhead based number of gpu devices | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-09T20:24:26 | 2024-01-09T20:53:34 | 2024-01-09T20:53:33 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1875",
"html_url": "https://github.com/ollama/ollama/pull/1875",
"diff_url": "https://github.com/ollama/ollama/pull/1875.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1875.patch",
"merged_at": "2024-01-09T20:53:33"
} | The CUDA memory allocated for overhead is placed on a single GPU | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1875/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4472 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4472/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4472/comments | https://api.github.com/repos/ollama/ollama/issues/4472/events | https://github.com/ollama/ollama/issues/4472 | 2,299,878,366 | I_kwDOJ0Z1Ps6JFVve | 4,472 | `llama3-chatqa` always returns `Empty reponse` | {
"login": "pnmartinez",
"id": 29891887,
"node_id": "MDQ6VXNlcjI5ODkxODg3",
"avatar_url": "https://avatars.githubusercontent.com/u/29891887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pnmartinez",
"html_url": "https://github.com/pnmartinez",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-05-16T09:39:59 | 2024-05-21T16:56:41 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Problem
I've been toying around with RAG using `ollama` and `llama-index`.
The results I am getting with `llama3 8b` are not that good, so I was happy to see `llama3-chatqa` being added in `v0.1.35`.
However, I always get "Empty response" using `llama3-chatqa`. Is there something I am missing?
## Code
```py
from... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4472/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/717 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/717/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/717/comments | https://api.github.com/repos/ollama/ollama/issues/717/events | https://github.com/ollama/ollama/issues/717 | 1,930,346,259 | I_kwDOJ0Z1Ps5zDr8T | 717 | Change system model when running as a service | {
"login": "wifiuk",
"id": 3785545,
"node_id": "MDQ6VXNlcjM3ODU1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3785545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wifiuk",
"html_url": "https://github.com/wifiuk",
"followers_url": "https://api.github.com/users/wifiuk/foll... | [] | closed | false | null | [] | null | 4 | 2023-10-06T14:51:44 | 2023-10-06T15:50:37 | 2023-10-06T14:56:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If I originally was messing around with Llama 7b and got it running as a background service, how do I change the model that it uses? | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/717/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2282 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2282/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2282/comments | https://api.github.com/repos/ollama/ollama/issues/2282/events | https://github.com/ollama/ollama/issues/2282 | 2,108,819,234 | I_kwDOJ0Z1Ps59sgci | 2,282 | Slow response with concurrent requests | {
"login": "oxaronick",
"id": 86964206,
"node_id": "MDQ6VXNlcjg2OTY0MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/86964206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oxaronick",
"html_url": "https://github.com/oxaronick",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 6 | 2024-01-30T21:31:26 | 2024-03-11T22:34:39 | 2024-03-11T22:32:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama is great. It makes deploying LLMs easy. However, I have an issue with sending two requests to Ollama within a second or so of each other.
When I do this, Ollama usually responds to one of the requests fine, but the CPU usage jumps by at least 100% and the other request doesn't get a response. Sometimes it wil... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2282/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4582/comments | https://api.github.com/repos/ollama/ollama/issues/4582/events | https://github.com/ollama/ollama/issues/4582 | 2,311,815,287 | I_kwDOJ0Z1Ps6Jy4B3 | 4,582 | Add Alpaca to 'Community Integrations' | {
"login": "Jeffser",
"id": 69224322,
"node_id": "MDQ6VXNlcjY5MjI0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/69224322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeffser",
"html_url": "https://github.com/Jeffser",
"followers_url": "https://api.github.com/users/Jeffse... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-23T02:37:48 | 2024-12-25T04:25:14 | 2024-12-25T04:25:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I've been working on this app called [Alpaca](https://github.com/Jeffser/Alpaca), it's an Ollama client that uses GTK and Adwaita, it's meant for Gnome users, it comes with an integrated Ollama instance.
It's available only on [Flathub](https://flathub.org/apps/com.jeffser.Alpaca) right now. | {
"login": "Jeffser",
"id": 69224322,
"node_id": "MDQ6VXNlcjY5MjI0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/69224322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeffser",
"html_url": "https://github.com/Jeffser",
"followers_url": "https://api.github.com/users/Jeffse... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4582/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4582/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8654 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8654/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8654/comments | https://api.github.com/repos/ollama/ollama/issues/8654/events | https://github.com/ollama/ollama/issues/8654 | 2,817,986,286 | I_kwDOJ0Z1Ps6n9w7u | 8,654 | Available memory check should be disabled when mmap is in use | {
"login": "outis151",
"id": 11805613,
"node_id": "MDQ6VXNlcjExODA1NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11805613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/outis151",
"html_url": "https://github.com/outis151",
"followers_url": "https://api.github.com/users/out... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-29T11:48:38 | 2025-01-29T13:07:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
With mmap enabled, a model does not need to fit in system RAM, so the associated check should be disabled in this case.
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8654/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8654/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/593 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/593/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/593/comments | https://api.github.com/repos/ollama/ollama/issues/593/events | https://github.com/ollama/ollama/pull/593 | 1,912,249,816 | PR_kwDOJ0Z1Ps5bKh_h | 593 | update install.sh | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-25T20:40:09 | 2023-09-25T21:09:41 | 2023-09-25T21:09:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/593",
"html_url": "https://github.com/ollama/ollama/pull/593",
"diff_url": "https://github.com/ollama/ollama/pull/593.diff",
"patch_url": "https://github.com/ollama/ollama/pull/593.patch",
"merged_at": "2023-09-25T21:09:40"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/593/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3069 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3069/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3069/comments | https://api.github.com/repos/ollama/ollama/issues/3069/events | https://github.com/ollama/ollama/pull/3069 | 2,180,404,073 | PR_kwDOJ0Z1Ps5pUBQT | 3,069 | use `-trimpath` when building releases | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-11T22:56:50 | 2024-03-11T22:58:47 | 2024-03-11T22:58:47 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3069",
"html_url": "https://github.com/ollama/ollama/pull/3069",
"diff_url": "https://github.com/ollama/ollama/pull/3069.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3069.patch",
"merged_at": "2024-03-11T22:58:47"
} | Fixes https://github.com/ollama/ollama/issues/2958 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3069/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4709 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4709/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4709/comments | https://api.github.com/repos/ollama/ollama/issues/4709/events | https://github.com/ollama/ollama/issues/4709 | 2,324,189,173 | I_kwDOJ0Z1Ps6KiE_1 | 4,709 | Code models like codestral should have a lower temperature | {
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 6 | 2024-05-29T20:25:45 | 2024-07-03T12:44:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This makes the code more correct | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4709/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2543 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2543/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2543/comments | https://api.github.com/repos/ollama/ollama/issues/2543/events | https://github.com/ollama/ollama/issues/2543 | 2,139,017,719 | I_kwDOJ0Z1Ps5_ftH3 | 2,543 | Ollama crashes on Llava on windows after passing image path OOM | {
"login": "jkfnc",
"id": 56741357,
"node_id": "MDQ6VXNlcjU2NzQxMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/56741357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkfnc",
"html_url": "https://github.com/jkfnc",
"followers_url": "https://api.github.com/users/jkfnc/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 7 | 2024-02-16T17:04:10 | 2024-05-10T01:10:13 | 2024-05-10T01:10:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama crashes when tried with this for llava
What's in this image? C:\Users\test\Downloads\pexels-oleksandr-p-321552.jpg | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2543/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6249 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6249/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6249/comments | https://api.github.com/repos/ollama/ollama/issues/6249/events | https://github.com/ollama/ollama/issues/6249 | 2,454,650,191 | I_kwDOJ0Z1Ps6STv1P | 6,249 | ollama run llama3.1 command outputs nonsense | {
"login": "erfan-khalaji",
"id": 54494671,
"node_id": "MDQ6VXNlcjU0NDk0Njcx",
"avatar_url": "https://avatars.githubusercontent.com/u/54494671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erfan-khalaji",
"html_url": "https://github.com/erfan-khalaji",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-08-08T01:20:42 | 2024-08-09T05:30:35 | 2024-08-08T22:15:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After installing Ollama on macOS, I attempted to run the model using the `ollama run llama3.1` command. However, when I tried running the model by inputting "hello," it returned what appeared to be random ASCII characters, which didn't make sense. I then used `ollama pull llama2` and `ollama pul... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6249/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7000 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7000/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7000/comments | https://api.github.com/repos/ollama/ollama/issues/7000/events | https://github.com/ollama/ollama/issues/7000 | 2,552,479,448 | I_kwDOJ0Z1Ps6YI77Y | 7,000 | Respect the Access-Control-Allow-Private-Network in Chrome | {
"login": "PaulKinlan",
"id": 45510,
"node_id": "MDQ6VXNlcjQ1NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/45510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulKinlan",
"html_url": "https://github.com/PaulKinlan",
"followers_url": "https://api.github.com/users/PaulKin... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-09-27T09:40:32 | 2024-09-27T09:40:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm testing ollama from an environment hosted on repl.it running in the browser. I have a local version of ollama running with the `OLLAMA_HOST=*, https://48a38c67-3eda-41cf-804b-e04fba963d55-00-14tthqngapcgy.worf.replit.dev` (other variations result in the same error).
It looks like the new ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7000/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/7000/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7760 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7760/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7760/comments | https://api.github.com/repos/ollama/ollama/issues/7760/events | https://github.com/ollama/ollama/issues/7760 | 2,675,873,873 | I_kwDOJ0Z1Ps6ffphR | 7,760 | qwen2.5-coder isn't utilizing the GPU | {
"login": "Novido",
"id": 4237670,
"node_id": "MDQ6VXNlcjQyMzc2NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4237670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Novido",
"html_url": "https://github.com/Novido",
"followers_url": "https://api.github.com/users/Novido/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-11-20T13:22:24 | 2024-12-14T16:41:12 | 2024-12-14T16:41:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When running a query with qwen2.5 (32b) it only uses the CPU for some reason. I can switch to another model (llama, phi, gemma) and they all utilize the GPU.
Reproduce:
1. Run Docker in an Ubuntu container on a standalone server
2. Install Ollama and Open-Webui
3. Download models qwen2.5-... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7760/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6887 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6887/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6887/comments | https://api.github.com/repos/ollama/ollama/issues/6887/events | https://github.com/ollama/ollama/issues/6887 | 2,537,641,050 | I_kwDOJ0Z1Ps6XQVRa | 6,887 | `temperature` for reader-lm should be 0 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-09-20T01:29:27 | 2024-10-05T21:40:15 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [reader-lm](https://ollama.com/library/reader-lm) converts HTML to Markdown but with the default temperature, it hallucinates content: https://github.com/ollama/ollama/issues/6875. Setting `temperature` to zero appears to resolve this. This would be nice to have in the model config in the ollama library. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6887/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6887/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5284 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5284/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5284/comments | https://api.github.com/repos/ollama/ollama/issues/5284/events | https://github.com/ollama/ollama/pull/5284 | 2,373,688,590 | PR_kwDOJ0Z1Ps5zjQIb | 5,284 | tools | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 11 | 2024-06-25T21:26:54 | 2024-07-25T09:50:57 | 2024-07-16T01:03:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5284",
"html_url": "https://github.com/ollama/ollama/pull/5284",
"diff_url": "https://github.com/ollama/ollama/pull/5284.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5284.patch",
"merged_at": "2024-07-16T01:03:38"
} | ```
curl -s 127.0.0.1:11434/api/chat -d '{
"model": "mike/mistral",
"messages": [
{
"role": "user",
      "content": "What'\''s the weather like today in Paris?"
},
{
"role": "assistant",
"tool_calls": [
{
"id": "89a1e453-0bce-4de3-a456-c54bed09c520",
... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5284/reactions",
"total_count": 59,
"+1": 30,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 18,
"rocket": 11,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5284/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3793 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3793/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3793/comments | https://api.github.com/repos/ollama/ollama/issues/3793/events | https://github.com/ollama/ollama/issues/3793 | 2,254,948,247 | I_kwDOJ0Z1Ps6GZ8eX | 3,793 | Can`t get correct response via API if the content has Chinese words | {
"login": "wei-z-git",
"id": 32572815,
"node_id": "MDQ6VXNlcjMyNTcyODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/32572815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wei-z-git",
"html_url": "https://github.com/wei-z-git",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-21T07:43:39 | 2024-06-04T22:33:47 | 2024-06-04T22:33:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I try to get a response with Chinese words using the API, like
## Use API
### In Chinese
Request
```shell
$ curl http://localhost:11434/api/chat -d '{ "model": "llama3", "messages":[{"role":"user","content": "为什么天空是蓝色的"}] ,"stream": false}'
```
It seems the llama didn't get my poin... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3793/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8010 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8010/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8010/comments | https://api.github.com/repos/ollama/ollama/issues/8010/events | https://github.com/ollama/ollama/issues/8010 | 2,726,545,757 | I_kwDOJ0Z1Ps6ig8ld | 8,010 | Llama 3.3 still has metadata from Llama 3.1 | {
"login": "SamuelHafner",
"id": 45936995,
"node_id": "MDQ6VXNlcjQ1OTM2OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/45936995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelHafner",
"html_url": "https://github.com/SamuelHafner",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-12-09T09:55:59 | 2024-12-11T09:33:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
The model of LLAMA3.3 still has metadata from LLAMA3.1

btw. What type of Quantization are you using?
Thank you
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8010/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2889 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2889/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2889/comments | https://api.github.com/repos/ollama/ollama/issues/2889/events | https://github.com/ollama/ollama/issues/2889 | 2,165,127,969 | I_kwDOJ0Z1Ps6BDTsh | 2,889 | Windows CUDA OOM GTX 1650 switching models between mistral and gemma | {
"login": "qianjun1985",
"id": 65411571,
"node_id": "MDQ6VXNlcjY1NDExNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/65411571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qianjun1985",
"html_url": "https://github.com/qianjun1985",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 6 | 2024-03-03T03:53:31 | 2024-05-18T03:04:45 | 2024-05-18T03:04:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use an AI translator program that can use ollama to load local Llms, at first it worked well with one model (mistral), but after I downloaded another, memma 7b, both models failed to work. The UI of that translator program shows error information as follows:
Failed to call API, error sending request for url (... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2889/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4534/comments | https://api.github.com/repos/ollama/ollama/issues/4534/events | https://github.com/ollama/ollama/issues/4534 | 2,305,150,640 | I_kwDOJ0Z1Ps6JZc6w | 4,534 | Ctrl+Backspace doesn't delete full words in `ollama run ...` mode | {
"login": "DeflateAwning",
"id": 11021263,
"node_id": "MDQ6VXNlcjExMDIxMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/11021263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeflateAwning",
"html_url": "https://github.com/DeflateAwning",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.... | null | 7 | 2024-05-20T05:53:17 | 2024-05-22T23:40:24 | 2024-05-22T05:49:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
In `ollama run <model>` mode, pressing Ctrl+Backspace should delete a whole word backwards.
To implement this, "listen" for "Ctrl+H" presses (as that's what Ctrl+Backspace is sent as)
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.38 | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4534/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3049 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3049/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3049/comments | https://api.github.com/repos/ollama/ollama/issues/3049/events | https://github.com/ollama/ollama/pull/3049 | 2,177,924,627 | PR_kwDOJ0Z1Ps5pLde3 | 3,049 | Disable execstack for amd libraries | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-03-10T22:02:37 | 2024-03-11T16:22:00 | 2024-03-11T16:21:59 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3049",
"html_url": "https://github.com/ollama/ollama/pull/3049",
"diff_url": "https://github.com/ollama/ollama/pull/3049.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3049.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3049/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6289 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6289/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6289/comments | https://api.github.com/repos/ollama/ollama/issues/6289/events | https://github.com/ollama/ollama/issues/6289 | 2,458,446,849 | I_kwDOJ0Z1Ps6SiOwB | 6,289 | some models crash on rocm (7900XT) | {
"login": "markg85",
"id": 49061,
"node_id": "MDQ6VXNlcjQ5MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/49061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markg85",
"html_url": "https://github.com/markg85",
"followers_url": "https://api.github.com/users/markg85/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 12 | 2024-08-09T18:21:38 | 2024-10-23T13:23:28 | 2024-10-23T13:23:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I was trying to run the (new) embedding example:
```
curl http://10.0.3.22:11434/api/embed -d '{
"model": "all-minilm",
"input": ["Why is the sky blue?", "Why is the grass green?"]
}'
```
Which triggered a crash (I did pull the model first). Note that it crashes for some models but ... | {
"login": "markg85",
"id": 49061,
"node_id": "MDQ6VXNlcjQ5MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/49061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markg85",
"html_url": "https://github.com/markg85",
"followers_url": "https://api.github.com/users/markg85/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6289/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6289/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1658 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1658/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1658/comments | https://api.github.com/repos/ollama/ollama/issues/1658/events | https://github.com/ollama/ollama/issues/1658 | 2,052,629,556 | I_kwDOJ0Z1Ps56WKQ0 | 1,658 | Feature request: delete partially downloaded model | {
"login": "kokizzu",
"id": 1061610,
"node_id": "MDQ6VXNlcjEwNjE2MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1061610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kokizzu",
"html_url": "https://github.com/kokizzu",
"followers_url": "https://api.github.com/users/kokizzu/... | [] | closed | false | null | [] | null | 2 | 2023-12-21T15:27:21 | 2023-12-21T19:24:58 | 2023-12-21T19:19:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | So i was downloading some model, and apparently it took like 26GB of disk space XD
so I cancelled it midway.
I tried `ollama rm dolphin-mixtral`, but it shows `Error: model 'dolphin-mixtral' not found`.
It would be nice if there were a command to remove a partially downloaded model. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1658/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5539 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5539/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5539/comments | https://api.github.com/repos/ollama/ollama/issues/5539/events | https://github.com/ollama/ollama/issues/5539 | 2,394,886,691 | I_kwDOJ0Z1Ps6OvxIj | 5,539 | can't embedding PDF file in Korean | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-07-08T07:38:39 | 2024-07-09T08:47:43 | 2024-07-09T08:47:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to do RAG by embedding a PDF file in Korean, but the encoding seems to be broken. When saved to the vector DB, corrupted strings are stored.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48 | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5539/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8456 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8456/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8456/comments | https://api.github.com/repos/ollama/ollama/issues/8456/events | https://github.com/ollama/ollama/issues/8456 | 2,792,306,559 | I_kwDOJ0Z1Ps6mbzd_ | 8,456 | ollama create fails for GGUF files with unaligned tensors | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-16T10:24:14 | 2025-01-16T13:04:57 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
$ ollama show --modelfile minicpm-v > Modelfile
$ ollama create minicpm-v:test
gathering model components
copying file sha256:262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0 100%
copying file sha256:f8a805e9e62085805c69c427287acefc284932eb4abfe6e1b1ce431d27e2f4e0 100%
par... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8456/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6990 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6990/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6990/comments | https://api.github.com/repos/ollama/ollama/issues/6990/events | https://github.com/ollama/ollama/issues/6990 | 2,551,601,831 | I_kwDOJ0Z1Ps6YFlqn | 6,990 | Unrecognized import path "gorgonia.org/vecf64" | {
"login": "opacicmarko",
"id": 98588282,
"node_id": "U_kgDOBeBWeg",
"avatar_url": "https://avatars.githubusercontent.com/u/98588282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/opacicmarko",
"html_url": "https://github.com/opacicmarko",
"followers_url": "https://api.github.com/users/op... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-26T22:04:05 | 2024-09-30T21:04:23 | 2024-09-30T21:04:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried to build the project from source, but running `go build .` fails with the following errors:
```
go: downloading gorgonia.org/vecf32 v0.9.0
go: downloading gorgonia.org/vecf64 v0.9.0
../../go/pkg/mod/github.com/pdevine/tensor@v0.0.0-20240510204454-f88f4562727c/internal/execution/gener... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6990/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7543 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7543/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7543/comments | https://api.github.com/repos/ollama/ollama/issues/7543/events | https://github.com/ollama/ollama/issues/7543 | 2,640,173,480 | I_kwDOJ0Z1Ps6dXdmo | 7,543 | Please add qwen2-vl-7b | {
"login": "bingbing6",
"id": 51957370,
"node_id": "MDQ6VXNlcjUxOTU3Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/51957370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bingbing6",
"html_url": "https://github.com/bingbing6",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-11-07T07:46:51 | 2024-11-07T22:15:20 | 2024-11-07T22:15:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please add qwen2-vl-7b | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7543/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6845 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6845/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6845/comments | https://api.github.com/repos/ollama/ollama/issues/6845/events | https://github.com/ollama/ollama/pull/6845 | 2,532,179,326 | PR_kwDOJ0Z1Ps570V3F | 6,845 | llama: fix race in parallel make | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-09-17T21:19:07 | 2024-09-23T19:04:02 | 2024-09-23T19:03:54 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6845",
"html_url": "https://github.com/ollama/ollama/pull/6845",
"diff_url": "https://github.com/ollama/ollama/pull/6845.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6845.patch",
"merged_at": null
} | Ensure the cleanup step completes before starting to build targets | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6845/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5082 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5082/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5082/comments | https://api.github.com/repos/ollama/ollama/issues/5082/events | https://github.com/ollama/ollama/issues/5082 | 2,355,836,344 | I_kwDOJ0Z1Ps6MazW4 | 5,082 | `ollama list` shows empty list of models | {
"login": "DoLife",
"id": 67223389,
"node_id": "MDQ6VXNlcjY3MjIzMzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/67223389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DoLife",
"html_url": "https://github.com/DoLife",
"followers_url": "https://api.github.com/users/DoLife/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 7 | 2024-06-16T15:21:30 | 2024-07-12T19:25:14 | 2024-07-12T19:25:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
My models no longer load.
When I run `ollama list` it gives me a blank list, but all the models are still in the directories.
See Images, it was working correctly a few days ago.

I st... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8235/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8235/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3172 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3172/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3172/comments | https://api.github.com/repos/ollama/ollama/issues/3172/events | https://github.com/ollama/ollama/issues/3172 | 2,189,586,448 | I_kwDOJ0Z1Ps6CgnAQ | 3,172 | Allow to choose a preferred variant (an AMD GPU/an NIVIDIA GPU/CPU) when running a model | {
"login": "Inokinoki",
"id": 8311300,
"node_id": "MDQ6VXNlcjgzMTEzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8311300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Inokinoki",
"html_url": "https://github.com/Inokinoki",
"followers_url": "https://api.github.com/users/In... | [
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 12 | 2024-03-15T22:55:41 | 2024-05-04T22:07:26 | 2024-05-04T22:07:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I have both NVIDIA and AMD cards in one PC. Both `nvml.dll` and `amdhip64.dll` are available on Windows.
I saw in `gpu/gpu.go` that Ollama tries to detect NVIDIA first and will not try AMD if it finds NVIDIA.
### How should we solve this?
Could it be possible to add an arg to indicate the... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3172/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3172/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4273 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4273/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4273/comments | https://api.github.com/repos/ollama/ollama/issues/4273/events | https://github.com/ollama/ollama/issues/4273 | 2,286,772,593 | I_kwDOJ0Z1Ps6ITWFx | 4,273 | API using | {
"login": "w1757876747",
"id": 38978960,
"node_id": "MDQ6VXNlcjM4OTc4OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/38978960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/w1757876747",
"html_url": "https://github.com/w1757876747",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-09T02:25:40 | 2024-06-04T22:31:45 | 2024-06-04T22:31:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a problem. When I call the project's api/pull, the background displays the following error and no streaming response (such as status) is returned, even though I want the real-time download progress. May I ask why I can download normally if I use the run model command in the service`tim... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4273/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/271 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/271/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/271/comments | https://api.github.com/repos/ollama/ollama/issues/271/events | https://github.com/ollama/ollama/pull/271 | 1,835,653,035 | PR_kwDOJ0Z1Ps5XJBFb | 271 | README.md: Add info about `serve`, logging, and env vars (+ some icons) | {
"login": "drhino",
"id": 2538708,
"node_id": "MDQ6VXNlcjI1Mzg3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2538708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drhino",
"html_url": "https://github.com/drhino",
"followers_url": "https://api.github.com/users/drhino/foll... | [] | closed | false | null | [] | null | 3 | 2023-08-03T19:56:15 | 2023-10-24T22:17:14 | 2023-10-24T22:17:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/271",
"html_url": "https://github.com/ollama/ollama/pull/271",
"diff_url": "https://github.com/ollama/ollama/pull/271.diff",
"patch_url": "https://github.com/ollama/ollama/pull/271.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/271/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1000 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1000/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1000/comments | https://api.github.com/repos/ollama/ollama/issues/1000/events | https://github.com/ollama/ollama/pull/1000 | 1,977,357,341 | PR_kwDOJ0Z1Ps5emOFP | 1,000 | Added clear command | {
"login": "tommyneu",
"id": 57959550,
"node_id": "MDQ6VXNlcjU3OTU5NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57959550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommyneu",
"html_url": "https://github.com/tommyneu",
"followers_url": "https://api.github.com/users/tom... | [] | closed | false | null | [] | null | 1 | 2023-11-04T14:05:13 | 2023-11-09T00:50:48 | 2023-11-09T00:49:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1000",
"html_url": "https://github.com/ollama/ollama/pull/1000",
"diff_url": "https://github.com/ollama/ollama/pull/1000.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1000.patch",
"merged_at": null
} | Added clear command for ease of use
Closes #989 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1000/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2382 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2382/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2382/comments | https://api.github.com/repos/ollama/ollama/issues/2382/events | https://github.com/ollama/ollama/issues/2382 | 2,122,399,323 | I_kwDOJ0Z1Ps5-gT5b | 2,382 | Some LLM are not really open source | {
"login": "Edmartt",
"id": 47486245,
"node_id": "MDQ6VXNlcjQ3NDg2MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/47486245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edmartt",
"html_url": "https://github.com/Edmartt",
"followers_url": "https://api.github.com/users/Edmart... | [] | closed | false | null | [] | null | 2 | 2024-02-07T07:49:59 | 2024-02-21T00:05:18 | 2024-02-21T00:05:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Not because a company says their LLM are open source is truth:

[https://spectrum.ieee.org/open-source-llm-not-open](https://spectrum.ieee.org/open-source-llm-not-open) | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2382/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2382/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4766 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4766/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4766/comments | https://api.github.com/repos/ollama/ollama/issues/4766/events | https://github.com/ollama/ollama/pull/4766 | 2,329,126,774 | PR_kwDOJ0Z1Ps5xMgp1 | 4,766 | add embed model command and fix question invoke | {
"login": "shoebham",
"id": 25881429,
"node_id": "MDQ6VXNlcjI1ODgxNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/25881429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoebham",
"html_url": "https://github.com/shoebham",
"followers_url": "https://api.github.com/users/sho... | [] | closed | false | null | [] | null | 0 | 2024-06-01T12:51:34 | 2024-06-04T05:21:18 | 2024-06-04T05:20:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4766",
"html_url": "https://github.com/ollama/ollama/pull/4766",
"diff_url": "https://github.com/ollama/ollama/pull/4766.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4766.patch",
"merged_at": "2024-06-04T05:20:48"
} | I was following the tutorial but I couldn't run it because the embedding model was not available, so I had to download it using `ollama pull nomic-embed-text`. Also, the code where we ask the LLM the question was not printing anything, so I fixed that too. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4766/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7818 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7818/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7818/comments | https://api.github.com/repos/ollama/ollama/issues/7818/events | https://github.com/ollama/ollama/pull/7818 | 2,687,948,631 | PR_kwDOJ0Z1Ps6C8TWp | 7,818 | Update README.md | {
"login": "adarshM84",
"id": 95633830,
"node_id": "U_kgDOBbNBpg",
"avatar_url": "https://avatars.githubusercontent.com/u/95633830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adarshM84",
"html_url": "https://github.com/adarshM84",
"followers_url": "https://api.github.com/users/adarshM8... | [] | closed | false | null | [] | null | 0 | 2024-11-24T15:57:00 | 2024-11-24T18:32:24 | 2024-11-24T18:32:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7818",
"html_url": "https://github.com/ollama/ollama/pull/7818",
"diff_url": "https://github.com/ollama/ollama/pull/7818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7818.patch",
"merged_at": "2024-11-24T18:32:24"
} | Description added for link | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7818/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4841/comments | https://api.github.com/repos/ollama/ollama/issues/4841/events | https://github.com/ollama/ollama/pull/4841 | 2,336,501,019 | PR_kwDOJ0Z1Ps5xlkeF | 4,841 | Remove False Time Fields | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-06-05T17:56:28 | 2024-06-05T18:02:23 | 2024-06-05T18:02:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4841",
"html_url": "https://github.com/ollama/ollama/pull/4841",
"diff_url": "https://github.com/ollama/ollama/pull/4841.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4841.patch",
"merged_at": null
} | /api/tags was returning "0001-01-01T00:00:00Z" for 'expires_at'
/api/ps was returning "0001-01-01T00:00:00Z" for 'modified_at'
Removes these fields from the respective endpoints.
Added an assertion in the test case, and tested locally with both curl and the CLI. | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4841/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1492 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1492/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1492/comments | https://api.github.com/repos/ollama/ollama/issues/1492/events | https://github.com/ollama/ollama/issues/1492 | 2,038,707,483 | I_kwDOJ0Z1Ps55hDUb | 1,492 | 7b model on Colab: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8001: out of memory | {
"login": "nnWhisperer",
"id": 13225349,
"node_id": "MDQ6VXNlcjEzMjI1MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13225349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nnWhisperer",
"html_url": "https://github.com/nnWhisperer",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2023-12-12T23:41:00 | 2024-01-14T22:13:02 | 2024-01-14T22:13:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
On a Google Colab instance with 50 GB RAM and a 16 GB VRAM T4 (the problem persisted on a V100 instance), I install Ollama as follows:
```
!sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
!sudo chmod +x /usr/bin/ollama
!ollama serve
```
In the terminal I run:
`ollama run yarn-mistral:7b-128k`
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1492/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6419 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6419/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6419/comments | https://api.github.com/repos/ollama/ollama/issues/6419/events | https://github.com/ollama/ollama/issues/6419 | 2,473,004,850 | I_kwDOJ0Z1Ps6TZw8y | 6,419 | Ollama Tools - random results without providing tools in second call | {
"login": "jprogramista",
"id": 240528,
"node_id": "MDQ6VXNlcjI0MDUyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/240528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jprogramista",
"html_url": "https://github.com/jprogramista",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-19T10:22:41 | 2024-08-22T03:58:57 | 2024-08-22T01:27:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Not always, but very often this script: https://github.com/ollama/ollama-python/blob/main/examples/tools/main.py gives me random answers (I have modified `'NYC-LAX': {'departure': '08:00 AM', 'arrival': '11:30 PM', 'duration': '15h 30m'}` and I expect to get 15h 30m from tools, but get 5h 30m or... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6419/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/533 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/533/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/533/comments | https://api.github.com/repos/ollama/ollama/issues/533/events | https://github.com/ollama/ollama/issues/533 | 1,898,422,416 | I_kwDOJ0Z1Ps5xJ6CQ | 533 | GPU Support for Ollama on Microsoft Windows | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 13 | 2023-09-15T13:19:06 | 2024-03-17T03:00:44 | 2023-10-26T00:34:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
To run Ollama from source with an Nvidia GPU on Microsoft Windows, there is currently no setup description, and the Ollama source code has some TODOs as well; is that right?
Here some thoughts.
Setup
-
1. NVidia drivers
1A. Software drivers: https://www.nvidia.com/download/index.aspx
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/533/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/533/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3282 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3282/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3282/comments | https://api.github.com/repos/ollama/ollama/issues/3282/events | https://github.com/ollama/ollama/pull/3282 | 2,199,761,390 | PR_kwDOJ0Z1Ps5qV3K5 | 3,282 | Add docs for GPU selection and nvidia uvm workaround | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-21T10:20:29 | 2024-07-24T15:14:33 | 2024-03-24T18:15:04 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3282",
"html_url": "https://github.com/ollama/ollama/pull/3282",
"diff_url": "https://github.com/ollama/ollama/pull/3282.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3282.patch",
"merged_at": "2024-03-24T18:15:04"
} | Fixes #1813
Fixes #2934
Fixes #2718 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3282/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3282/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3506 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3506/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3506/comments | https://api.github.com/repos/ollama/ollama/issues/3506/events | https://github.com/ollama/ollama/pull/3506 | 2,228,366,846 | PR_kwDOJ0Z1Ps5r3PBG | 3,506 | cgo quantize | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-05T16:00:19 | 2024-04-09T19:32:54 | 2024-04-09T19:32:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3506",
"html_url": "https://github.com/ollama/ollama/pull/3506",
"diff_url": "https://github.com/ollama/ollama/pull/3506.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3506.patch",
"merged_at": "2024-04-09T19:32:53"
} | revive #307
This will _only_ quantize a converted model. Quantizing an arbitrary fp16/fp32 model will be a follow-up. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3506/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8124 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8124/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8124/comments | https://api.github.com/repos/ollama/ollama/issues/8124/events | https://github.com/ollama/ollama/pull/8124 | 2,743,484,124 | PR_kwDOJ0Z1Ps6Fao2N | 8,124 | grammar: introduce new grammar package | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | open | false | null | [] | null | 0 | 2024-12-16T22:00:57 | 2024-12-18T00:10:59 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8124",
"html_url": "https://github.com/ollama/ollama/pull/8124",
"diff_url": "https://github.com/ollama/ollama/pull/8124.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8124.patch",
"merged_at": null
} | This package provides a way to convert JSON schemas to equivalent EBNF. It is intended to be a replacement for llama.cpp's schema_to_grammar.
This is still an early version and does not yet support all JSON schema features. The to-do list includes:
- [ ] minimum/maximum constraints on integer types
- [ ] minLengt... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8124/timeline | null | null | true |