url stringlengths 51 54 | repository_url stringclasses 1
value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2
values | locked bool 1
class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4
values | sub_issues_summary dict | active_lock_reason null | draft bool 2
classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4
values | is_pull_request bool 2
classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/459 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/459/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/459/comments | https://api.github.com/repos/ollama/ollama/issues/459/events | https://github.com/ollama/ollama/pull/459 | 1,878,857,374 | PR_kwDOJ0Z1Ps5ZaZOq | 459 | generate binary dependencies based on `GOARCH` on macos | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2023-09-02T21:54:52 | 2023-09-05T16:54:00 | 2023-09-05T16:53:58 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/459",
"html_url": "https://github.com/ollama/ollama/pull/459",
"diff_url": "https://github.com/ollama/ollama/pull/459.diff",
"patch_url": "https://github.com/ollama/ollama/pull/459.patch",
"merged_at": "2023-09-05T16:53:58"
} | This will allow building a universal binary (or cross compiling for `amd64`) on `arm64` Macs:
```
% GOARCH=amd64 go generate ./...
% GOARCH=amd64 go build .
% file ./ollama
./ollama: Mach-O 64-bit executable x86_64
```
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/459/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/956 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/956/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/956/comments | https://api.github.com/repos/ollama/ollama/issues/956/events | https://github.com/ollama/ollama/pull/956 | 1,971,226,406 | PR_kwDOJ0Z1Ps5eRgt9 | 956 | docs: clarify and clean up API docs | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-10-31T20:12:17 | 2023-11-01T04:43:12 | 2023-11-01T04:43:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/956",
"html_url": "https://github.com/ollama/ollama/pull/956",
"diff_url": "https://github.com/ollama/ollama/pull/956.diff",
"patch_url": "https://github.com/ollama/ollama/pull/956.patch",
"merged_at": "2023-11-01T04:43:11"
} | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/956/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3407 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3407/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3407/comments | https://api.github.com/repos/ollama/ollama/issues/3407/events | https://github.com/ollama/ollama/issues/3407 | 2,215,453,262 | I_kwDOJ0Z1Ps6EDSJO | 3,407 | Ollama errors when using json mode with `command-r` model | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-03-29T14:11:01 | 2024-04-19T15:41:37 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using json mode with command-r, Ollama will hang.
https://github.com/ggerganov/llama.cpp/issues/6112
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3407/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3407/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1456 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1456/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1456/comments | https://api.github.com/repos/ollama/ollama/issues/1456/events | https://github.com/ollama/ollama/issues/1456 | 2,034,460,321 | I_kwDOJ0Z1Ps55Q2ah | 1,456 | Wrong font in the model sorting dropdown menu in the model page for Safari | {
"login": "ggetv",
"id": 36490494,
"node_id": "MDQ6VXNlcjM2NDkwNDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/36490494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggetv",
"html_url": "https://github.com/ggetv",
"followers_url": "https://api.github.com/users/ggetv/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw... | closed | false | null | [] | null | 3 | 2023-12-10T17:25:06 | 2024-04-08T21:46:22 | 2024-04-08T21:46:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just noticed a small issue on the model page (https://ollama.ai/library?sort=newest), in Safari browser it somehow showed the wrong font, other browsers (Chrome, Firefox) do not have this issue. I am using Safari Version 17.1 (19616.2.9.11.7).
<img width="1344" alt="ollama-font-issue-safari" src="https://github.com/... | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1456/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1456/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6490 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6490/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6490/comments | https://api.github.com/repos/ollama/ollama/issues/6490/events | https://github.com/ollama/ollama/issues/6490 | 2,484,748,005 | I_kwDOJ0Z1Ps6UGj7l | 6,490 | WISPER | {
"login": "DewiarQR",
"id": 64423698,
"node_id": "MDQ6VXNlcjY0NDIzNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64423698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DewiarQR",
"html_url": "https://github.com/DewiarQR",
"followers_url": "https://api.github.com/users/Dew... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-08-24T17:41:33 | 2024-08-27T21:23:24 | 2024-08-27T21:23:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello. You have both regular LLM models and those that support digital vision. Now it remains to add models for transcription and voice synthesis... and it would be possible to solve any problems on your system.
Can we expect such models as
https://huggingface.co/Systran/faster-distil-whisper-large-v3
Or maybe i... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6490/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6490/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1853 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1853/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1853/comments | https://api.github.com/repos/ollama/ollama/issues/1853/events | https://github.com/ollama/ollama/issues/1853 | 2,070,234,198 | I_kwDOJ0Z1Ps57ZURW | 1,853 | phi not working | {
"login": "morandalex",
"id": 9484568,
"node_id": "MDQ6VXNlcjk0ODQ1Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9484568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morandalex",
"html_url": "https://github.com/morandalex",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 8 | 2024-01-08T11:14:02 | 2024-03-11T19:33:29 | 2024-03-11T19:33:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
ollama run phi
>>> hello
Hello, how can I assist you today?
>>> create a js function
Error: Post "http://127.0.0.1:11434/api/generate": EOF
```
mistral is working on my machine. but phi not working , what is happening ? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1853/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4532 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4532/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4532/comments | https://api.github.com/repos/ollama/ollama/issues/4532/events | https://github.com/ollama/ollama/issues/4532 | 2,305,022,439 | I_kwDOJ0Z1Ps6JY9nn | 4,532 | codegemma 2b v1.1 q8 and q5_1 have incorrect model names | {
"login": "mroark1m",
"id": 708826,
"node_id": "MDQ6VXNlcjcwODgyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/708826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mroark1m",
"html_url": "https://github.com/mroark1m",
"followers_url": "https://api.github.com/users/mroark1... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-20T03:59:44 | 2024-05-21T16:18:23 | 2024-05-21T16:18:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
At these two urls:
https://ollama.com/library/codegemma:2b-code-v1.1-q8_0, I see "quantization Q5_1" in the list of files
https://ollama.com/library/codegemma:2b-code-v1.1-q5_1 I see "quantization Q8_1" in the list of files
'
ggml_metal... | {
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.gith... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/149/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4136 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4136/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4136/comments | https://api.github.com/repos/ollama/ollama/issues/4136/events | https://github.com/ollama/ollama/issues/4136 | 2,278,289,915 | I_kwDOJ0Z1Ps6Hy_H7 | 4,136 | [Feature] Rapid Modelfile Updates | {
"login": "Arcitec",
"id": 38923130,
"node_id": "MDQ6VXNlcjM4OTIzMTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/38923130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arcitec",
"html_url": "https://github.com/Arcitec",
"followers_url": "https://api.github.com/users/Arcite... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-05-03T19:22:17 | 2024-05-28T04:08:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama is an absolutely brilliant project. Thank you everyone involved in creating it!
I've been working on local models, and noticed one weakness of Ollama. The initial import obviously has to take some time to convert the GGUF model weights into Ollama's native format. But after that, I need to tweak parameters, s... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4136/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4136/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6998 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6998/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6998/comments | https://api.github.com/repos/ollama/ollama/issues/6998/events | https://github.com/ollama/ollama/issues/6998 | 2,552,293,931 | I_kwDOJ0Z1Ps6YIOor | 6,998 | Is your llama3.2 models working? | {
"login": "dhandhalyabhavik",
"id": 86345824,
"node_id": "MDQ6VXNlcjg2MzQ1ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/86345824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhandhalyabhavik",
"html_url": "https://github.com/dhandhalyabhavik",
"followers_url": "https://... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-09-27T08:07:36 | 2024-09-28T23:03:55 | 2024-09-28T23:03:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llm_load_tensors: ggml ctx size = 0.13 MiB
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 255, got 254
llama_load_model_from_file: exception loading model
terminate called after throwing an instance of 'std::runtime_error'
what(): done_ge... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6998/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2339 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2339/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2339/comments | https://api.github.com/repos/ollama/ollama/issues/2339/events | https://github.com/ollama/ollama/issues/2339 | 2,116,670,213 | I_kwDOJ0Z1Ps5-KdMF | 2,339 | `/api/generate` hangs after about 100 requests | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-02-03T20:36:58 | 2024-02-27T13:40:00 | 2024-02-12T16:10:17 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2339/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2339/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3388 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3388/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3388/comments | https://api.github.com/repos/ollama/ollama/issues/3388/events | https://github.com/ollama/ollama/issues/3388 | 2,213,425,294 | I_kwDOJ0Z1Ps6D7jCO | 3,388 | Stanford Alpaca | {
"login": "xvbingbing",
"id": 45099689,
"node_id": "MDQ6VXNlcjQ1MDk5Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45099689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xvbingbing",
"html_url": "https://github.com/xvbingbing",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 0 | 2024-03-28T14:46:34 | 2024-03-28T14:46:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
Can Alpaca model be added? Thank you so much!!
https://github.com/tatsu-lab/stanford_alpaca | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3388/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2854 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2854/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2854/comments | https://api.github.com/repos/ollama/ollama/issues/2854/events | https://github.com/ollama/ollama/issues/2854 | 2,162,653,469 | I_kwDOJ0Z1Ps6A53kd | 2,854 | Starting Ollama a second time on Windows 11 creates another instance | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-03-01T05:44:34 | 2024-09-24T15:53:08 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2854/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/8121 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8121/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8121/comments | https://api.github.com/repos/ollama/ollama/issues/8121/events | https://github.com/ollama/ollama/pull/8121 | 2,743,108,550 | PR_kwDOJ0Z1Ps6FZUoI | 8,121 | cuda: adjust variant based on detected runners | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | open | false | null | [] | null | 3 | 2024-12-16T18:35:34 | 2025-01-07T08:47:58 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8121",
"html_url": "https://github.com/ollama/ollama/pull/8121",
"diff_url": "https://github.com/ollama/ollama/pull/8121.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8121.patch",
"merged_at": null
} | When building from source, or using downstream packaging systems, multiple versions of cuda runners may not be present. This adjusts the discoverry logic to only use versioned variants if they are detected at runtime. It also adds a new warning message in the log if no cuda runners are present but cuda GPUs are detec... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8121/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6360 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6360/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6360/comments | https://api.github.com/repos/ollama/ollama/issues/6360/events | https://github.com/ollama/ollama/issues/6360 | 2,465,916,058 | I_kwDOJ0Z1Ps6S-uSa | 6,360 | Detected as a virus by windows defender during/after update | {
"login": "mcDandy",
"id": 18588943,
"node_id": "MDQ6VXNlcjE4NTg4OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18588943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcDandy",
"html_url": "https://github.com/mcDandy",
"followers_url": "https://api.github.com/users/mcDand... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-14T13:55:40 | 2024-08-14T14:02:20 | 2024-08-14T14:02:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Windows Defender thinks it is some sort of command and control malware.


### OS
Windows
### GPU
Nvidia
##... | {
"login": "mcDandy",
"id": 18588943,
"node_id": "MDQ6VXNlcjE4NTg4OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/18588943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcDandy",
"html_url": "https://github.com/mcDandy",
"followers_url": "https://api.github.com/users/mcDand... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6360/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4119 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4119/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4119/comments | https://api.github.com/repos/ollama/ollama/issues/4119/events | https://github.com/ollama/ollama/pull/4119 | 2,277,056,340 | PR_kwDOJ0Z1Ps5ucNZK | 4,119 | 👌 IMPROVE: add portkey library for production tools | {
"login": "Saif-Shines",
"id": 17451294,
"node_id": "MDQ6VXNlcjE3NDUxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/17451294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saif-Shines",
"html_url": "https://github.com/Saif-Shines",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-05-03T07:02:22 | 2024-05-06T17:25:23 | 2024-05-06T17:25:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4119",
"html_url": "https://github.com/ollama/ollama/pull/4119",
"diff_url": "https://github.com/ollama/ollama/pull/4119.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4119.patch",
"merged_at": "2024-05-06T17:25:23"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4119/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/113 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/113/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/113/comments | https://api.github.com/repos/ollama/ollama/issues/113/events | https://github.com/ollama/ollama/issues/113 | 1,811,133,459 | I_kwDOJ0Z1Ps5r87QT | 113 | Some users do not have /usr/local/bin | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-07-19T04:46:38 | 2023-07-19T08:25:46 | 2023-07-19T08:25:45 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Need to check /usr/local/bin is created to add ollama into path | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/113/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6423 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6423/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6423/comments | https://api.github.com/repos/ollama/ollama/issues/6423/events | https://github.com/ollama/ollama/issues/6423 | 2,473,791,158 | I_kwDOJ0Z1Ps6Tcw62 | 6,423 | Running on MI300X via Docker fails with `rocBLAS error: Could not initialize Tensile host: No devices found` | {
"login": "peterschmidt85",
"id": 54148038,
"node_id": "MDQ6VXNlcjU0MTQ4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/54148038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peterschmidt85",
"html_url": "https://github.com/peterschmidt85",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-08-19T16:55:17 | 2024-09-10T15:51:08 | 2024-09-03T23:20:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Steps to reproduce:**
1. Run a Docker container using `ollama/ollama:rocm` on a machine with a single MI300X
2. Inside the container, run `ollama run llama3.1:70B`
**Actual behaviour:**
```
rocBLAS error: Could not initialize Tensile host: No devices found
```
The full output:
```
ollama serve &
[1] 6... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6423/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4279 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4279/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4279/comments | https://api.github.com/repos/ollama/ollama/issues/4279/events | https://github.com/ollama/ollama/issues/4279 | 2,287,137,613 | I_kwDOJ0Z1Ps6IUvNN | 4,279 | Ollama reports an error when running the AI model using GPU | {
"login": "xiaomo0925",
"id": 112382100,
"node_id": "U_kgDOBrLQlA",
"avatar_url": "https://avatars.githubusercontent.com/u/112382100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaomo0925",
"html_url": "https://github.com/xiaomo0925",
"followers_url": "https://api.github.com/users/xia... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-09T08:06:18 | 2024-05-21T23:55:54 | 2024-05-21T23:55:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use the command :
’docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama‘
the following error will occur,
“docker:Error response from daemon:failed to create task for container:failed to create shim task:OCIruntime create failed:runc create f... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4279/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/693 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/693/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/693/comments | https://api.github.com/repos/ollama/ollama/issues/693/events | https://github.com/ollama/ollama/issues/693 | 1,924,959,439 | I_kwDOJ0Z1Ps5yvIzP | 693 | Mario System Prompt not working with Mistral Model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2023-10-03T21:12:35 | 2023-11-02T03:00:38 | 2023-11-02T03:00:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In this example: https://github.com/jmorganca/ollama/blob/main/examples/mario/readme.md
I can successfully create a new model with mistral, however it seems to ignore the system prompt. I tried various system prompts but seems to revert back to Mistral.
Here is my results:
>ollama run MARIO
> who r u?
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/693/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8189 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8189/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8189/comments | https://api.github.com/repos/ollama/ollama/issues/8189/events | https://github.com/ollama/ollama/pull/8189 | 2,753,538,512 | PR_kwDOJ0Z1Ps6F9Qdp | 8,189 | remove tutorials.md which pointed to removed tutorials | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-12-20T22:01:46 | 2024-12-20T22:04:22 | 2024-12-20T22:04:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8189",
"html_url": "https://github.com/ollama/ollama/pull/8189",
"diff_url": "https://github.com/ollama/ollama/pull/8189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8189.patch",
"merged_at": "2024-12-20T22:04:20"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8189/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7854 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7854/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7854/comments | https://api.github.com/repos/ollama/ollama/issues/7854/events | https://github.com/ollama/ollama/issues/7854 | 2,697,262,990 | I_kwDOJ0Z1Ps6gxPeO | 7,854 | Different outputs for first and subsequent inferences after model load | {
"login": "akamaus",
"id": 58955,
"node_id": "MDQ6VXNlcjU4OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/58955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akamaus",
"html_url": "https://github.com/akamaus",
"followers_url": "https://api.github.com/users/akamaus/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-11-27T06:17:59 | 2024-11-27T19:11:41 | 2024-11-27T19:11:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The result I get just after model load into VRAM differ from subsequent ones. It's easily reproduced and consistent.
After issuing ollama clean, the first time I get A, and next times I get B. I tried several models (marco-o1 and qwen2.5 ) and both CPU (with num_gpu=0 option) and GPU inferen... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7854/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/680 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/680/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/680/comments | https://api.github.com/repos/ollama/ollama/issues/680/events | https://github.com/ollama/ollama/issues/680 | 1,922,700,473 | I_kwDOJ0Z1Ps5ymhS5 | 680 | Is there a way to change the download/run directory? | {
"login": "improvethings",
"id": 16601027,
"node_id": "MDQ6VXNlcjE2NjAxMDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/16601027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/improvethings",
"html_url": "https://github.com/improvethings",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 25 | 2023-10-02T20:58:02 | 2025-01-30T06:12:44 | 2023-12-04T19:42:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On Linux, I want to download/run it from a directory with more space than /usr/share/ | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/680/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/680/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1117 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1117/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1117/comments | https://api.github.com/repos/ollama/ollama/issues/1117/events | https://github.com/ollama/ollama/issues/1117 | 1,991,835,598 | I_kwDOJ0Z1Ps52uP_O | 1,117 | Change Default 11434 Port & fw question | {
"login": "jjsarf",
"id": 34278274,
"node_id": "MDQ6VXNlcjM0Mjc4Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/34278274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjsarf",
"html_url": "https://github.com/jjsarf",
"followers_url": "https://api.github.com/users/jjsarf/fo... | [] | closed | false | null | [] | null | 3 | 2023-11-14T02:03:06 | 2023-11-14T04:45:31 | 2023-11-14T02:55:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Does anyone know how to change Ollama's default port?
Also how do we allow other computers to hit the /generate api?
Thanks,
John | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1117/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7044 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7044/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7044/comments | https://api.github.com/repos/ollama/ollama/issues/7044/events | https://github.com/ollama/ollama/issues/7044 | 2,556,272,088 | I_kwDOJ0Z1Ps6YXZ3Y | 7,044 | Support detailed logs for each request | {
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-09-30T10:48:31 | 2024-12-14T17:10:27 | 2024-12-14T17:10:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi thanks for the library! In order to see what happens, it would be great to see detailed logs for each request. For example, not only the real string send into LLM, but also temperature, top_p, etc. It would be even greater if these could be output to a separate log or tracing service etc. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7044/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3930 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3930/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3930/comments | https://api.github.com/repos/ollama/ollama/issues/3930/events | https://github.com/ollama/ollama/issues/3930 | 2,264,929,569 | I_kwDOJ0Z1Ps6HABUh | 3,930 | GPU allocation lost after container idle period | {
"login": "hl-hok",
"id": 120292146,
"node_id": "U_kgDOByuDMg",
"avatar_url": "https://avatars.githubusercontent.com/u/120292146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hl-hok",
"html_url": "https://github.com/hl-hok",
"followers_url": "https://api.github.com/users/hl-hok/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 14 | 2024-04-26T04:18:38 | 2024-10-15T19:07:14 | 2024-05-31T21:21:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm experiencing an issue with Ollama where the Docker container fails to utilize the GPU unless I restart the container. This occurs when the container remains idle for an extended period (e.g., a day).
Initially, the GPU is configured correctly and allocated to the container. However, afte... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3930/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3930/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2212/comments | https://api.github.com/repos/ollama/ollama/issues/2212/events | https://github.com/ollama/ollama/pull/2212 | 2,102,744,598 | PR_kwDOJ0Z1Ps5lL_vK | 2,212 | fix build | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-26T19:04:39 | 2024-01-26T19:19:09 | 2024-01-26T19:19:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2212",
"html_url": "https://github.com/ollama/ollama/pull/2212",
"diff_url": "https://github.com/ollama/ollama/pull/2212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2212.patch",
"merged_at": "2024-01-26T19:19:08"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2212/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/316 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/316/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/316/comments | https://api.github.com/repos/ollama/ollama/issues/316/events | https://github.com/ollama/ollama/pull/316 | 1,845,012,518 | PR_kwDOJ0Z1Ps5XoVCp | 316 | fix a typo in the tweetwriter example Modelfile | {
"login": "soroushj",
"id": 4595459,
"node_id": "MDQ6VXNlcjQ1OTU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4595459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soroushj",
"html_url": "https://github.com/soroushj",
"followers_url": "https://api.github.com/users/sorou... | [] | closed | false | null | [] | null | 0 | 2023-08-10T11:44:23 | 2023-08-10T15:23:24 | 2023-08-10T14:19:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/316",
"html_url": "https://github.com/ollama/ollama/pull/316",
"diff_url": "https://github.com/ollama/ollama/pull/316.diff",
"patch_url": "https://github.com/ollama/ollama/pull/316.patch",
"merged_at": "2023-08-10T14:19:53"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/316/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7650 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7650/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7650/comments | https://api.github.com/repos/ollama/ollama/issues/7650/events | https://github.com/ollama/ollama/issues/7650 | 2,655,039,375 | I_kwDOJ0Z1Ps6eQK-P | 7,650 | AMD Radeon 780M GPU (Pop OS !) System 76 | {
"login": "ihgumilar",
"id": 49016400,
"node_id": "MDQ6VXNlcjQ5MDE2NDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/49016400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ihgumilar",
"html_url": "https://github.com/ihgumilar",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 37 | 2024-11-13T10:46:05 | 2024-11-14T19:32:25 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I would like to ask your help.
I am running Ollama with the following GPU, but it seems that it is not picking up my GPU. Is there any advice ?
AMD Ryzen™ 7 7840U processor.
When I **run ollama serve**, it gives me this error. Any advice ?
Thanks
```
2024/11/13 17:40:14 routes... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7650/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7650/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3566 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3566/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3566/comments | https://api.github.com/repos/ollama/ollama/issues/3566/events | https://github.com/ollama/ollama/pull/3566 | 2,234,479,985 | PR_kwDOJ0Z1Ps5sL_Ol | 3,566 | Handle very slow model loads | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-09T23:36:02 | 2024-04-09T23:53:52 | 2024-04-09T23:53:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3566",
"html_url": "https://github.com/ollama/ollama/pull/3566",
"diff_url": "https://github.com/ollama/ollama/pull/3566.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3566.patch",
"merged_at": "2024-04-09T23:53:49"
} | During testing, we're seeing some models take over 3 minutes. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3566/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4775 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4775/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4775/comments | https://api.github.com/repos/ollama/ollama/issues/4775/events | https://github.com/ollama/ollama/issues/4775 | 2,329,413,441 | I_kwDOJ0Z1Ps6K2AdB | 4,775 | Error: llama runner process has terminated: exit status 1 | {
"login": "BAK-HOME",
"id": 145625297,
"node_id": "U_kgDOCK4Q0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/145625297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BAK-HOME",
"html_url": "https://github.com/BAK-HOME",
"followers_url": "https://api.github.com/users/BAK-HOME/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-06-02T01:06:41 | 2024-11-05T23:15:16 | 2024-11-05T23:15:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have tried several versions before this problem occurs, I would like to ask who encountered such a problem, can you help solve it.
Error: llama runner process has terminated: exit status 1
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.40 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4775/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2522 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2522/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2522/comments | https://api.github.com/repos/ollama/ollama/issues/2522/events | https://github.com/ollama/ollama/issues/2522 | 2,137,445,173 | I_kwDOJ0Z1Ps5_ZtM1 | 2,522 | Clicking view logs menu item multiple times causes it to stop working on Ollama Windows preview | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-15T21:05:30 | 2024-02-17T01:23:38 | 2024-02-17T01:23:38 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
time=2024-02-15T21:04:25.135Z level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\\Users\\jeff\\AppData\\Local\\Ollama"
time=2024-02-15T21:04:32.644Z level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\\Users\\jeff\\AppData\\Local\\Ollama"
``` | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2522/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/734 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/734/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/734/comments | https://api.github.com/repos/ollama/ollama/issues/734/events | https://github.com/ollama/ollama/issues/734 | 1,931,662,834 | I_kwDOJ0Z1Ps5zItXy | 734 | Need an option with low memory of GPU | {
"login": "tacsotai",
"id": 80247372,
"node_id": "MDQ6VXNlcjgwMjQ3Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/80247372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tacsotai",
"html_url": "https://github.com/tacsotai",
"followers_url": "https://api.github.com/users/tac... | [] | closed | false | null | [] | null | 1 | 2023-10-08T06:09:57 | 2023-10-08T07:38:44 | 2023-10-08T07:38:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried your great program "ollama".
I was succeeded with CPU, but unfortunately my linux machine not have enough memory.
So, could you prepare an option with low memory of GPU ?
```
$ ollama serve
2023/10/08 06:05:12 images.go:996: total blobs: 17
2023/10/08 06:05:12 images.go:1003: total unused blobs removed:... | {
"login": "tacsotai",
"id": 80247372,
"node_id": "MDQ6VXNlcjgwMjQ3Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/80247372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tacsotai",
"html_url": "https://github.com/tacsotai",
"followers_url": "https://api.github.com/users/tac... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/734/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5031 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5031/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5031/comments | https://api.github.com/repos/ollama/ollama/issues/5031/events | https://github.com/ollama/ollama/pull/5031 | 2,351,967,559 | PR_kwDOJ0Z1Ps5yaGcf | 5,031 | fix: multibyte utf16 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-06-13T20:08:58 | 2024-06-13T20:14:56 | 2024-06-13T20:14:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5031",
"html_url": "https://github.com/ollama/ollama/pull/5031",
"diff_url": "https://github.com/ollama/ollama/pull/5031.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5031.patch",
"merged_at": "2024-06-13T20:14:55"
} | follow up to #5025 and #4715 which fixes multibyte runes for utf16 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5031/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6286 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6286/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6286/comments | https://api.github.com/repos/ollama/ollama/issues/6286/events | https://github.com/ollama/ollama/issues/6286 | 2,458,052,205 | I_kwDOJ0Z1Ps6SguZt | 6,286 | Context window size cannot be changed | {
"login": "mihaelagrigore",
"id": 38474985,
"node_id": "MDQ6VXNlcjM4NDc0OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38474985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihaelagrigore",
"html_url": "https://github.com/mihaelagrigore",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 21 | 2024-08-09T14:26:58 | 2024-10-17T07:52:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I see this issue has been partially reported, but none of the previous reports seem to be extensive in their tests of possible methods to set this option.
The problem:
Ollama server truncates the input to 2048 tokens regardless of the chat completion API used.
My setup:
I tried Ollama... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6286/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4134 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4134/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4134/comments | https://api.github.com/repos/ollama/ollama/issues/4134/events | https://github.com/ollama/ollama/issues/4134 | 2,278,239,364 | I_kwDOJ0Z1Ps6HyyyE | 4,134 | WithSecure quarantined ollama_llama_server.exe as harmful file / Malware | {
"login": "sjdevcode",
"id": 168860269,
"node_id": "U_kgDOChCabQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168860269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjdevcode",
"html_url": "https://github.com/sjdevcode",
"followers_url": "https://api.github.com/users/sjdevc... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-05-03T18:49:42 | 2024-05-28T21:01:51 | 2024-05-28T21:01:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After updating Ollama to version 0.1.33 WithSecure Elements identified ollama_llama_server.exe as a harmful file and put it in quarantine. It classified it as "Category: Malware and Type: Exploit".
It's about ollama_llama_server.exe in the \ollama_runners\cpu_avx folder. The executables in th... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4134/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6599 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6599/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6599/comments | https://api.github.com/repos/ollama/ollama/issues/6599/events | https://github.com/ollama/ollama/issues/6599 | 2,501,898,827 | I_kwDOJ0Z1Ps6VH_JL | 6,599 | Unable to resolve Cuda-drivers on RHEL8.9 | {
"login": "DanielPradoPino",
"id": 26769287,
"node_id": "MDQ6VXNlcjI2NzY5Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/26769287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielPradoPino",
"html_url": "https://github.com/DanielPradoPino",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-09-03T04:34:31 | 2024-09-09T11:15:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
While in RHEL8.9 Ollama Installation cannot finish due to an issue while installing cuda-drivers.
Nvidia repository is successfully installed and I can see cuda-drivers are listed there but when triggering repolist only cuda-drivers-fabricmanager are listed.
I an running crazy with this issue... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6599/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7952/comments | https://api.github.com/repos/ollama/ollama/issues/7952/events | https://github.com/ollama/ollama/issues/7952 | 2,721,015,206 | I_kwDOJ0Z1Ps6iL2Wm | 7,952 | Problems (with nvidia-smi) after upgrading to 0.4.7 (from 0.3 series) | {
"login": "stronk7",
"id": 167147,
"node_id": "MDQ6VXNlcjE2NzE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stronk7",
"html_url": "https://github.com/stronk7",
"followers_url": "https://api.github.com/users/stronk7/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-05T17:31:25 | 2025-01-16T16:11:11 | 2025-01-16T16:11:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
while testing the new 0.4.7 series, everything seems to be working ok (Mac), but I've detected a problem when running on Ubuntu 24.04, with docker.
And, more specifically, the problem is with `nvidia-smi` because, unless I'm wrong, the GPU is being used normally and not the CPU.
Wit... | {
"login": "stronk7",
"id": 167147,
"node_id": "MDQ6VXNlcjE2NzE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stronk7",
"html_url": "https://github.com/stronk7",
"followers_url": "https://api.github.com/users/stronk7/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7952/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7952/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3368 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3368/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3368/comments | https://api.github.com/repos/ollama/ollama/issues/3368/events | https://github.com/ollama/ollama/issues/3368 | 2,210,089,380 | I_kwDOJ0Z1Ps6Du0mk | 3,368 | Reranking models | {
"login": "YuanfengZhang",
"id": 71358306,
"node_id": "MDQ6VXNlcjcxMzU4MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/71358306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuanfengZhang",
"html_url": "https://github.com/YuanfengZhang",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | 34 | 2024-03-27T07:41:15 | 2025-01-23T19:37:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
Till now, ollama supports LLM and embedding models. I wonder if it could support popular reranking models later?
Such as:
1. [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)
2. [mixedbread-ai/mxbai-rerank-large-v1](https://huggingface.co/mixedbread-ai/mxbai-... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3368/reactions",
"total_count": 154,
"+1": 113,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 38,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3368/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1280 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1280/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1280/comments | https://api.github.com/repos/ollama/ollama/issues/1280/events | https://github.com/ollama/ollama/pull/1280 | 2,011,208,708 | PR_kwDOJ0Z1Ps5gYfhr | 1,280 | fix: disable ':' in tag names | {
"login": "tjbck",
"id": 25473318,
"node_id": "MDQ6VXNlcjI1NDczMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25473318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjbck",
"html_url": "https://github.com/tjbck",
"followers_url": "https://api.github.com/users/tjbck/follow... | [] | closed | false | null | [] | null | 0 | 2023-11-26T21:14:26 | 2023-11-29T18:33:45 | 2023-11-29T18:33:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1280",
"html_url": "https://github.com/ollama/ollama/pull/1280",
"diff_url": "https://github.com/ollama/ollama/pull/1280.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1280.patch",
"merged_at": "2023-11-29T18:33:45"
} | Resolves #1247 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1280/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/100 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/100/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/100/comments | https://api.github.com/repos/ollama/ollama/issues/100/events | https://github.com/ollama/ollama/pull/100 | 1,810,601,534 | PR_kwDOJ0Z1Ps5V0glw | 100 | skip files in the list if we can't get the correct model path | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-18T19:38:43 | 2023-07-18T19:39:08 | 2023-07-18T19:39:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/100",
"html_url": "https://github.com/ollama/ollama/pull/100",
"diff_url": "https://github.com/ollama/ollama/pull/100.diff",
"patch_url": "https://github.com/ollama/ollama/pull/100.patch",
"merged_at": "2023-07-18T19:39:08"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/100/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5043 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5043/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5043/comments | https://api.github.com/repos/ollama/ollama/issues/5043/events | https://github.com/ollama/ollama/pull/5043 | 2,352,815,843 | PR_kwDOJ0Z1Ps5yc9AC | 5,043 | Adds an uninstall script to the installer | {
"login": "nibrahim",
"id": 69051,
"node_id": "MDQ6VXNlcjY5MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/69051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nibrahim",
"html_url": "https://github.com/nibrahim",
"followers_url": "https://api.github.com/users/nibrahim/foll... | [] | closed | false | null | [] | null | 4 | 2024-06-14T08:17:31 | 2024-09-05T05:36:07 | 2024-09-05T05:14:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5043",
"html_url": "https://github.com/ollama/ollama/pull/5043",
"diff_url": "https://github.com/ollama/ollama/pull/5043.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5043.patch",
"merged_at": null
} | A new script called ollama_uninstall.sh gets created as part of the installation process on Linux. Running this will remove the ollama installation. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5043/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5043/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7633 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7633/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7633/comments | https://api.github.com/repos/ollama/ollama/issues/7633/events | https://github.com/ollama/ollama/pull/7633 | 2,653,024,204 | PR_kwDOJ0Z1Ps6Bq-wq | 7,633 | runner.go: Fix off-by-one for num predicted | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-12T18:43:14 | 2024-11-12T19:35:59 | 2024-11-12T19:35:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7633",
"html_url": "https://github.com/ollama/ollama/pull/7633",
"diff_url": "https://github.com/ollama/ollama/pull/7633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7633.patch",
"merged_at": "2024-11-12T19:35:57"
} | null | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7633/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6553 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6553/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6553/comments | https://api.github.com/repos/ollama/ollama/issues/6553/events | https://github.com/ollama/ollama/issues/6553 | 2,494,210,904 | I_kwDOJ0Z1Ps6UqqNY | 6,553 | Cannot set custom folder for storing models | {
"login": "anonymux1",
"id": 138056943,
"node_id": "U_kgDOCDqU7w",
"avatar_url": "https://avatars.githubusercontent.com/u/138056943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anonymux1",
"html_url": "https://github.com/anonymux1",
"followers_url": "https://api.github.com/users/anonym... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-29T11:47:49 | 2024-09-02T12:03:19 | 2024-08-29T15:11:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have run sudo systemctl edit ollama.service, and added the following lines using the text editor.
[Service]
Environment = OLLAMA_MODELS = "/home/<username>/AI/ollama_models"
i ran systemctl daemon-reload
systemctl restart ollama
Also rebooted, but models are still being stored in... | {
"login": "anonymux1",
"id": 138056943,
"node_id": "U_kgDOCDqU7w",
"avatar_url": "https://avatars.githubusercontent.com/u/138056943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anonymux1",
"html_url": "https://github.com/anonymux1",
"followers_url": "https://api.github.com/users/anonym... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6553/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1239 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1239/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1239/comments | https://api.github.com/repos/ollama/ollama/issues/1239/events | https://github.com/ollama/ollama/pull/1239 | 2,006,261,843 | PR_kwDOJ0Z1Ps5gIG3t | 1,239 | Update README.md - Community Integrations - Obsidian BMO Chatbot plugin | {
"login": "longy2k",
"id": 40724177,
"node_id": "MDQ6VXNlcjQwNzI0MTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/40724177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longy2k",
"html_url": "https://github.com/longy2k",
"followers_url": "https://api.github.com/users/longy2... | [] | closed | false | null | [] | null | 0 | 2023-11-22T12:42:36 | 2023-11-22T19:32:31 | 2023-11-22T19:32:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1239",
"html_url": "https://github.com/ollama/ollama/pull/1239",
"diff_url": "https://github.com/ollama/ollama/pull/1239.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1239.patch",
"merged_at": "2023-11-22T19:32:30"
} | The simplicity and speed of Ollama is amazing!
I would like to add Obsidian's "BMO Chatbot" plugin to the 'Community Integrations' section :) | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1239/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6141 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6141/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6141/comments | https://api.github.com/repos/ollama/ollama/issues/6141/events | https://github.com/ollama/ollama/issues/6141 | 2,444,742,320 | I_kwDOJ0Z1Ps6Rt86w | 6,141 | Ollama stopped a available="", not loading | {
"login": "rohithbojja",
"id": 119781796,
"node_id": "U_kgDOByO5pA",
"avatar_url": "https://avatars.githubusercontent.com/u/119781796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohithbojja",
"html_url": "https://github.com/rohithbojja",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-08-02T11:17:54 | 2024-08-06T10:18:47 | 2024-08-02T21:01:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
time=2024-08-02T16:41:38.633+05:30 level=INFO source=images.go:781 msg="total blobs: 9"
time=2024-08-02T16:41:38.633+05:30 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-02T16:41:38.633+05:30 level=INFO source=routes.go:1156 msg="Listening on 127.0.0.1:11434 (... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6141/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7899 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7899/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7899/comments | https://api.github.com/repos/ollama/ollama/issues/7899/events | https://github.com/ollama/ollama/pull/7899 | 2,708,116,806 | PR_kwDOJ0Z1Ps6DpDXD | 7,899 | ci: skip go build for tests | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-11-30T22:11:35 | 2024-12-05T05:22:39 | 2024-12-05T05:22:37 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7899",
"html_url": "https://github.com/ollama/ollama/pull/7899",
"diff_url": "https://github.com/ollama/ollama/pull/7899.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7899.patch",
"merged_at": "2024-12-05T05:22:37"
} | `go build` largely repeats what's already happening in `go test`, and by reducing to `go test` my hope is we can speed it up even more | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7899/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6133 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6133/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6133/comments | https://api.github.com/repos/ollama/ollama/issues/6133/events | https://github.com/ollama/ollama/pull/6133 | 2,443,736,748 | PR_kwDOJ0Z1Ps53MMJo | 6,133 | Adjust arm cuda repo paths | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-08-02T00:24:03 | 2024-08-08T19:33:38 | 2024-08-08T19:33:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6133",
"html_url": "https://github.com/ollama/ollama/pull/6133",
"diff_url": "https://github.com/ollama/ollama/pull/6133.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6133.patch",
"merged_at": "2024-08-08T19:33:35"
} | Ubuntu distros fail to install cuda drivers since aarch64 isn't valid
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/
Fixes #5797 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6133/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6724 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6724/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6724/comments | https://api.github.com/repos/ollama/ollama/issues/6724/events | https://github.com/ollama/ollama/issues/6724 | 2,515,931,561 | I_kwDOJ0Z1Ps6V9hGp | 6,724 | Tools Tag with "ollama show" command | {
"login": "LilPiep",
"id": 81217865,
"node_id": "MDQ6VXNlcjgxMjE3ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/81217865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LilPiep",
"html_url": "https://github.com/LilPiep",
"followers_url": "https://api.github.com/users/LilPie... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-09-10T09:38:19 | 2024-09-10T11:50:06 | 2024-09-10T11:50:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey there,
It would be wonderfull to have the opportunity to check if a model is Tool compatible or not after pulling its manifest. It's pretty clear on the online model library but once the model is pulled, the information is lost.
Thanks for your attention :)
 to ollama similar to airllm?
It seems like being able to use this feature for ollama to switch out layers to VRAM (preferably from memory) for processing could lead to a nice per... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7175/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7175/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1917 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1917/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1917/comments | https://api.github.com/repos/ollama/ollama/issues/1917/events | https://github.com/ollama/ollama/issues/1917 | 2,075,705,116 | I_kwDOJ0Z1Ps57uL8c | 1,917 | GPU still used when offloading zero layers | {
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder54... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 4 | 2024-01-11T04:13:06 | 2024-01-11T23:10:56 | 2024-01-11T22:56:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | To try to work around https://github.com/jmorganca/ollama/issues/1907, I decided to create a Modelfile that offloads zero layers. I noticed that it still takes up a few gigabytes of RAM on the GPU and spins up the GPU, even though I can't imagine _what_ it is doing on the GPU when no layers are running on the GPU.
`... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1917/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1028 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1028/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1028/comments | https://api.github.com/repos/ollama/ollama/issues/1028/events | https://github.com/ollama/ollama/pull/1028 | 1,980,976,174 | PR_kwDOJ0Z1Ps5eyUTz | 1,028 | WIP: Apply a patch for building with CUDA on Linux | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [] | closed | false | null | [] | null | 2 | 2023-11-07T09:59:45 | 2023-11-13T18:58:52 | 2023-11-07T23:44:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1028",
"html_url": "https://github.com/ollama/ollama/pull/1028",
"diff_url": "https://github.com/ollama/ollama/pull/1028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1028.patch",
"merged_at": null
} | Might fix #1024, maybe.
The patch is from a llama.cpp commit: https://github.com/ggerganov/llama.cpp/commit/2833a6f63c1b87c7f4ac574bcf7a15a2f3bf3ede | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1028/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/357 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/357/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/357/comments | https://api.github.com/repos/ollama/ollama/issues/357/events | https://github.com/ollama/ollama/issues/357 | 1,852,552,749 | I_kwDOJ0Z1Ps5ua7Yt | 357 | Support multi-line input in CLI | {
"login": "charlesverdad",
"id": 382186,
"node_id": "MDQ6VXNlcjM4MjE4Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/382186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charlesverdad",
"html_url": "https://github.com/charlesverdad",
"followers_url": "https://api.github.co... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 5 | 2023-08-16T05:45:21 | 2024-11-07T12:25:26 | 2023-08-17T14:17:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm trying to copy-paste a multi-line query to ollama, but it treats my newlines as an end to my question.
```
❯ ollama run llama2
>>> I have something like this:
Sure, please provide the code you have so far, and I will be happy to assist you in resolving any issues or answering any questions you may have. ever... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/357/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5952/comments | https://api.github.com/repos/ollama/ollama/issues/5952/events | https://github.com/ollama/ollama/issues/5952 | 2,430,065,277 | I_kwDOJ0Z1Ps6Q19p9 | 5,952 | find system prompt encapsulation error in mistral-nemo 12b | {
"login": "map9",
"id": 38238468,
"node_id": "MDQ6VXNlcjM4MjM4NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38238468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/map9",
"html_url": "https://github.com/map9",
"followers_url": "https://api.github.com/users/map9/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-07-25T14:05:56 | 2024-07-25T15:24:43 | 2024-07-25T15:00:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I used autogen + Ollama + mistral-nemo 12b model,
I find Ollama missed system message or lost question message.
maybe mistral-nemo 12b model template defined error.
case 1:
-----------------------------------
extractor_system_message = "...extractor_system_message..."
extractor = Assista... | {
"login": "map9",
"id": 38238468,
"node_id": "MDQ6VXNlcjM4MjM4NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38238468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/map9",
"html_url": "https://github.com/map9",
"followers_url": "https://api.github.com/users/map9/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5952/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5728 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5728/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5728/comments | https://api.github.com/repos/ollama/ollama/issues/5728/events | https://github.com/ollama/ollama/issues/5728 | 2,411,981,789 | I_kwDOJ0Z1Ps6Pw-vd | 5,728 | Prompt Tokens for Image Chat | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-16T20:23:10 | 2024-08-13T17:49:12 | 2024-08-13T17:49:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="793" alt="Screenshot 2024-07-16 at 1 22 43 PM" src="https://github.com/user-attachments/assets/4d743995-26b6-463d-8848-38cc9623dfe3">
Image Chat returns 1 for prompt tokens
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5728/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6305 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6305/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6305/comments | https://api.github.com/repos/ollama/ollama/issues/6305/events | https://github.com/ollama/ollama/pull/6305 | 2,459,369,125 | PR_kwDOJ0Z1Ps54BcYY | 6,305 | add integration obook-summary | {
"login": "cognitivetech",
"id": 55156785,
"node_id": "MDQ6VXNlcjU1MTU2Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cognitivetech",
"html_url": "https://github.com/cognitivetech",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2024-08-11T01:38:11 | 2024-08-11T01:43:09 | 2024-08-11T01:43:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6305",
"html_url": "https://github.com/ollama/ollama/pull/6305",
"diff_url": "https://github.com/ollama/ollama/pull/6305.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6305.patch",
"merged_at": "2024-08-11T01:43:09"
} | an app which automatically splits e-books by section and chunks those sections one at a time. saved to csv, then you can also ask the same question of the entire book, one chunk at a time, in addition to the bulleted notes core-functionality. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6305/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/803 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/803/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/803/comments | https://api.github.com/repos/ollama/ollama/issues/803/events | https://github.com/ollama/ollama/issues/803 | 1,945,048,182 | I_kwDOJ0Z1Ps5z7xR2 | 803 | Feature request: pull multiple models with ollama pull | {
"login": "rickknowles-cognitant",
"id": 37247203,
"node_id": "MDQ6VXNlcjM3MjQ3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/37247203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rickknowles-cognitant",
"html_url": "https://github.com/rickknowles-cognitant",
"followers_... | [] | closed | false | null | [] | null | 3 | 2023-10-16T11:59:55 | 2024-09-16T13:09:21 | 2023-10-25T19:46:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would it be possible to request a feature allowing you to do the following on the command line:
```ollama pull mistral falcon orca-mini```
instead of having to do:
```
ollama pull mistral
ollama pull falcon
ollama pull orca-mini
```
Not a huge deal but it feels fairly natural to do this sort of approach... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/803/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/803/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/28 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/28/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/28/comments | https://api.github.com/repos/ollama/ollama/issues/28/events | https://github.com/ollama/ollama/issues/28 | 1,783,019,757 | I_kwDOJ0Z1Ps5qRrjt | 28 | autocomplete for `llama run` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2023-06-30T18:56:07 | 2023-07-01T21:51:54 | 2023-07-01T21:51:54 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Example: `ollama run or<tab>` should show orca, etc. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/28/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/28/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2629 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2629/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2629/comments | https://api.github.com/repos/ollama/ollama/issues/2629/events | https://github.com/ollama/ollama/pull/2629 | 2,146,346,567 | PR_kwDOJ0Z1Ps5ngEqZ | 2,629 | Configure `OLLAMA_ORIGINS` via settings.json | {
"login": "lovincyrus",
"id": 1021101,
"node_id": "MDQ6VXNlcjEwMjExMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lovincyrus",
"html_url": "https://github.com/lovincyrus",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-02-21T10:12:44 | 2024-08-05T20:00:35 | 2024-08-05T20:00:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2629",
"html_url": "https://github.com/ollama/ollama/pull/2629",
"diff_url": "https://github.com/ollama/ollama/pull/2629.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2629.patch",
"merged_at": null
} | Took a stab at these issues https://github.com/ollama/ollama/issues/2335, https://github.com/ollama/ollama/issues/2369
Added settings menu item in the Electron tray application. Also, hoisted the `OLLAMA_ORIGINS` environment variable to the settings.json file, ensuring routes.go retrieves origins from the file rathe... | {
"login": "lovincyrus",
"id": 1021101,
"node_id": "MDQ6VXNlcjEwMjExMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lovincyrus",
"html_url": "https://github.com/lovincyrus",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2629/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5247 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5247/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5247/comments | https://api.github.com/repos/ollama/ollama/issues/5247/events | https://github.com/ollama/ollama/issues/5247 | 2,369,087,234 | I_kwDOJ0Z1Ps6NNWcC | 5,247 | Recoll index RAG | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-06-24T02:56:33 | 2024-06-30T14:34:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would it be possible in any way to use the text database index created by the software Recoll with ollama?
Recoll indexes an extremely wide variety of text documents into a database that is then searchable via the software, making a veritable search engine out of your documents. It is one of my favourite softwares, a... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5247/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5247/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6500 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6500/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6500/comments | https://api.github.com/repos/ollama/ollama/issues/6500/events | https://github.com/ollama/ollama/issues/6500 | 2,485,184,552 | I_kwDOJ0Z1Ps6UIOgo | 6,500 | ibm-granite/granite-20b-functioncalling | {
"login": "andsty",
"id": 138453484,
"node_id": "U_kgDOCECh7A",
"avatar_url": "https://avatars.githubusercontent.com/u/138453484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andsty",
"html_url": "https://github.com/andsty",
"followers_url": "https://api.github.com/users/andsty/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-08-25T11:09:21 | 2024-10-24T03:37:53 | 2024-10-24T03:37:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can someone please add ibm-granite/granite-20b-functioncalling in ollama library? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6500/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/851 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/851/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/851/comments | https://api.github.com/repos/ollama/ollama/issues/851/events | https://github.com/ollama/ollama/issues/851 | 1,954,458,776 | I_kwDOJ0Z1Ps50fqyY | 851 | macOS: Installing CLI from DMG should NOT require administrator privileges | {
"login": "coolaj86",
"id": 122831,
"node_id": "MDQ6VXNlcjEyMjgzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/122831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolaj86",
"html_url": "https://github.com/coolaj86",
"followers_url": "https://api.github.com/users/coolaj8... | [] | closed | false | null | [] | null | 5 | 2023-10-20T14:54:48 | 2024-06-25T06:04:52 | 2023-10-25T19:07:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As a matter of security, would you adjust the Mac installer to install to the standard user location of `~/.local/bin/` and not require administrator privileges?
I'm not that familiar with DMG installers, but I can provide shell script examples (or write whatever is needed in full) for ensuring that the executable i... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/851/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/851/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7812 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7812/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7812/comments | https://api.github.com/repos/ollama/ollama/issues/7812/events | https://github.com/ollama/ollama/issues/7812 | 2,687,186,734 | I_kwDOJ0Z1Ps6gKzcu | 7,812 | fetching a list of available models for download? | {
"login": "itsPreto",
"id": 45348368,
"node_id": "MDQ6VXNlcjQ1MzQ4MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/45348368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itsPreto",
"html_url": "https://github.com/itsPreto",
"followers_url": "https://api.github.com/users/its... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-24T06:19:26 | 2024-11-24T21:09:00 | 2024-11-24T21:09:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | is there any way to fetch a list of models from the ollama registry or something?
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7812/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7812/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/184 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/184/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/184/comments | https://api.github.com/repos/ollama/ollama/issues/184/events | https://github.com/ollama/ollama/issues/184 | 1,817,201,183 | I_kwDOJ0Z1Ps5sUEof | 184 | Dictionary of common errors | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 1 | 2023-07-23T16:55:06 | 2023-09-07T11:18:48 | 2023-09-07T11:18:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ideally our errors should make sense. But sometimes that’s hard to figure out. Perhaps also have a dictionary or glossary of common errors and how to solve. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/184/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6381 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6381/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6381/comments | https://api.github.com/repos/ollama/ollama/issues/6381/events | https://github.com/ollama/ollama/pull/6381 | 2,469,052,749 | PR_kwDOJ0Z1Ps54g6nC | 6,381 | fix: Add tooltip to system tray icon | {
"login": "eust-w",
"id": 39115651,
"node_id": "MDQ6VXNlcjM5MTE1NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/39115651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eust-w",
"html_url": "https://github.com/eust-w",
"followers_url": "https://api.github.com/users/eust-w/fo... | [] | closed | false | null | [] | null | 1 | 2024-08-15T22:00:34 | 2024-08-15T22:31:15 | 2024-08-15T22:31:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6381",
"html_url": "https://github.com/ollama/ollama/pull/6381",
"diff_url": "https://github.com/ollama/ollama/pull/6381.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6381.patch",
"merged_at": "2024-08-15T22:31:15"
} | - Updated setIcon method to include tooltip text for the system tray icon.
- Added NIF_TIP flag and set the tooltip text using UTF16 encoding.
Resolves: #6372 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6381/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8502 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8502/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8502/comments | https://api.github.com/repos/ollama/ollama/issues/8502/events | https://github.com/ollama/ollama/issues/8502 | 2,799,404,429 | I_kwDOJ0Z1Ps6m24WN | 8,502 | Requesting support for DeepSeek-R1-Distill series models | {
"login": "CberYellowstone",
"id": 37031767,
"node_id": "MDQ6VXNlcjM3MDMxNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/37031767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CberYellowstone",
"html_url": "https://github.com/CberYellowstone",
"followers_url": "https://api... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 7 | 2025-01-20T14:23:40 | 2025-01-24T09:29:26 | 2025-01-24T09:29:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
source: https://github.com/deepseek-ai/DeepSeek-R1#deepseek-r1-distill-models | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8502/reactions",
"total_count": 41,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 6,
"rocket": 6,
"eyes": 7
} | https://api.github.com/repos/ollama/ollama/issues/8502/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3556 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3556/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3556/comments | https://api.github.com/repos/ollama/ollama/issues/3556/events | https://github.com/ollama/ollama/issues/3556 | 2,233,381,743 | I_kwDOJ0Z1Ps6FHrNv | 3,556 | CodeGemma by Google | {
"login": "smortezah",
"id": 19313488,
"node_id": "MDQ6VXNlcjE5MzEzNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19313488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smortezah",
"html_url": "https://github.com/smortezah",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2024-04-09T12:56:18 | 2024-04-10T18:21:21 | 2024-04-09T13:44:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
CodeGemma by Google has just been released:
https://huggingface.co/collections/google/codegemma-release-66152ac7b683e2667abdee11 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3556/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3161 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3161/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3161/comments | https://api.github.com/repos/ollama/ollama/issues/3161/events | https://github.com/ollama/ollama/pull/3161 | 2,187,778,939 | PR_kwDOJ0Z1Ps5ptPix | 3,161 | llm,readline: use errors.Is instead of simple == check | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-03-15T05:54:24 | 2024-03-15T14:14:13 | 2024-03-15T14:14:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3161",
"html_url": "https://github.com/ollama/ollama/pull/3161",
"diff_url": "https://github.com/ollama/ollama/pull/3161.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3161.patch",
"merged_at": "2024-03-15T14:14:12"
} | This fixes some brittle, simple equality checks to use errors.Is. Since go1.13, errors.Is is the idiomatic way to check for errors. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3161/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6562 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6562/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6562/comments | https://api.github.com/repos/ollama/ollama/issues/6562/events | https://github.com/ollama/ollama/pull/6562 | 2,495,539,586 | PR_kwDOJ0Z1Ps55422X | 6,562 | remove any unneeded build artifacts | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-08-29T20:41:43 | 2024-08-30T16:40:52 | 2024-08-30T16:40:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6562",
"html_url": "https://github.com/ollama/ollama/pull/6562",
"diff_url": "https://github.com/ollama/ollama/pull/6562.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6562.patch",
"merged_at": "2024-08-30T16:40:50"
} | metal lib is embedded so the file isn't necessary. this shaves off roughly 50KB | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6562/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3169 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3169/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3169/comments | https://api.github.com/repos/ollama/ollama/issues/3169/events | https://github.com/ollama/ollama/pull/3169 | 2,189,022,296 | PR_kwDOJ0Z1Ps5pxkU1 | 3,169 | feat: timeout between token generation | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2024-03-15T16:22:28 | 2024-05-09T18:19:02 | 2024-05-09T18:19:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3169",
"html_url": "https://github.com/ollama/ollama/pull/3169",
"diff_url": "https://github.com/ollama/ollama/pull/3169.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3169.patch",
"merged_at": null
} | - if 30 seconds pass since the last token generation abort the request
- stop the llama thread to flush any accumulated context
This is an attempt to mitigate server hangs as seen in #2805. It is not a complete solution since we still need to address the root cause of the hangs, but it will make them recoverable.
... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3169/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2685 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2685/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2685/comments | https://api.github.com/repos/ollama/ollama/issues/2685/events | https://github.com/ollama/ollama/issues/2685 | 2,149,436,210 | I_kwDOJ0Z1Ps6AHcsy | 2,685 | v0.1.26 and v0.1.25 do not use AMD GPU on Linux | {
"login": "TimTheBig",
"id": 132001783,
"node_id": "U_kgDOB94v9w",
"avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimTheBig",
"html_url": "https://github.com/TimTheBig",
"followers_url": "https://api.github.com/users/TimThe... | [] | closed | false | null | [] | null | 8 | 2024-02-22T16:17:45 | 2024-02-23T17:09:21 | 2024-02-23T17:09:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | v0.1.26 and v0.1.25 do not use GPU(7900xtx) on [Nobara Linux 39](https://nobaraproject.org) when I use the install script. https://github.com/ollama/ollama/issues/2502#issuecomment-1949514130 | {
"login": "TimTheBig",
"id": 132001783,
"node_id": "U_kgDOB94v9w",
"avatar_url": "https://avatars.githubusercontent.com/u/132001783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimTheBig",
"html_url": "https://github.com/TimTheBig",
"followers_url": "https://api.github.com/users/TimThe... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2685/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8116 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8116/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8116/comments | https://api.github.com/repos/ollama/ollama/issues/8116/events | https://github.com/ollama/ollama/issues/8116 | 2,742,123,379 | I_kwDOJ0Z1Ps6jcXtz | 8,116 | doc to use go example and apis | {
"login": "malv-c",
"id": 19170213,
"node_id": "MDQ6VXNlcjE5MTcwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/19170213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malv-c",
"html_url": "https://github.com/malv-c",
"followers_url": "https://api.github.com/users/malv-c/fo... | [] | closed | false | null | [] | null | 5 | 2024-12-16T11:30:11 | 2024-12-23T08:13:00 | 2024-12-23T08:13:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | to use this i don't see how any doc
ie : https://github.com/ollama/ollama/blob/main/examples/go-http-generate/main.go
import (
"bytes"
"fmt"
"io"
"log"
"net/http"
"os"
)
where is the doc ?
was it used for llama3.2 before as it now refuse to open address ?
with "os" can i use linux software ?
if no... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8116/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/871 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/871/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/871/comments | https://api.github.com/repos/ollama/ollama/issues/871/events | https://github.com/ollama/ollama/pull/871 | 1,955,599,796 | PR_kwDOJ0Z1Ps5dc1Bf | 871 | fix: Add support for legacy CPU (no AVX2/FMA) on Linux | {
"login": "reynaldichernando",
"id": 12949382,
"node_id": "MDQ6VXNlcjEyOTQ5Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12949382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reynaldichernando",
"html_url": "https://github.com/reynaldichernando",
"followers_url": "https... | [] | closed | false | null | [] | null | 4 | 2023-10-21T17:47:40 | 2023-10-27T19:31:20 | 2023-10-27T19:31:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/871",
"html_url": "https://github.com/ollama/ollama/pull/871",
"diff_url": "https://github.com/ollama/ollama/pull/871.diff",
"patch_url": "https://github.com/ollama/ollama/pull/871.patch",
"merged_at": null
} | Fixes the illegal instruction error when running with CPU without AVX2 or FMA, by building another set of ollama runner with `-DLLAMA_AVX2=off -DLLAMA_FMA=off`.
By default, upon running the cmake for ggml/gguf, it will have these arguments set to ON. Setting it to OFF, allows older CPU that don't have these instruct... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/871/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1587 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1587/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1587/comments | https://api.github.com/repos/ollama/ollama/issues/1587/events | https://github.com/ollama/ollama/issues/1587 | 2,047,499,153 | I_kwDOJ0Z1Ps56CluR | 1,587 | Missing "ollama avail" command to show available models | {
"login": "dennisorlando",
"id": 47061464,
"node_id": "MDQ6VXNlcjQ3MDYxNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47061464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennisorlando",
"html_url": "https://github.com/dennisorlando",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 4 | 2023-12-18T21:37:21 | 2024-01-10T15:52:32 | 2023-12-19T18:54:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Self descriptory; I have to go to this github page to look at what models are available, which appear to not be all of them | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1587/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2254 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2254/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2254/comments | https://api.github.com/repos/ollama/ollama/issues/2254/events | https://github.com/ollama/ollama/issues/2254 | 2,105,502,080 | I_kwDOJ0Z1Ps59f2mA | 2,254 | No response from ollama | {
"login": "caibirdme",
"id": 8054803,
"node_id": "MDQ6VXNlcjgwNTQ4MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8054803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caibirdme",
"html_url": "https://github.com/caibirdme",
"followers_url": "https://api.github.com/users/ca... | [] | closed | false | null | [] | null | 7 | 2024-01-29T13:27:18 | 2024-09-22T19:57:31 | 2024-02-20T04:09:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | No response from ollama
```
curl -X POST -d '{"model":"llama2", "messages":[{"role":"user","content":"why the weather in winter is so cold?"}], "stream":false}' 127.0.0.1:11434/api/chat
```
Here's the `ollama list`
```
llama2:latest 78e26419b446 3.8 GB 4 hours ago
llava:latest cd3274b81a85 4.5 G... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2254/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3977 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3977/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3977/comments | https://api.github.com/repos/ollama/ollama/issues/3977/events | https://github.com/ollama/ollama/issues/3977 | 2,266,955,135 | I_kwDOJ0Z1Ps6HHv1_ | 3,977 | api/create inserts escape quotes \" for the last PARAMETER stop. | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-27T10:31:28 | 2024-05-03T14:51:20 | 2024-05-03T00:04:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
If you run the following python code to copy llama3 to test, it creates a modelfile with escape qotes for the last PARAMETER stop.
If I use `ollama create test -f test.modelfile`, it works fine.
First I thought [ollama/ollama-python](https://github.com/ollama/ollama-python/issues/136)... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3977/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4401 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4401/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4401/comments | https://api.github.com/repos/ollama/ollama/issues/4401/events | https://github.com/ollama/ollama/pull/4401 | 2,292,610,596 | PR_kwDOJ0Z1Ps5vP2UJ | 4,401 | update llama.cpp submodule to support jina embeddings v2 | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/fo... | [] | closed | false | null | [] | null | 2 | 2024-05-13T11:56:25 | 2024-05-14T06:41:40 | 2024-05-14T06:41:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4401",
"html_url": "https://github.com/ollama/ollama/pull/4401",
"diff_url": "https://github.com/ollama/ollama/pull/4401.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4401.patch",
"merged_at": null
} | Update the `lama.cpp` submodule so that `ollama` can run `Jina Embeddings V2` after it has been added to `llama.cpp` | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4401/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7509 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7509/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7509/comments | https://api.github.com/repos/ollama/ollama/issues/7509/events | https://github.com/ollama/ollama/issues/7509 | 2,635,375,713 | I_kwDOJ0Z1Ps6dFKRh | 7,509 | Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs | {
"login": "Romultra",
"id": 65618486,
"node_id": "MDQ6VXNlcjY1NjE4NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65618486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Romultra",
"html_url": "https://github.com/Romultra",
"followers_url": "https://api.github.com/users/Rom... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 12 | 2024-11-05T12:54:20 | 2025-01-12T01:11:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Description:**
I encountered an issue where the **LLaMA 3.2 Vision 11b** model loads entirely in CPU RAM, without utilizing the GPU memory as expected. The issue occurs on my Windows-based laptop with 6GB VRAM, where models that exceed GPU memory capacity should load the rest into system RAM ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7509/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7509/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7997 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7997/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7997/comments | https://api.github.com/repos/ollama/ollama/issues/7997/events | https://github.com/ollama/ollama/issues/7997 | 2,725,326,484 | I_kwDOJ0Z1Ps6icS6U | 7,997 | Support loading models from multiple locations | {
"login": "i0ntempest",
"id": 16017904,
"node_id": "MDQ6VXNlcjE2MDE3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16017904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i0ntempest",
"html_url": "https://github.com/i0ntempest",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-12-08T15:14:31 | 2024-12-20T21:02:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Model files adds up real fast, and my internal disk is near full after pulling a few 72b models. It would be great if `OLLAMA_MODELS` can be a colon separated string with multiple paths. Then the `pull` and `run` command should automatically decide which folder to put new models to and allow overwritting with a switch. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7997/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7997/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6741 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6741/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6741/comments | https://api.github.com/repos/ollama/ollama/issues/6741/events | https://github.com/ollama/ollama/issues/6741 | 2,518,383,052 | I_kwDOJ0Z1Ps6WG3nM | 6,741 | Llama 3.1 70b 128k context not fitting 96Gb | {
"login": "dmatora",
"id": 647062,
"node_id": "MDQ6VXNlcjY0NzA2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/647062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmatora",
"html_url": "https://github.com/dmatora",
"followers_url": "https://api.github.com/users/dmatora/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | open | false | null | [] | null | 2 | 2024-09-11T03:38:29 | 2024-09-11T17:32:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Not only it doesn't fit 96Gb (offloading only 10 layers out of 81), but processing actual ~128k request crashes with `CUDA error: out of memory` on 160Gb (will all layers offloaded)
As mentioned here https://github.com/ollama/ollama/issues/6279#issuecomment-2342546437_
this is obviously a ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6741/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6741/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8518 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8518/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8518/comments | https://api.github.com/repos/ollama/ollama/issues/8518/events | https://github.com/ollama/ollama/issues/8518 | 2,801,928,484 | I_kwDOJ0Z1Ps6nAgkk | 8,518 | cannot find ROCM files/tools in docker image | {
"login": "nicoKoehler",
"id": 53008522,
"node_id": "MDQ6VXNlcjUzMDA4NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/53008522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicoKoehler",
"html_url": "https://github.com/nicoKoehler",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 0 | 2025-01-21T13:55:49 | 2025-01-21T13:55:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was trying to figure out which ROCM version the image was using (trying to get faster-whisper to run in docker too, so I thought this could be a lead) and could not find anything. Where would they be? Why arent there any of the typical files and folders?
System: Docker on Debian 12.5, 16GB, AMD RX 550 4GB
Docker im... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8518/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5826 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5826/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5826/comments | https://api.github.com/repos/ollama/ollama/issues/5826/events | https://github.com/ollama/ollama/issues/5826 | 2,421,274,073 | I_kwDOJ0Z1Ps6QUbXZ | 5,826 | Azurefile (NFS) causes very slow model loads - Mixtral 22B isn't loaded on an A100 (80GB VRAM) | {
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 10 | 2024-07-21T07:44:23 | 2024-07-23T07:26:39 | 2024-07-23T07:26:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Trying to load Mixtral 8x22B model using an A100 GPU as a deployment in Kubernetes, but it isn't loaded after 6 minutes.
Mistral 7B model is loaded fine.
Here is the debug log:
`time=2024-07-21T07:39:07.407Z level=DEBUG source=gpu.go:358 msg="updating system memory data" before.total=... | {
"login": "juangon",
"id": 1306127,
"node_id": "MDQ6VXNlcjEzMDYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juangon",
"html_url": "https://github.com/juangon",
"followers_url": "https://api.github.com/users/juangon/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5826/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1382 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1382/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1382/comments | https://api.github.com/repos/ollama/ollama/issues/1382/events | https://github.com/ollama/ollama/issues/1382 | 2,024,910,714 | I_kwDOJ0Z1Ps54sa96 | 1,382 | litellm leaves defunct processes behind | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [] | closed | false | null | [] | null | 4 | 2023-12-04T22:42:10 | 2023-12-08T23:21:41 | 2023-12-06T20:01:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm not sure who's at fault here.
https://github.com/BerriAI/litellm/issues/992
litellm -m ollama/alfred
litellm -m ollama/mistral
run an autogen application that uses these guys
The autogen get's stuck, so you must ctrl-c out.
The ollama models you started are now defunct
If on a linux system you do
ps au... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1382/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5523 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5523/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5523/comments | https://api.github.com/repos/ollama/ollama/issues/5523/events | https://github.com/ollama/ollama/pull/5523 | 2,393,814,997 | PR_kwDOJ0Z1Ps50mdwU | 5,523 | sched: don't error if paging to disk on Windows and macOS | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-07T01:04:14 | 2024-07-08T16:49:49 | 2024-07-07T02:01:53 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5523",
"html_url": "https://github.com/ollama/ollama/pull/5523",
"diff_url": "https://github.com/ollama/ollama/pull/5523.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5523.patch",
"merged_at": "2024-07-07T02:01:53"
} | macOS and Windows don't error when paging to disk, so loosen this check for now to not return an error to users that could still run the model (albeit a little slowly). It also stops us from double counting memory on Apple Silicon Macs.
In the future, we should still select an upper limit on memory for macOS and Win... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5523/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8617 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8617/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8617/comments | https://api.github.com/repos/ollama/ollama/issues/8617/events | https://github.com/ollama/ollama/issues/8617 | 2,814,009,755 | I_kwDOJ0Z1Ps6numGb | 8,617 | Support Request for jonatasgrosman/wav2vec2-large-xlsr-53-italian | {
"login": "raphael10-collab",
"id": 70313067,
"node_id": "MDQ6VXNlcjcwMzEzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/70313067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raphael10-collab",
"html_url": "https://github.com/raphael10-collab",
"followers_url": "https://... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2025-01-27T20:37:55 | 2025-01-27T20:44:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | (.venv) raphy@raohy:~/llama.cpp$ git clone https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian
Cloning into 'wav2vec2-large-xlsr-53-italian'...
remote: Enumerating objects: 99, done.
remote: Total 99 (delta 0), reused 0 (delta 0), pack-reused 99 (from 1)
Unpacking objects: 100% (99/... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8617/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6106 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6106/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6106/comments | https://api.github.com/repos/ollama/ollama/issues/6106/events | https://github.com/ollama/ollama/pull/6106 | 2,441,007,324 | PR_kwDOJ0Z1Ps53CzDK | 6,106 | patches: phi3 optional sliding window attention | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-31T21:48:27 | 2024-07-31T23:47:39 | 2024-07-31T23:12:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6106",
"html_url": "https://github.com/ollama/ollama/pull/6106",
"diff_url": "https://github.com/ollama/ollama/pull/6106.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6106.patch",
"merged_at": "2024-07-31T23:12:06"
} | this change allows models that do not set `phi3.attention.sliding_window` to revert to the previous behaviour instead of segfaulting
resolves #5956 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6106/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6106/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1972 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1972/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1972/comments | https://api.github.com/repos/ollama/ollama/issues/1972/events | https://github.com/ollama/ollama/pull/1972 | 2,080,175,593 | PR_kwDOJ0Z1Ps5j_xQE | 1,972 | use g++ to build `libext_server.so` on linux | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-13T08:12:36 | 2024-01-13T15:55:10 | 2024-01-13T08:12:43 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1972",
"html_url": "https://github.com/ollama/ollama/pull/1972",
"diff_url": "https://github.com/ollama/ollama/pull/1972.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1972.patch",
"merged_at": "2024-01-13T08:12:42"
} | Fixes the build error:
```
Error: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama3730278603/cpu/libext_server.so: undefined symbol: _ZTVN10cxxabiv117class
```
cc @dhiltgen | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1972/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5239 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5239/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5239/comments | https://api.github.com/repos/ollama/ollama/issues/5239/events | https://github.com/ollama/ollama/issues/5239 | 2,368,601,525 | I_kwDOJ0Z1Ps6NLf21 | 5,239 | Mutli-GPU asymmetric VRAM with smaller first causes ordering bug and incorrect tensor split - cudaMalloc failed: out of memory | {
"login": "chrisoutwright",
"id": 27736055,
"node_id": "MDQ6VXNlcjI3NzM2MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/27736055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisoutwright",
"html_url": "https://github.com/chrisoutwright",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-06-23T14:55:56 | 2024-11-05T23:16:40 | 2024-11-05T23:16:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After going to 0.1.45 from 0.1.43 version I get out of memory, I did try as well
Set-ItemProperty -Path 'HKCU:\Environment' -Name 'OLLAMA_SCHED_SPREAD' -Value 1
and
Set-ItemProperty -Path 'HKCU:\Environment' -Name 'CUDA_VISIBLE_DEVICES' -Value "0,1"
But still it is happening.
```
l... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5239/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/121 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/121/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/121/comments | https://api.github.com/repos/ollama/ollama/issues/121/events | https://github.com/ollama/ollama/issues/121 | 1,811,380,099 | I_kwDOJ0Z1Ps5r93eD | 121 | Performance question? | {
"login": "kosecki123",
"id": 5417665,
"node_id": "MDQ6VXNlcjU0MTc2NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5417665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kosecki123",
"html_url": "https://github.com/kosecki123",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 4 | 2023-07-19T07:56:17 | 2023-08-23T17:43:32 | 2023-08-23T17:43:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is just request for info rather than a bug.
What's kind of performance / latency on prompts we should expect running on M2 Pro ? Seems like takes up to 10s to generate the answers using `llama2` model. Is that something that can improve in the future?
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/121/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6148 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6148/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6148/comments | https://api.github.com/repos/ollama/ollama/issues/6148/events | https://github.com/ollama/ollama/issues/6148 | 2,446,237,133 | I_kwDOJ0Z1Ps6Rzp3N | 6,148 | Model unloaded each request if OLLAMA_NUM_PARALLEL > 1 | {
"login": "abes200",
"id": 177388421,
"node_id": "U_kgDOCpK7hQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177388421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abes200",
"html_url": "https://github.com/abes200",
"followers_url": "https://api.github.com/users/abes200/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 29 | 2024-08-03T08:19:14 | 2025-01-03T05:08:47 | 2024-08-19T18:07:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I did see an issue this was mentioned in it but it was closed and said as fixed in version 0.2.1
I wasn't having this issue when I was using 0.3.0. I missed a few updates, but updated to the most recent and now if I have OLLAMA_NUM_PARALLEL in my system variables or use it as an option in pytho... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6148/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4399 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4399/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4399/comments | https://api.github.com/repos/ollama/ollama/issues/4399/events | https://github.com/ollama/ollama/pull/4399 | 2,292,514,059 | PR_kwDOJ0Z1Ps5vPgvY | 4,399 | fix embedding by adding fixes from llama.cpp upstream | {
"login": "deadbeef84",
"id": 961178,
"node_id": "MDQ6VXNlcjk2MTE3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/961178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadbeef84",
"html_url": "https://github.com/deadbeef84",
"followers_url": "https://api.github.com/users/d... | [] | closed | false | null | [] | null | 11 | 2024-05-13T11:13:07 | 2024-06-09T01:14:27 | 2024-06-09T01:14:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4399",
"html_url": "https://github.com/ollama/ollama/pull/4399",
"diff_url": "https://github.com/ollama/ollama/pull/4399.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4399.patch",
"merged_at": null
} | Embedding appears broken since v0.1.32
See #3777 #4207 for details.
This PR applies fixes based on https://github.com/ggerganov/llama.cpp/commit/1b67731e184e27a465b8c5476061294a4af668ea#diff-87355a1a297a9f0fdc86af5e2a59cae153290f58d68822cd10c30fee4f7f7076.
I've tested it and embedding vectors looks correct aft... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4399/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4399/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8240/comments | https://api.github.com/repos/ollama/ollama/issues/8240/events | https://github.com/ollama/ollama/issues/8240 | 2,758,744,333 | I_kwDOJ0Z1Ps6kbxkN | 8,240 | Realtime API | {
"login": "GitOguz",
"id": 23114578,
"node_id": "MDQ6VXNlcjIzMTE0NTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/23114578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GitOguz",
"html_url": "https://github.com/GitOguz",
"followers_url": "https://api.github.com/users/GitOgu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-25T11:24:08 | 2024-12-29T18:37:45 | 2024-12-29T18:37:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please add realtime API capabilities. Websocket/WebRTC. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8240/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2256 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2256/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2256/comments | https://api.github.com/repos/ollama/ollama/issues/2256/events | https://github.com/ollama/ollama/pull/2256 | 2,105,955,023 | PR_kwDOJ0Z1Ps5lWb8C | 2,256 | Add container hints for troubleshooting | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-29T16:53:54 | 2024-01-30T16:12:52 | 2024-01-30T16:12:48 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2256",
"html_url": "https://github.com/ollama/ollama/pull/2256",
"diff_url": "https://github.com/ollama/ollama/pull/2256.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2256.patch",
"merged_at": "2024-01-30T16:12:48"
} | Some users are new to containers and unsure where the server logs go | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2256/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4521 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4521/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4521/comments | https://api.github.com/repos/ollama/ollama/issues/4521/events | https://github.com/ollama/ollama/pull/4521 | 2,304,687,654 | PR_kwDOJ0Z1Ps5v5B6N | 4,521 | implement tunable registry defaults for registry and update mirrors | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 0 | 2024-05-19T16:32:32 | 2024-08-09T20:07:31 | 2024-08-09T20:07:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4521",
"html_url": "https://github.com/ollama/ollama/pull/4521",
"diff_url": "https://github.com/ollama/ollama/pull/4521.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4521.patch",
"merged_at": null
} | # What is the problem this change solves?
In large environments with many cloud instances are running `ollama serve`, accidentally pushing code to run `ollama pull llama3` can result in 100's of cloud instances are trying to download from `ollama.ai`.
The correct change for production should have been `ollama pull... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4521/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4521/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2736 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2736/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2736/comments | https://api.github.com/repos/ollama/ollama/issues/2736/events | https://github.com/ollama/ollama/issues/2736 | 2,152,498,068 | I_kwDOJ0Z1Ps6ATIOU | 2,736 | Windows version "/api/generate" 404 not found | {
"login": "t41372",
"id": 36402030,
"node_id": "MDQ6VXNlcjM2NDAyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/36402030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t41372",
"html_url": "https://github.com/t41372",
"followers_url": "https://api.github.com/users/t41372/fo... | [] | closed | false | null | [] | null | 33 | 2024-02-24T21:59:52 | 2025-01-08T12:47:53 | 2024-03-12T04:34:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1310" alt="截圖 2024-02-24 下午2 48 29" src="https://github.com/ollama/ollama/assets/36402030/8d1aac17-75f5-4a5c-8f27-a6569db7256c">
<img width="431" alt="截圖 2024-02-24 下午2 54 17" src="https://github.com/ollama/ollama/assets/36402030/99030d6f-9393-4eb5-b617-e04c369fdefe">
The "/api/generate" is not function... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2736/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2736/timeline | null | completed | false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.