url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2285 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2285/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2285/comments | https://api.github.com/repos/ollama/ollama/issues/2285/events | https://github.com/ollama/ollama/issues/2285 | 2,109,201,868 | I_kwDOJ0Z1Ps59t93M | 2,285 | EOF Error When Running A Model | {
"login": "meminens",
"id": 42714627,
"node_id": "MDQ6VXNlcjQyNzE0NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/42714627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meminens",
"html_url": "https://github.com/meminens",
"followers_url": "https://api.github.com/users/mem... | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 10 | 2024-01-31T03:18:09 | 2024-03-06T21:45:34 | 2024-01-31T18:47:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Running the command `ollama run mistral` results in the error `Error: Post "http://127.0.0.1:11434/api/chat": EOF`
Output of `journalctl -u ollama`:
```
Jan 30 22:13:35 arch ollama[14727]: 2024/01/30 22:13:35 cpu_common.go:11: INFO CPU has AVX2
Jan 30 22:13:35 arch ollama[14727]: 2024/01/30 22:13:35 dyn_ext_serve... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2285/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2285/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2497 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2497/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2497/comments | https://api.github.com/repos/ollama/ollama/issues/2497/events | https://github.com/ollama/ollama/issues/2497 | 2,134,930,064 | I_kwDOJ0Z1Ps5_QHKQ | 2,497 | [Linux] Ran out of space while installing llama2 model, can't delete or find | {
"login": "saamerm",
"id": 8262287,
"node_id": "MDQ6VXNlcjgyNjIyODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8262287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saamerm",
"html_url": "https://github.com/saamerm",
"followers_url": "https://api.github.com/users/saamerm/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2024-02-14T18:28:19 | 2024-05-09T00:44:52 | 2024-05-09T00:44:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed ollama on my Linux EC2 machine with 8GB of hard disk space and 4GB of free disk space.
I ran `ollama run llama2` by mistake before checking the space; the download started too quickly for me to react, and I ran out of space with the error "no space left on device"
Now can't delete or find the mode... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2497/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/879 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/879/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/879/comments | https://api.github.com/repos/ollama/ollama/issues/879/events | https://github.com/ollama/ollama/issues/879 | 1,957,430,737 | I_kwDOJ0Z1Ps50rAXR | 879 | Support image inputs | {
"login": "tmc",
"id": 3977,
"node_id": "MDQ6VXNlcjM5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmc",
"html_url": "https://github.com/tmc",
"followers_url": "https://api.github.com/users/tmc/followers",
"following_u... | [] | closed | false | null | [] | null | 2 | 2023-10-23T15:22:05 | 2023-10-24T23:10:42 | 2023-10-24T23:10:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | With llama.cpp gaining multi-modality and llava support, it'd be nice to enable image inputs to compatible models in Ollama. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/879/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/879/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/69 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/69/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/69/comments | https://api.github.com/repos/ollama/ollama/issues/69/events | https://github.com/ollama/ollama/pull/69 | 1,799,737,310 | PR_kwDOJ0Z1Ps5VPrN- | 69 | Use embeddings to give the chat client session memory | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 1 | 2023-07-11T20:53:44 | 2023-09-08T15:13:20 | 2023-07-20T18:04:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/69",
"html_url": "https://github.com/ollama/ollama/pull/69",
"diff_url": "https://github.com/ollama/ollama/pull/69.diff",
"patch_url": "https://github.com/ollama/ollama/pull/69.patch",
"merged_at": null
} | Store previous questions and answers in the client during a chat session. Use embeddings to look up what is relevant to the current context.
This is an initial implementation. We will need to iterate to improve this experience through more dynamic prompts and possibly weighting recency in the conversation too. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/69/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/69/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4715 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4715/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4715/comments | https://api.github.com/repos/ollama/ollama/issues/4715/events | https://github.com/ollama/ollama/pull/4715 | 2,324,695,303 | PR_kwDOJ0Z1Ps5w9Wm4 | 4,715 | proper utf16 support | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-30T04:50:54 | 2024-06-10T18:41:29 | 2024-06-10T18:41:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4715",
"html_url": "https://github.com/ollama/ollama/pull/4715",
"diff_url": "https://github.com/ollama/ollama/pull/4715.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4715.patch",
"merged_at": "2024-06-10T18:41:29"
} | instead of relying on unreadable runes, which can appear for other reasons, check the header and adjust the scanner and decoder | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4715/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1006 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1006/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1006/comments | https://api.github.com/repos/ollama/ollama/issues/1006/events | https://github.com/ollama/ollama/issues/1006 | 1,977,742,083 | I_kwDOJ0Z1Ps514fMD | 1,006 | Mobile support | {
"login": "mikestaub",
"id": 1254558,
"node_id": "MDQ6VXNlcjEyNTQ1NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikestaub",
"html_url": "https://github.com/mikestaub",
"followers_url": "https://api.github.com/users/mi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 18 | 2023-11-05T11:17:35 | 2024-10-25T04:49:51 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there a plan to deploy this to iOS or Android so users can run models locally on their mobile devices?
What would it take to achieve this? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1006/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1006/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/984 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/984/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/984/comments | https://api.github.com/repos/ollama/ollama/issues/984/events | https://github.com/ollama/ollama/pull/984 | 1,975,328,306 | PR_kwDOJ0Z1Ps5efar6 | 984 | Remove grammar mistake: duplicate "install" in GPU support warning message | {
"login": "noahgitsham",
"id": 73707948,
"node_id": "MDQ6VXNlcjczNzA3OTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/73707948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahgitsham",
"html_url": "https://github.com/noahgitsham",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2023-11-03T01:50:32 | 2023-11-03T07:45:14 | 2023-11-03T07:45:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/984",
"html_url": "https://github.com/ollama/ollama/pull/984",
"diff_url": "https://github.com/ollama/ollama/pull/984.diff",
"patch_url": "https://github.com/ollama/ollama/pull/984.patch",
"merged_at": "2023-11-03T07:45:14"
} | Just realised another grammar mistake in the exact same error I just "fixed" 😸 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/984/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/401 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/401/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/401/comments | https://api.github.com/repos/ollama/ollama/issues/401/events | https://github.com/ollama/ollama/pull/401 | 1,862,370,219 | PR_kwDOJ0Z1Ps5YjCK8 | 401 | subprocess llama.cpp server | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-08-23T00:26:33 | 2023-08-30T20:35:05 | 2023-08-30T20:35:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/401",
"html_url": "https://github.com/ollama/ollama/pull/401",
"diff_url": "https://github.com/ollama/ollama/pull/401.diff",
"patch_url": "https://github.com/ollama/ollama/pull/401.patch",
"merged_at": "2023-08-30T20:35:03"
} | This is a pretty big change that moves llama.cpp from a library within cgo to an external process that we manage.
Why?
- This makes building for multiple platforms easier (no more windows cgo incompatibilities)
- We can fallback to non-gpu runners when needed
- Approximately 200ms faster on average in my tests
... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/401/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/401/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8558 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8558/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8558/comments | https://api.github.com/repos/ollama/ollama/issues/8558/events | https://github.com/ollama/ollama/pull/8558 | 2,808,419,398 | PR_kwDOJ0Z1Ps6I1_OP | 8,558 | Fix build for loongarch64, go arch is not same with uname -m | {
"login": "ideal",
"id": 261698,
"node_id": "MDQ6VXNlcjI2MTY5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/261698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ideal",
"html_url": "https://github.com/ideal",
"followers_url": "https://api.github.com/users/ideal/followers"... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2025-01-24T03:36:14 | 2025-01-27T17:32:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8558",
"html_url": "https://github.com/ollama/ollama/pull/8558",
"diff_url": "https://github.com/ollama/ollama/pull/8558.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8558.patch",
"merged_at": null
} | `go tool dist list` reports linux/loong64, but the result of `uname -m` is loongarch64
More info: https://areweloongyet.com/docs/loong-or-loongarch/
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8558/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8558/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6451 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6451/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6451/comments | https://api.github.com/repos/ollama/ollama/issues/6451/events | https://github.com/ollama/ollama/issues/6451 | 2,476,655,008 | I_kwDOJ0Z1Ps6TnsGg | 6,451 | cannot unmarshal array into Go struct field ChatRequest.messages of type string | {
"login": "McCannDahl",
"id": 19883817,
"node_id": "MDQ6VXNlcjE5ODgzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/19883817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McCannDahl",
"html_url": "https://github.com/McCannDahl",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-20T22:43:20 | 2024-08-21T13:26:25 | 2024-08-21T13:26:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I believe support for images via the OpenAI API format has already been added in https://github.com/ollama/ollama/pull/5208, but when I make this REST call, I get the following error.
Curl Command
```
curl -X POST -H "Content-Type: application/json" "http://localhost:11434/api/chat" -d '{"model": "llav... | {
"login": "McCannDahl",
"id": 19883817,
"node_id": "MDQ6VXNlcjE5ODgzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/19883817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McCannDahl",
"html_url": "https://github.com/McCannDahl",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6451/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/280 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/280/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/280/comments | https://api.github.com/repos/ollama/ollama/issues/280/events | https://github.com/ollama/ollama/issues/280 | 1,836,763,417 | I_kwDOJ0Z1Ps5teskZ | 280 | Non-interactive mode for batching inputs | {
"login": "jmthackett",
"id": 844469,
"node_id": "MDQ6VXNlcjg0NDQ2OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/844469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmthackett",
"html_url": "https://github.com/jmthackett",
"followers_url": "https://api.github.com/users/j... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-08-04T13:38:13 | 2023-12-04T19:09:14 | 2023-12-04T19:09:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just something along these lines:
```
ollama run <my model> -f input.txt -n <number of runs> -o output.txt
```
Not essential by any stretch of the imagination but it'd be handy. My use case is being able to batch process prompts by just iterating over a list of text files. At the moment I'm just looking at how ... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/280/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7030 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7030/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7030/comments | https://api.github.com/repos/ollama/ollama/issues/7030/events | https://github.com/ollama/ollama/pull/7030 | 2,554,801,732 | PR_kwDOJ0Z1Ps59BUP9 | 7,030 | server: add "Cache-Control: max-age=0" response header | {
"login": "justincranford",
"id": 2488888,
"node_id": "MDQ6VXNlcjI0ODg4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2488888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justincranford",
"html_url": "https://github.com/justincranford",
"followers_url": "https://api.gith... | [] | open | false | null | [] | null | 0 | 2024-09-29T08:13:41 | 2024-09-29T08:13:41 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7030",
"html_url": "https://github.com/ollama/ollama/pull/7030",
"diff_url": "https://github.com/ollama/ollama/pull/7030.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7030.patch",
"merged_at": null
} | The `Cache-Control` response header is missing for APIs such as /api/tags and /api/ps.
Adding a `Cache-Control: max-age=0` directive to the HTTP headers tells clients the response is considered stale immediately after receiving it. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7030/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8342 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8342/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8342/comments | https://api.github.com/repos/ollama/ollama/issues/8342/events | https://github.com/ollama/ollama/issues/8342 | 2,773,989,214 | I_kwDOJ0Z1Ps6lV7de | 8,342 | CORS error x-stainless-helper-method | {
"login": "isamu",
"id": 231763,
"node_id": "MDQ6VXNlcjIzMTc2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/231763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu",
"html_url": "https://github.com/isamu",
"followers_url": "https://api.github.com/users/isamu/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2025-01-08T00:33:11 | 2025-01-17T01:58:44 | 2025-01-17T01:58:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am making requests to a local instance of Ollama using the OpenAI npm package from the browser. When I add `stream: true` to the OpenAI npm call, I encounter the following CORS error:
```
Access to fetch at 'http://127.0.0.1:11434/v1/chat/completions' from origin 'http://localhost:5174' has been block... | {
"login": "isamu",
"id": 231763,
"node_id": "MDQ6VXNlcjIzMTc2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/231763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu",
"html_url": "https://github.com/isamu",
"followers_url": "https://api.github.com/users/isamu/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8342/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6188 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6188/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6188/comments | https://api.github.com/repos/ollama/ollama/issues/6188/events | https://github.com/ollama/ollama/pull/6188 | 2,449,535,600 | PR_kwDOJ0Z1Ps53f-rI | 6,188 | Allow singular array for CompletionRequest prompt field | {
"login": "igor-drozdov",
"id": 3660805,
"node_id": "MDQ6VXNlcjM2NjA4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3660805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igor-drozdov",
"html_url": "https://github.com/igor-drozdov",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 5 | 2024-08-05T21:47:12 | 2024-12-24T03:57:37 | 2024-12-24T03:57:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6188",
"html_url": "https://github.com/ollama/ollama/pull/6188",
"diff_url": "https://github.com/ollama/ollama/pull/6188.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6188.patch",
"merged_at": null
} | ## Overview
Allow OpenAI `v1/completions` to handle `[]string`, `[]int`, and `[][]int`, in addition to just a `string`, according to https://platform.openai.com/docs/api-reference/completions/create#completions-create-prompt
Also some aggregators (like litellm) send a list of prompts as a prompt: https://github.com/ollam... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6188/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/387/comments | https://api.github.com/repos/ollama/ollama/issues/387/events | https://github.com/ollama/ollama/issues/387 | 1,857,908,587 | I_kwDOJ0Z1Ps5uvW9r | 387 | Client can't connect to server | {
"login": "freeqaz",
"id": 4573221,
"node_id": "MDQ6VXNlcjQ1NzMyMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4573221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freeqaz",
"html_url": "https://github.com/freeqaz",
"followers_url": "https://api.github.com/users/freeqaz/... | [] | closed | false | null | [] | null | 8 | 2023-08-20T00:25:51 | 2024-01-08T17:44:35 | 2023-08-22T21:41:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Following the readme on my Arch linux setup yields the following error:
```sh
$ ./ollama run llama2
Error: could not connect to ollama server, run 'ollama serve' to start it
```
Steps to reproduce:
```sh
git clone git@github.com:jmorganca/ollama.git
cd ollama
go build .
./ollama serve &
./ollama run l... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/387/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1222 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1222/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1222/comments | https://api.github.com/repos/ollama/ollama/issues/1222/events | https://github.com/ollama/ollama/pull/1222 | 2,004,767,209 | PR_kwDOJ0Z1Ps5gDBMM | 1,222 | fix relative path on create | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-11-21T17:06:12 | 2023-11-21T20:43:18 | 2023-11-21T20:43:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1222",
"html_url": "https://github.com/ollama/ollama/pull/1222",
"diff_url": "https://github.com/ollama/ollama/pull/1222.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1222.patch",
"merged_at": "2023-11-21T20:43:18"
} | This fixes a regression in the API. Previously calling the API directly with a modelfile that has a relative file would work.
Ex:
```
FROM nous-capybara-34b.Q4_0.gguf
TEMPLATE "USER: {{ .Prompt }} ASSISTANT: "
```
```
curl -X POST http://localhost:11434/api/create -d '{
"name": "bruce/nous-capybara",
... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1222/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3916 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3916/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3916/comments | https://api.github.com/repos/ollama/ollama/issues/3916/events | https://github.com/ollama/ollama/issues/3916 | 2,264,122,788 | I_kwDOJ0Z1Ps6G88Wk | 3,916 | Error: The parameter is incorrect. | {
"login": "aaamoon",
"id": 25700476,
"node_id": "MDQ6VXNlcjI1NzAwNDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/25700476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaamoon",
"html_url": "https://github.com/aaamoon",
"followers_url": "https://api.github.com/users/aaamoo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 21 | 2024-04-25T17:27:29 | 2025-01-24T01:50:07 | 2024-05-13T08:43:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
[GIN] 2024/04/26 - 01:24:28 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/26 - 01:24:28 | 200 | 1.1779ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/26 - 01:24:28 | 200 | 1.4496ms | 127.0.0.1 | POST "/api/show"
{"function":"process_single_... | {
"login": "aaamoon",
"id": 25700476,
"node_id": "MDQ6VXNlcjI1NzAwNDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/25700476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaamoon",
"html_url": "https://github.com/aaamoon",
"followers_url": "https://api.github.com/users/aaamoo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3916/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7939 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7939/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7939/comments | https://api.github.com/repos/ollama/ollama/issues/7939/events | https://github.com/ollama/ollama/pull/7939 | 2,719,102,345 | PR_kwDOJ0Z1Ps6EHPp1 | 7,939 | Add generate endpoint for structured outputs | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-12-05T01:22:12 | 2024-12-05T01:37:14 | 2024-12-05T01:37:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7939",
"html_url": "https://github.com/ollama/ollama/pull/7939",
"diff_url": "https://github.com/ollama/ollama/pull/7939.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7939.patch",
"merged_at": "2024-12-05T01:37:12"
} | Follow up to #7900 | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7939/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6430 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6430/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6430/comments | https://api.github.com/repos/ollama/ollama/issues/6430/events | https://github.com/ollama/ollama/pull/6430 | 2,474,353,323 | PR_kwDOJ0Z1Ps54ygVE | 6,430 | Linux Doc cosmetic fixes. | {
"login": "fujitatomoya",
"id": 43395114,
"node_id": "MDQ6VXNlcjQzMzk1MTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/43395114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fujitatomoya",
"html_url": "https://github.com/fujitatomoya",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 4 | 2024-08-19T22:31:20 | 2024-09-04T18:45:09 | 2024-09-04T18:45:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6430",
"html_url": "https://github.com/ollama/ollama/pull/6430",
"diff_url": "https://github.com/ollama/ollama/pull/6430.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6430.patch",
"merged_at": "2024-09-04T18:45:09"
} | minor doc update for Linux Users. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6430/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/399 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/399/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/399/comments | https://api.github.com/repos/ollama/ollama/issues/399/events | https://github.com/ollama/ollama/issues/399 | 1,862,248,922 | I_kwDOJ0Z1Ps5u_6na | 399 | Images for Readmes | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 6 | 2023-08-22T21:44:08 | 2023-08-29T18:42:50 | 2023-08-22T21:44:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/399/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5108 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5108/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5108/comments | https://api.github.com/repos/ollama/ollama/issues/5108/events | https://github.com/ollama/ollama/issues/5108 | 2,358,749,805 | I_kwDOJ0Z1Ps6Ml6pt | 5,108 | ollama run loading a long time | {
"login": "wangzi2124",
"id": 13045190,
"node_id": "MDQ6VXNlcjEzMDQ1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/13045190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangzi2124",
"html_url": "https://github.com/wangzi2124",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-06-18T02:52:50 | 2024-06-19T19:47:54 | 2024-06-19T08:15:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="1178" alt="20240618105223" src="https://github.com/ollama/ollama/assets/13045190/24009f4a-46ee-4ec4-a5c5-582a80208aeb">
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | {
"login": "wangzi2124",
"id": 13045190,
"node_id": "MDQ6VXNlcjEzMDQ1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/13045190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangzi2124",
"html_url": "https://github.com/wangzi2124",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5108/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6566 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6566/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6566/comments | https://api.github.com/repos/ollama/ollama/issues/6566/events | https://github.com/ollama/ollama/issues/6566 | 2,496,508,417 | I_kwDOJ0Z1Ps6UzbIB | 6,566 | Ollama can't import safetensor of mistral 7B v0.1 | {
"login": "ZhoraZhang",
"id": 48072946,
"node_id": "MDQ6VXNlcjQ4MDcyOTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/48072946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhoraZhang",
"html_url": "https://github.com/ZhoraZhang",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-08-30T07:53:43 | 2024-09-02T07:46:28 | 2024-09-01T23:11:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have written a modelfile:
FROM /data/models/Mistral-7B-v0.1 (path of mistral's safetensor files)
I tried to import the model using /api/create but it failed:
{"error":"read /data/models/Mistral-7B-v0.1: is a directory"}
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
_N... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6566/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2642 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2642/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2642/comments | https://api.github.com/repos/ollama/ollama/issues/2642/events | https://github.com/ollama/ollama/issues/2642 | 2,147,190,008 | I_kwDOJ0Z1Ps5_-4T4 | 2,642 | 🚀🔍 GPU Mystery: Unleashing the Power on Small Models but Stuck on Idle with Giants like MiXtral8x7B & Llama 70B on Ubuntu 22 🧩💡 | {
"login": "jaifar530",
"id": 31308766,
"node_id": "MDQ6VXNlcjMxMzA4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31308766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaifar530",
"html_url": "https://github.com/jaifar530",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 4 | 2024-02-21T16:28:40 | 2024-03-12T02:00:28 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi
Using Ubuntu 22.
Both `nvcc --version` and `nvidia-smi` show valid output.
I've noticed that the GPU is not utilized when running larger models (e.g., MiXtral8x7B, Llama 70B), yet it functions well with smaller models like Mistral and Llama 7B. Is this issue known to others, or is it just me exp... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2642/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2642/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5856 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5856/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5856/comments | https://api.github.com/repos/ollama/ollama/issues/5856/events | https://github.com/ollama/ollama/pull/5856 | 2,423,402,616 | PR_kwDOJ0Z1Ps52Hrb1 | 5,856 | template: disable func checking | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | open | false | null | [] | null | 1 | 2024-07-22T17:36:09 | 2024-12-17T19:51:20 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5856",
"html_url": "https://github.com/ollama/ollama/pull/5856",
"diff_url": "https://github.com/ollama/ollama/pull/5856.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5856.patch",
"merged_at": null
  } | Func checking returns an error during parsing if a function is undefined, even if it doesn't end up being used. Disabling this enables additional functions to be defined in the future without breaking older versions, assuming the function usage is properly guarded. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5856/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6831 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6831/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6831/comments | https://api.github.com/repos/ollama/ollama/issues/6831/events | https://github.com/ollama/ollama/pull/6831 | 2,529,500,473 | PR_kwDOJ0Z1Ps57rRWD | 6,831 | cache: Clear old KV cache entries when evicting a slot | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-09-16T21:06:24 | 2024-09-16T21:15:57 | 2024-09-16T21:15:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6831",
"html_url": "https://github.com/ollama/ollama/pull/6831",
"diff_url": "https://github.com/ollama/ollama/pull/6831.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6831.patch",
"merged_at": "2024-09-16T21:15:56"
} | When forking a cache entry, if no empty slots are available we evict the least recently used one and copy over the KV entries from the closest match. However, this copy does not overwrite existing values but only adds new ones. Therefore, we need to clear the old slot first.
This change fixes two issues:
- The KV ... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6831/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4806 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4806/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4806/comments | https://api.github.com/repos/ollama/ollama/issues/4806/events | https://github.com/ollama/ollama/issues/4806 | 2,332,560,619 | I_kwDOJ0Z1Ps6LCAzr | 4,806 | codegemma broken on releases after v0.1.39 | {
"login": "evertjr",
"id": 13040196,
"node_id": "MDQ6VXNlcjEzMDQwMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13040196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evertjr",
"html_url": "https://github.com/evertjr",
"followers_url": "https://api.github.com/users/evertj... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 21 | 2024-06-04T05:05:04 | 2024-12-13T23:51:59 | 2024-11-12T01:42:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I use codegemma with the continue.dev extension in VS Code. It works fine on version 0.1.39, but on the last two releases it doesn't generate completions and behaves very strangely in the terminal.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4806/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2358/comments | https://api.github.com/repos/ollama/ollama/issues/2358/events | https://github.com/ollama/ollama/issues/2358 | 2,117,754,882 | I_kwDOJ0Z1Ps5-OmAC | 2,358 | Models autodelete? | {
"login": "SinanAkkoyun",
"id": 43215895,
"node_id": "MDQ6VXNlcjQzMjE1ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/43215895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinanAkkoyun",
"html_url": "https://github.com/SinanAkkoyun",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVw... | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 15 | 2024-02-05T06:09:15 | 2024-09-30T19:12:24 | 2024-09-30T19:12:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | Hi! I noticed that as soon as I kill Ollama (because one cannot unload models from VRAM manually) and start `ollama serve` on my own, all models delete themselves.
Is that a bug or a feature (perhaps ensuring non-corrupted files)? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2358/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6647 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6647/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6647/comments | https://api.github.com/repos/ollama/ollama/issues/6647/events | https://github.com/ollama/ollama/issues/6647 | 2,506,588,739 | I_kwDOJ0Z1Ps6VZ4JD | 6,647 | can't use nvidia GPU only after sleep | {
"login": "brookate",
"id": 171191880,
"node_id": "U_kgDOCjQuSA",
"avatar_url": "https://avatars.githubusercontent.com/u/171191880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brookate",
"html_url": "https://github.com/brookate",
"followers_url": "https://api.github.com/users/brookate/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-09-05T01:35:17 | 2024-09-05T16:21:35 | 2024-09-05T16:21:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After Gentoo Linux resumes from sleep, Ollama only uses the CPU.
With OLLAMA_DEBUG turned on, I find this line:
```
time=2024-09-05T09:20:35.622+08:00 level=DEBUG source=gpu.go:521 msg="discovered GPU libraries" paths="[/tmp/ollama786265597/runners/cuda_v11/libcudart.so.11.0 /opt/cuda/lib64/libcudart.so.12.6.37]"
cu... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6647/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7434 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7434/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7434/comments | https://api.github.com/repos/ollama/ollama/issues/7434/events | https://github.com/ollama/ollama/pull/7434 | 2,625,652,718 | PR_kwDOJ0Z1Ps6AdqfH | 7,434 | Remove server.cpp compatibility code | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-10-31T00:26:44 | 2024-11-06T21:32:20 | 2024-11-06T21:32:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7434",
"html_url": "https://github.com/ollama/ollama/pull/7434",
"diff_url": "https://github.com/ollama/ollama/pull/7434.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7434.patch",
"merged_at": "2024-11-06T21:32:19"
  } | Some interfaces in the Go runner were kept the same as server.cpp for compatibility; we can now start to make things more natural.
The one user-facing impact of this change is that multimodal models other than mllama can now support parallel requests. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7434/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2174 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2174/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2174/comments | https://api.github.com/repos/ollama/ollama/issues/2174/events | https://github.com/ollama/ollama/pull/2174 | 2,098,860,070 | PR_kwDOJ0Z1Ps5k-_kP | 2,174 | More logging for gpu management | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-01-24T18:35:14 | 2024-01-24T19:09:20 | 2024-01-24T19:09:18 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2174",
"html_url": "https://github.com/ollama/ollama/pull/2174",
"diff_url": "https://github.com/ollama/ollama/pull/2174.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2174.patch",
"merged_at": "2024-01-24T19:09:17"
} | Fix an ordering glitch of dlerr/dlclose and add more logging to help root cause some crashes users are hitting. This also refines the function pointer names to use the underlying function names instead of simplified names for readability. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2174/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/505 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/505/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/505/comments | https://api.github.com/repos/ollama/ollama/issues/505/events | https://github.com/ollama/ollama/issues/505 | 1,889,361,502 | I_kwDOJ0Z1Ps5wnV5e | 505 | Go library fails to compile | {
"login": "JayNakrani",
"id": 6269279,
"node_id": "MDQ6VXNlcjYyNjkyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6269279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JayNakrani",
"html_url": "https://github.com/JayNakrani",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 5 | 2023-09-10T22:27:08 | 2023-10-03T18:21:02 | 2023-09-11T17:51:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am trying to use the [Ollama Go library](https://pkg.go.dev/github.com/jmorganca/ollama/server) in my own project, and running into the following error:
```shell
% go build .
../../go/pkg/mod/github.com/jmorganca/ollama@v0.0.18/llm/ggml_llama.go:31:12: pattern llama.cpp/ggml/build/*/bin/*: no matching files fo... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/505/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/505/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/2991 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2991/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2991/comments | https://api.github.com/repos/ollama/ollama/issues/2991/events | https://github.com/ollama/ollama/pull/2991 | 2,174,627,374 | PR_kwDOJ0Z1Ps5pAdn5 | 2,991 | fix ci | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-07T19:34:00 | 2024-03-07T19:35:06 | 2024-03-07T19:35:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2991",
"html_url": "https://github.com/ollama/ollama/pull/2991",
"diff_url": "https://github.com/ollama/ollama/pull/2991.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2991.patch",
"merged_at": "2024-03-07T19:35:06"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2991/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2264 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2264/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2264/comments | https://api.github.com/repos/ollama/ollama/issues/2264/events | https://github.com/ollama/ollama/pull/2264 | 2,106,894,561 | PR_kwDOJ0Z1Ps5lZpJ2 | 2,264 | Add support for MIG mode detection and use | {
"login": "waTeim",
"id": 5779395,
"node_id": "MDQ6VXNlcjU3NzkzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5779395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waTeim",
"html_url": "https://github.com/waTeim",
"followers_url": "https://api.github.com/users/waTeim/foll... | [] | closed | false | null | [] | null | 18 | 2024-01-30T03:47:30 | 2024-05-25T15:37:06 | 2024-05-25T15:37:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2264",
"html_url": "https://github.com/ollama/ollama/pull/2264",
"diff_url": "https://github.com/ollama/ollama/pull/2264.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2264.patch",
"merged_at": null
} | The issue here is that when the startup code checks for the capabilities of the GPU so it can allocate resources (in particular memory), it mistakenly uses the host GPU for its check rather than the MIG instance. This PR modifies the algorithm of cuda GPU detection. Essentially for each host GPU, check it that GPU s... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2264/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2264/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7216 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7216/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7216/comments | https://api.github.com/repos/ollama/ollama/issues/7216/events | https://github.com/ollama/ollama/pull/7216 | 2,589,972,279 | PR_kwDOJ0Z1Ps5-vGyM | 7,216 | Update README.md | {
"login": "tcsenpai",
"id": 153772003,
"node_id": "U_kgDOCSpf4w",
"avatar_url": "https://avatars.githubusercontent.com/u/153772003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tcsenpai",
"html_url": "https://github.com/tcsenpai",
"followers_url": "https://api.github.com/users/tcsenpai/... | [] | closed | false | null | [] | null | 3 | 2024-10-15T21:29:27 | 2024-11-29T20:45:25 | 2024-11-28T23:16:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7216",
"html_url": "https://github.com/ollama/ollama/pull/7216",
"diff_url": "https://github.com/ollama/ollama/pull/7216.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7216.patch",
"merged_at": "2024-11-28T23:16:28"
} | added three projects | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7216/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6385 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6385/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6385/comments | https://api.github.com/repos/ollama/ollama/issues/6385/events | https://github.com/ollama/ollama/issues/6385 | 2,469,368,937 | I_kwDOJ0Z1Ps6TL5Rp | 6,385 | Significant Drop in Prompt Adherence in Updated Gemma2 Model | {
"login": "shzhou12",
"id": 26590783,
"node_id": "MDQ6VXNlcjI2NTkwNzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/26590783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shzhou12",
"html_url": "https://github.com/shzhou12",
"followers_url": "https://api.github.com/users/shz... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-08-16T03:27:57 | 2024-08-16T05:49:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I recently noticed that the Gemma2 model was updated 5 weeks ago, resulting in a new version of gemma2:9b-instruct-fp16:
- Older Version (6 weeks ago): gemma2:9b-instruct-fp16 - **9de55d4bf6ae** - 18 GB
- Updated Version (5 weeks ago): gemma2:9b-instruct-fp16 - **28e6684b0850** - 18... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6385/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6656 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6656/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6656/comments | https://api.github.com/repos/ollama/ollama/issues/6656/events | https://github.com/ollama/ollama/pull/6656 | 2,507,674,135 | PR_kwDOJ0Z1Ps56hQ7W | 6,656 | Fixed redirect check if direct URL is already Present | {
"login": "Tobix99",
"id": 22603015,
"node_id": "MDQ6VXNlcjIyNjAzMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/22603015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tobix99",
"html_url": "https://github.com/Tobix99",
"followers_url": "https://api.github.com/users/Tobix9... | [] | closed | false | null | [] | null | 1 | 2024-09-05T12:36:50 | 2024-09-05T17:48:27 | 2024-09-05T17:48:27 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6656",
"html_url": "https://github.com/ollama/ollama/pull/6656",
"diff_url": "https://github.com/ollama/ollama/pull/6656.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6656.patch",
"merged_at": "2024-09-05T17:48:27"
} | This is a fix regarding #6308 where the redirect check would fail with
`unexpected status code 200`.
The problem is that if you try to pull a model from an internal registry, there is no redirect, but the current logic expects at least one redirect. So I've added the StatusCode 200 - OK to the check and ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6656/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6656/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5678 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5678/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5678/comments | https://api.github.com/repos/ollama/ollama/issues/5678/events | https://github.com/ollama/ollama/pull/5678 | 2,407,096,074 | PR_kwDOJ0Z1Ps51TTQ1 | 5,678 | Add API integration tests | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 3 | 2024-07-13T18:46:23 | 2025-01-16T17:36:31 | 2025-01-16T17:36:31 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5678",
"html_url": "https://github.com/ollama/ollama/pull/5678",
"diff_url": "https://github.com/ollama/ollama/pull/5678.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5678.patch",
"merged_at": null
} | These tests try to validate fields in the response payloads to catch regressions if we drop any. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5678/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4056 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4056/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4056/comments | https://api.github.com/repos/ollama/ollama/issues/4056/events | https://github.com/ollama/ollama/issues/4056 | 2,271,980,393 | I_kwDOJ0Z1Ps6Ha6tp | 4,056 | How to change model store path on disk? | {
"login": "Leonard-Li777",
"id": 16662626,
"node_id": "MDQ6VXNlcjE2NjYyNjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/16662626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leonard-Li777",
"html_url": "https://github.com/Leonard-Li777",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-04-30T16:28:47 | 2024-04-30T18:06:23 | 2024-04-30T18:06:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There is not enough space on the disk. | {
"login": "Leonard-Li777",
"id": 16662626,
"node_id": "MDQ6VXNlcjE2NjYyNjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/16662626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leonard-Li777",
"html_url": "https://github.com/Leonard-Li777",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4056/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5534/comments | https://api.github.com/repos/ollama/ollama/issues/5534/events | https://github.com/ollama/ollama/pull/5534 | 2,394,139,803 | PR_kwDOJ0Z1Ps50nfzP | 5,534 | llm: allow gemma 2 to context shift | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-07T17:15:01 | 2024-07-07T17:41:53 | 2024-07-07T17:41:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5534",
"html_url": "https://github.com/ollama/ollama/pull/5534",
"diff_url": "https://github.com/ollama/ollama/pull/5534.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5534.patch",
"merged_at": "2024-07-07T17:41:51"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5534/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4468 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4468/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4468/comments | https://api.github.com/repos/ollama/ollama/issues/4468/events | https://github.com/ollama/ollama/issues/4468 | 2,299,552,120 | I_kwDOJ0Z1Ps6JEGF4 | 4,468 | Ollama speed dropped with setting OLLAMA_NUM_PARALLEL | {
"login": "hugefrog",
"id": 83398604,
"node_id": "MDQ6VXNlcjgzMzk4NjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/83398604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugefrog",
"html_url": "https://github.com/hugefrog",
"followers_url": "https://api.github.com/users/hug... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-05-16T07:17:02 | 2024-06-24T15:13:57 | 2024-06-21T23:27:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After setting OLLAMA_NUM_PARALLEL in Ollama 0.1.38, the speed of single-user access has dropped by half, and the GPU utilization rate is only about 50%.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4468/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7782 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7782/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7782/comments | https://api.github.com/repos/ollama/ollama/issues/7782/events | https://github.com/ollama/ollama/pull/7782 | 2,680,294,447 | PR_kwDOJ0Z1Ps6CtBbB | 7,782 | tests: fix max queue integration test | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-11-21T17:32:27 | 2024-11-22T16:05:49 | 2024-11-22T16:05:46 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7782",
"html_url": "https://github.com/ollama/ollama/pull/7782",
"diff_url": "https://github.com/ollama/ollama/pull/7782.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7782.patch",
"merged_at": "2024-11-22T16:05:46"
} | This had fallen out of sync with the envconfig behavior, where max queue default was not zero. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7782/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4543 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4543/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4543/comments | https://api.github.com/repos/ollama/ollama/issues/4543/events | https://github.com/ollama/ollama/pull/4543 | 2,306,518,363 | PR_kwDOJ0Z1Ps5v_N-V | 4,543 | simplify safetensors reading | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-20T18:18:01 | 2024-05-21T21:43:56 | 2024-05-21T21:43:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4543",
"html_url": "https://github.com/ollama/ollama/pull/4543",
"diff_url": "https://github.com/ollama/ollama/pull/4543.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4543.patch",
"merged_at": "2024-05-21T21:43:55"
} | mapstructure is unnecessary and the safetensors header can be read directly into a struct | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4543/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6366 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6366/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6366/comments | https://api.github.com/repos/ollama/ollama/issues/6366/events | https://github.com/ollama/ollama/issues/6366 | 2,467,123,490 | I_kwDOJ0Z1Ps6TDVEi | 6,366 | Unable to Pull Model Manifest - "Get https://registry.ollama.ai/v2/library/llama3/manifests/latest: EOF" | {
"login": "uestcxt",
"id": 58102372,
"node_id": "MDQ6VXNlcjU4MTAyMzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/58102372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uestcxt",
"html_url": "https://github.com/uestcxt",
"followers_url": "https://api.github.com/users/uestcx... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-08-15T01:29:38 | 2024-09-17T15:34:01 | 2024-09-17T15:34:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
# Description
I am experiencing an issue when trying to pull the llama3 model using the ollama CLI. The process fails with an "EOF" error. I have also tried pulling other models, but the same error occurs.
I have verified that DNS resolution works correctly, as I can resolve the registry.ollam... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6366/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1578 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1578/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1578/comments | https://api.github.com/repos/ollama/ollama/issues/1578/events | https://github.com/ollama/ollama/issues/1578 | 2,046,155,280 | I_kwDOJ0Z1Ps559doQ | 1,578 | Ollama order of magnitude slower on Apple M1 vs Llama.cpp | {
"login": "svilupp",
"id": 49557684,
"node_id": "MDQ6VXNlcjQ5NTU3Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/49557684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svilupp",
"html_url": "https://github.com/svilupp",
"followers_url": "https://api.github.com/users/svilup... | [] | closed | false | null | [] | null | 7 | 2023-12-18T09:27:56 | 2024-12-06T22:08:09 | 2023-12-18T14:46:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | First of all, thank you for the amazing app!
**Observation**: When I run the same prompt via the latest Ollama vs Llama.cpp, I get an order of magnitude slower generation on Ollama.
- With Ollama in generation, GPU usage is 0% and from time to time it jumps to 40%
- With llama.cpp in generation, GPU usage constantly sit... | {
"login": "svilupp",
"id": 49557684,
"node_id": "MDQ6VXNlcjQ5NTU3Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/49557684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svilupp",
"html_url": "https://github.com/svilupp",
"followers_url": "https://api.github.com/users/svilup... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1578/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1907 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1907/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1907/comments | https://api.github.com/repos/ollama/ollama/issues/1907/events | https://github.com/ollama/ollama/issues/1907 | 2,075,150,578 | I_kwDOJ0Z1Ps57sEjy | 1,907 | Mixtral OOM | {
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder54... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 12 | 2024-01-10T20:29:40 | 2024-01-14T22:14:18 | 2024-01-14T22:14:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I’ve been enjoying the new auto-VRAM implementation for the most part, but when trying to use Mixtral at very large context sizes (~30000) to process a 25k token document, I’m still getting OOMs, repeatedly. (So, not when changing context sizes, which I see is an existing ticket.)
I tried different context sizes bet... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1907/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3933/comments | https://api.github.com/repos/ollama/ollama/issues/3933/events | https://github.com/ollama/ollama/pull/3933 | 2,265,000,754 | PR_kwDOJ0Z1Ps5tzRL2 | 3,933 | Move cuda/rocm dependency gathering into generate script | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-26T05:38:55 | 2024-04-26T14:01:30 | 2024-04-26T14:01:24 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3933",
"html_url": "https://github.com/ollama/ollama/pull/3933",
"diff_url": "https://github.com/ollama/ollama/pull/3933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3933.patch",
"merged_at": "2024-04-26T14:01:24"
} | This will make it simpler for CI to accumulate artifacts from prior steps | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3933/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1542 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1542/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1542/comments | https://api.github.com/repos/ollama/ollama/issues/1542/events | https://github.com/ollama/ollama/issues/1542 | 2,043,545,495 | I_kwDOJ0Z1Ps55zgeX | 1,542 | API endpoint to query models supported by ollama | {
"login": "gmaijoe",
"id": 7184919,
"node_id": "MDQ6VXNlcjcxODQ5MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7184919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmaijoe",
"html_url": "https://github.com/gmaijoe",
"followers_url": "https://api.github.com/users/gmaijoe/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2023-12-15T11:20:23 | 2023-12-24T21:59:52 | 2023-12-24T21:59:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | https://github.com/ollama-webui/ollama-webui is looking to support a deeper integration with ollama. Is there any way to expose an API of all model names supported by ollama? Could manually scrape, but an API would be easier for ingestion | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1542/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1542/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6495 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6495/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6495/comments | https://api.github.com/repos/ollama/ollama/issues/6495/events | https://github.com/ollama/ollama/pull/6495 | 2,484,938,971 | PR_kwDOJ0Z1Ps55Vyyt | 6,495 | Detect running in a container | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-25T00:01:39 | 2024-09-05T20:37:09 | 2024-09-05T20:24:51 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6495",
"html_url": "https://github.com/ollama/ollama/pull/6495",
"diff_url": "https://github.com/ollama/ollama/pull/6495.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6495.patch",
"merged_at": "2024-09-05T20:24:51"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6495/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5142 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5142/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5142/comments | https://api.github.com/repos/ollama/ollama/issues/5142/events | https://github.com/ollama/ollama/issues/5142 | 2,362,524,800 | I_kwDOJ0Z1Ps6M0USA | 5,142 | `Segmentation fault` on Ubuntu 24.04 LXC container | {
"login": "MmDawN",
"id": 40926229,
"node_id": "MDQ6VXNlcjQwOTI2MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40926229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MmDawN",
"html_url": "https://github.com/MmDawN",
"followers_url": "https://api.github.com/users/MmDawN/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-06-19T14:25:57 | 2024-06-27T03:13:22 | 2024-06-27T03:13:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My runtime environment is based on an LXC container running Ubuntu 24.04 LTS.
After the installation of ollama v0.1.44, running `ollama` in bash returns a `Segmentation fault` error.
The `journalctl -u ollama` command reveals the following recurring error and indicates constant restarting... | {
"login": "MmDawN",
"id": 40926229,
"node_id": "MDQ6VXNlcjQwOTI2MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40926229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MmDawN",
"html_url": "https://github.com/MmDawN",
"followers_url": "https://api.github.com/users/MmDawN/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5142/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1327 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1327/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1327/comments | https://api.github.com/repos/ollama/ollama/issues/1327/events | https://github.com/ollama/ollama/issues/1327 | 2,018,043,425 | I_kwDOJ0Z1Ps54SOYh | 1,327 | Modelfile prompt should support chat / multiturn. | {
"login": "ehartford",
"id": 1117701,
"node_id": "MDQ6VXNlcjExMTc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1117701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehartford",
"html_url": "https://github.com/ehartford",
"followers_url": "https://api.github.com/users/eh... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 1 | 2023-11-30T07:28:38 | 2023-12-04T23:23:03 | 2023-12-04T23:23:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#template

So basically all that's coming in is .Prompt which is just a string.

But that can't handle chat and multi turn.

What's coming in should look a messages array. then this template should format that into a prompt.
```
[
  { "role": "system", "content": "You are a helpful AI assistant" },
  { "role": "user", "content": "Hello AI, How are you today?" },
  { "role": "assistant", "content": "I have no notion of time. State your question?" },
  { "role": "user", "content": "Oh ok then, tell me the 38th state" }
]
```

then the template in the modelfile would look something like

```
{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}<|im_start|>assistant

```

Basically the idea that a prompt consists of a single system message and a single user message, isn't how most models actually work. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1327/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1327/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5819 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5819/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5819/comments | https://api.github.com/repos/ollama/ollama/issues/5819/events | https://github.com/ollama/ollama/pull/5819 | 2,421,080,552 | PR_kwDOJ0Z1Ps51_zto | 5,819 | Track and Expose GPU discovery failure information | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-20T22:24:45 | 2024-07-20T22:25:07 | 2024-07-20T22:25:07 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5819",
"html_url": "https://github.com/ollama/ollama/pull/5819",
"diff_url": "https://github.com/ollama/ollama/pull/5819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5819.patch",
"merged_at": null
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5819/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8346 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8346/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8346/comments | https://api.github.com/repos/ollama/ollama/issues/8346/events | https://github.com/ollama/ollama/issues/8346 | 2,775,653,112 | I_kwDOJ0Z1Ps6lcRr4 | 8,346 | Unable to run llama on IPv6 Single Stack env | {
"login": "chaturvedi-kna",
"id": 63336082,
"node_id": "MDQ6VXNlcjYzMzM2MDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/63336082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaturvedi-kna",
"html_url": "https://github.com/chaturvedi-kna",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2025-01-08T15:15:22 | 2025-01-11T08:37:24 | 2025-01-11T08:37:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi Guys,
I am using Ollama on OpenShift (v4.16), Open Data Hub followed below guide and used the same image mentioned there
https://github.com/rh-aiservices-bu/llm-on-openshift/tree/main/serving-runtimes/ollama_runtime
In the [ollama-runtime.yaml](https://github.com/rh-aiservices-bu/llm-o... | {
"login": "chaturvedi-kna",
"id": 63336082,
"node_id": "MDQ6VXNlcjYzMzM2MDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/63336082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaturvedi-kna",
"html_url": "https://github.com/chaturvedi-kna",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8346/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2149 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2149/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2149/comments | https://api.github.com/repos/ollama/ollama/issues/2149/events | https://github.com/ollama/ollama/pull/2149 | 2,095,052,458 | PR_kwDOJ0Z1Ps5kyE7x | 2,149 | Use all layers for metal on macOS if model is small enough | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-23T01:05:05 | 2024-01-23T01:40:07 | 2024-01-23T01:40:07 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2149",
"html_url": "https://github.com/ollama/ollama/pull/2149",
"diff_url": "https://github.com/ollama/ollama/pull/2149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2149.patch",
"merged_at": "2024-01-23T01:40:07"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2149/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5895 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5895/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5895/comments | https://api.github.com/repos/ollama/ollama/issues/5895/events | https://github.com/ollama/ollama/pull/5895 | 2,426,214,454 | PR_kwDOJ0Z1Ps52RRo9 | 5,895 | Better explain multi-gpu behavior | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-23T22:16:45 | 2024-07-29T21:25:44 | 2024-07-29T21:25:42 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5895",
"html_url": "https://github.com/ollama/ollama/pull/5895",
"diff_url": "https://github.com/ollama/ollama/pull/5895.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5895.patch",
"merged_at": "2024-07-29T21:25:41"
} | Fixes #5635 #5455
This topic seems to come up ~weekly, so let's explain it more clearly in the docs, and expose the existing env var to force spreading over all GPUs. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5895/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4639 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4639/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4639/comments | https://api.github.com/repos/ollama/ollama/issues/4639/events | https://github.com/ollama/ollama/issues/4639 | 2,317,162,519 | I_kwDOJ0Z1Ps6KHRgX | 4,639 | Prompt caching causes reproducible outputs to be inconsistent | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-05-25T18:10:35 | 2024-06-11T21:29:47 | 2024-06-11T21:29:46 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When trying to generate [reproducible outputs](https://github.com/ollama/ollama/blob/main/docs/api.md#request-reproducible-outputs), changing prompts causes results to be inconsistent
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4639/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5443 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5443/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5443/comments | https://api.github.com/repos/ollama/ollama/issues/5443/events | https://github.com/ollama/ollama/pull/5443 | 2,387,121,395 | PR_kwDOJ0Z1Ps50PwQF | 5,443 | add conversion for microsoft phi 3 mini/medium 4k, 128k | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-02T20:35:53 | 2024-08-12T22:48:00 | 2024-08-12T22:47:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5443",
"html_url": "https://github.com/ollama/ollama/pull/5443",
"diff_url": "https://github.com/ollama/ollama/pull/5443.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5443.patch",
"merged_at": "2024-08-12T22:47:58"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5443/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4213 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4213/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4213/comments | https://api.github.com/repos/ollama/ollama/issues/4213/events | https://github.com/ollama/ollama/pull/4213 | 2,281,920,211 | PR_kwDOJ0Z1Ps5usSyJ | 4,213 | Close server on receiving signal | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-06T22:51:37 | 2024-05-06T23:01:38 | 2024-05-06T23:01:37 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4213",
"html_url": "https://github.com/ollama/ollama/pull/4213",
"diff_url": "https://github.com/ollama/ollama/pull/4213.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4213.patch",
"merged_at": "2024-05-06T23:01:37"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4213/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2562 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2562/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2562/comments | https://api.github.com/repos/ollama/ollama/issues/2562/events | https://github.com/ollama/ollama/issues/2562 | 2,140,125,367 | I_kwDOJ0Z1Ps5_j7i3 | 2,562 | Inconsistent OCR Results with LLaVA 1.6 and Ollama vs. LLaVA Online Demo | {
"login": "arcaweb-ch",
"id": 43749906,
"node_id": "MDQ6VXNlcjQzNzQ5OTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/43749906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcaweb-ch",
"html_url": "https://github.com/arcaweb-ch",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | open | false | null | [] | null | 8 | 2024-02-17T14:11:32 | 2024-05-17T11:33:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey there, I've posted this issue on [LLaVA repo](https://github.com/haotian-liu/LLaVA/issues/1116) already, not sure if this problem refers to an implementation issue in Ollama. Any idea? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2562/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3195 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3195/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3195/comments | https://api.github.com/repos/ollama/ollama/issues/3195/events | https://github.com/ollama/ollama/issues/3195 | 2,190,712,706 | I_kwDOJ0Z1Ps6Ck5-C | 3,195 | Modified /systemd/system/ollama.service but it didn't take effect | {
"login": "michelle-chou25",
"id": 71402902,
"node_id": "MDQ6VXNlcjcxNDAyOTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/71402902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michelle-chou25",
"html_url": "https://github.com/michelle-chou25",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-03-17T15:04:00 | 2024-05-10T15:57:22 | 2024-03-18T07:49:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama service file's modification didn't take effect
### What did you expect to see?
Make the modification of the service file effective?
### Steps to reproduce
1. I tried to start ollama service but failed it, used "sudo journalctl -u ollama --reverse --lines=100" to check the ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3195/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7958 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7958/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7958/comments | https://api.github.com/repos/ollama/ollama/issues/7958/events | https://github.com/ollama/ollama/issues/7958 | 2,721,506,915 | I_kwDOJ0Z1Ps6iNuZj | 7,958 | Model request: HunyuanVideo text-to-video | {
"login": "artem-zinnatullin",
"id": 967132,
"node_id": "MDQ6VXNlcjk2NzEzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/967132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artem-zinnatullin",
"html_url": "https://github.com/artem-zinnatullin",
"followers_url": "https://a... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-12-05T21:51:24 | 2024-12-14T15:42:32 | 2024-12-14T15:42:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It's an "open-source" rich text-to-video model by Tencent:
>HunyuanVideo represents the most parameter-rich and high-performance text-to-video model currently available in the open-source domain. With 13 billion parameters, it is capable of generating videos that exhibit high physical accuracy and scene consistency, t... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7958/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/5151 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5151/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5151/comments | https://api.github.com/repos/ollama/ollama/issues/5151/events | https://github.com/ollama/ollama/pull/5151 | 2,363,189,410 | PR_kwDOJ0Z1Ps5zAdxa | 5,151 | Update OpenAI Compatibility Docs with /v1/models | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-06-19T22:09:40 | 2024-08-01T22:48:45 | 2024-08-01T22:48:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5151",
"html_url": "https://github.com/ollama/ollama/pull/5151",
"diff_url": "https://github.com/ollama/ollama/pull/5151.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5151.patch",
"merged_at": "2024-08-01T22:48:44"
} | null | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5151/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2874 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2874/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2874/comments | https://api.github.com/repos/ollama/ollama/issues/2874/events | https://github.com/ollama/ollama/issues/2874 | 2,164,662,099 | I_kwDOJ0Z1Ps6BBh9T | 2,874 | Support Qwen VL | {
"login": "thesby",
"id": 10773886,
"node_id": "MDQ6VXNlcjEwNzczODg2",
"avatar_url": "https://avatars.githubusercontent.com/u/10773886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesby",
"html_url": "https://github.com/thesby",
"followers_url": "https://api.github.com/users/thesby/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 49 | 2024-03-02T06:53:40 | 2025-01-28T15:07:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could you please support Qwen VL model | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2874/reactions",
"total_count": 77,
"+1": 76,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2874/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4493 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4493/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4493/comments | https://api.github.com/repos/ollama/ollama/issues/4493/events | https://github.com/ollama/ollama/issues/4493 | 2,302,119,638 | I_kwDOJ0Z1Ps6JN47W | 4,493 | How can we make model calls faster | {
"login": "userandpass",
"id": 26294920,
"node_id": "MDQ6VXNlcjI2Mjk0OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/26294920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userandpass",
"html_url": "https://github.com/userandpass",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjg... | closed | false | null | [] | null | 3 | 2024-05-17T08:25:22 | 2025-01-12T00:53:21 | 2025-01-12T00:53:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I used Docker to run multiple Ollama containers and distributed requests among them using nginx, which was much slower than calling the deployed model directly
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.34 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4493/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7766 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7766/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7766/comments | https://api.github.com/repos/ollama/ollama/issues/7766/events | https://github.com/ollama/ollama/issues/7766 | 2,676,975,457 | I_kwDOJ0Z1Ps6fj2dh | 7,766 | ollama hangs randomly and sometimes responds with G's | {
"login": "Pho3niX90",
"id": 7858187,
"node_id": "MDQ6VXNlcjc4NTgxODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7858187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pho3niX90",
"html_url": "https://github.com/Pho3niX90",
"followers_url": "https://api.github.com/users/Ph... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 11 | 2024-11-20T19:32:34 | 2024-11-23T19:47:55 | 2024-11-23T19:47:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am starting my journey into Ollama, so my info below might not align 100% with what you need, but I can provide more as needed.
After the prompts "hang", I need to reboot the service to get it going again.
Short generations seem relatively OK,
Asking for longer responses typically hangs it mid ... | {
"login": "Pho3niX90",
"id": 7858187,
"node_id": "MDQ6VXNlcjc4NTgxODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7858187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pho3niX90",
"html_url": "https://github.com/Pho3niX90",
"followers_url": "https://api.github.com/users/Ph... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7766/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1859 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1859/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1859/comments | https://api.github.com/repos/ollama/ollama/issues/1859/events | https://github.com/ollama/ollama/issues/1859 | 2,071,016,279 | I_kwDOJ0Z1Ps57cTNX | 1,859 | Pull model manifest connect timed out | {
"login": "shivrajjadhav733",
"id": 35407279,
"node_id": "MDQ6VXNlcjM1NDA3Mjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/35407279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivrajjadhav733",
"html_url": "https://github.com/shivrajjadhav733",
"followers_url": "https://... | [] | closed | false | null | [] | null | 24 | 2024-01-08T18:36:59 | 2024-07-25T03:35:49 | 2024-03-11T20:24:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | OS - Apple M1 Pro chip
I tried to install Ollama on my machine. Installation was successful. I can see the Ollama icon in the menu bar at the top.
When I try to run a model using the command -
ollama run laama2
Or
ollama run mistral
I get attached error of operation timed out.
- I am able to run the model converted to `.gguf`.
- However, I would like to import and run directly from... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8598/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8598/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3610 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3610/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3610/comments | https://api.github.com/repos/ollama/ollama/issues/3610/events | https://github.com/ollama/ollama/pull/3610 | 2,238,984,165 | PR_kwDOJ0Z1Ps5sbbAo | 3,610 | Added Solar example at README.md | {
"login": "hunkim",
"id": 901975,
"node_id": "MDQ6VXNlcjkwMTk3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/901975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hunkim",
"html_url": "https://github.com/hunkim",
"followers_url": "https://api.github.com/users/hunkim/follow... | [] | closed | false | null | [] | null | 1 | 2024-04-12T03:37:07 | 2024-04-15T23:54:23 | 2024-04-15T23:54:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3610",
"html_url": "https://github.com/ollama/ollama/pull/3610",
"diff_url": "https://github.com/ollama/ollama/pull/3610.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3610.patch",
"merged_at": "2024-04-15T23:54:23"
} | Added just one line
| Solar | 10.7B | 6.1GB | `ollama run solar` | | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3610/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5007 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5007/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5007/comments | https://api.github.com/repos/ollama/ollama/issues/5007/events | https://github.com/ollama/ollama/pull/5007 | 2,349,673,042 | PR_kwDOJ0Z1Ps5ySS7K | 5,007 | OpenAI: /v1/models and /v1/models/{model} compatibility | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 2 | 2024-06-12T20:59:37 | 2024-07-02T18:51:00 | 2024-07-02T18:50:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5007",
"html_url": "https://github.com/ollama/ollama/pull/5007",
"diff_url": "https://github.com/ollama/ollama/pull/5007.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5007.patch",
"merged_at": "2024-07-02T18:50:56"
} | This PR adds compatibility with the /v1/models and /v1/models/{model} endpoints for listing models.
E.g.
`curl http://localhost:11434/v1/models`
```
{
"object": "list",
"data": [
{
"id": "mario:latest",
"object": "model",
"created": 1718141294,
"owned_by": "ollama"
}... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5007/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5007/timeline | null | null | true |
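The model-list shape shown in the PR body above can be consumed with a few lines of Python. This is a hedged sketch that parses the sample response offline; the endpoint URL and field names are taken from the PR text and are not re-verified here:

```python
import json

# Sample response in the shape the PR body shows for GET /v1/models
# (an OpenAI-style model list).
sample = """
{
  "object": "list",
  "data": [
    {"id": "mario:latest", "object": "model",
     "created": 1718141294, "owned_by": "ollama"}
  ]
}
"""

def model_ids(body: str) -> list[str]:
    """Return the model ids from a /v1/models response body."""
    payload = json.loads(body)
    if payload.get("object") != "list":
        raise ValueError("unexpected response shape")
    return [m["id"] for m in payload.get("data", [])]

print(model_ids(sample))  # → ['mario:latest']
```

The same parsing works unchanged against a live server, since the PR's goal is exactly this OpenAI-compatible shape.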
https://api.github.com/repos/ollama/ollama/issues/6139 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6139/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6139/comments | https://api.github.com/repos/ollama/ollama/issues/6139/events | https://github.com/ollama/ollama/issues/6139 | 2,444,415,836 | I_kwDOJ0Z1Ps6RstNc | 6,139 | error: llama runner process has terminated: CUDA error: CUBLAS_STATUS_ALLOC_FAILED | {
"login": "trixtipsfix",
"id": 69011613,
"node_id": "MDQ6VXNlcjY5MDExNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/69011613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trixtipsfix",
"html_url": "https://github.com/trixtipsfix",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 22 | 2024-08-02T08:41:15 | 2024-10-31T18:18:28 | 2024-10-31T18:18:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm getting the following error when I try to run the Ollama model:

Additionally, sometimes it behaves like this and disappears upon system restart:
, so other projects are unable to implement it in their proxy.
Here is the Ollama logs... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1464/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2159 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2159/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2159/comments | https://api.github.com/repos/ollama/ollama/issues/2159/events | https://github.com/ollama/ollama/issues/2159 | 2,096,434,733 | I_kwDOJ0Z1Ps589Q4t | 2,159 | Do we have a Go client | {
"login": "liliang-cn",
"id": 20553741,
"node_id": "MDQ6VXNlcjIwNTUzNzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/20553741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liliang-cn",
"html_url": "https://github.com/liliang-cn",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2024-01-23T16:09:04 | 2024-03-31T02:44:33 | 2024-03-11T19:16:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I know there is an HTTP API, but can I utilize this API in a similar manner to [ollama-python](https://github.com/jmorganca/ollama-python)? | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2159/timeline | null | not_planned | false |
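The HTTP API referenced in the issue above can be driven from any language. A minimal, hedged Python sketch below only builds and serializes the request body for the `/api/generate` endpoint (the model name "llama2" is illustrative, and no request is actually sent since no running server is assumed):

```python
import json

def generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Serialize a request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

# Example payload; "llama2" is just an illustrative model name.
body = generate_request("llama2", "Why is the sky blue?")
print(body.decode())
```

A client would POST these bytes to `http://localhost:11434/api/generate` with a `Content-Type: application/json` header.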
https://api.github.com/repos/ollama/ollama/issues/3784 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3784/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3784/comments | https://api.github.com/repos/ollama/ollama/issues/3784/events | https://github.com/ollama/ollama/pull/3784 | 2,254,686,971 | PR_kwDOJ0Z1Ps5tQT79 | 3,784 | Allow whitespace within objects and arrays, but remove trailing possibly infinite whitespace | {
"login": "hughescr",
"id": 46348,
"node_id": "MDQ6VXNlcjQ2MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hughescr",
"html_url": "https://github.com/hughescr",
"followers_url": "https://api.github.com/users/hughescr/foll... | [] | closed | false | null | [] | null | 6 | 2024-04-20T19:28:13 | 2024-12-05T00:51:35 | 2024-12-05T00:51:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3784",
"html_url": "https://github.com/ollama/ollama/pull/3784",
"diff_url": "https://github.com/ollama/ollama/pull/3784.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3784.patch",
"merged_at": null
} | The PR tweaks the JSON grammar to improve use of whitespace (though doesn't remove it entirely); it prevents trailing whitespace on grammatical elements, but does allow whitespace inside of `{}` or `[]`. This reduces the likelihood that a model might spit out a complete JSON object/array/primitive/literal, and then app... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3784/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3784/timeline | null | null | true |
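The trailing-whitespace concern in the PR above can also be illustrated from the consuming side: Python's `json.JSONDecoder.raw_decode` reports where a complete JSON value ends, so a client can detect completion even if the generator keeps emitting whitespace afterwards. This sketch is standard-library only and is separate from the grammar change itself:

```python
import json

decoder = json.JSONDecoder()

# A complete object followed by trailing whitespace, as a model
# constrained by a JSON grammar might keep emitting.
stream = '{"answer": 42}    \n\t  '

# raw_decode returns the parsed value and the index just past it,
# tolerating any trailing characters.
value, end = decoder.raw_decode(stream)
print(value)              # {'answer': 42}
print(len(stream) - end)  # count of trailing whitespace characters
```

A streaming client can cut the generation off at `end` instead of waiting for the whitespace to stop.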
https://api.github.com/repos/ollama/ollama/issues/6841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6841/comments | https://api.github.com/repos/ollama/ollama/issues/6841/events | https://github.com/ollama/ollama/pull/6841 | 2,531,641,164 | PR_kwDOJ0Z1Ps57yfoV | 6,841 | Add python examples for `bespoke-minicheck` | {
"login": "RyanMarten",
"id": 18333503,
"node_id": "MDQ6VXNlcjE4MzMzNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18333503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMarten",
"html_url": "https://github.com/RyanMarten",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2024-09-17T16:57:14 | 2024-09-18T16:35:25 | 2024-09-18T16:35:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6841",
"html_url": "https://github.com/ollama/ollama/pull/6841",
"diff_url": "https://github.com/ollama/ollama/pull/6841.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6841.patch",
"merged_at": "2024-09-18T16:35:25"
} | Adds two examples `python-grounded-factuality-rag-check` and `python-grounded-factuality-simple-check` which showcase the `bespoke-minicheck` model. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6841/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6882 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6882/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6882/comments | https://api.github.com/repos/ollama/ollama/issues/6882/events | https://github.com/ollama/ollama/issues/6882 | 2,537,360,886 | I_kwDOJ0Z1Ps6XPQ32 | 6,882 | Core Dump | applicationError | {
"login": "nPHYN1T3",
"id": 38122105,
"node_id": "MDQ6VXNlcjM4MTIyMTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/38122105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nPHYN1T3",
"html_url": "https://github.com/nPHYN1T3",
"followers_url": "https://api.github.com/users/nPH... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-19T21:01:14 | 2024-09-20T22:17:09 | 2024-09-19T22:14:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been trying to do something with deepseek-coder-v2, but it requires constant prompt revisions, during which it core dumps, and then I've got to start all over trying to figure out how to get it back to the understanding it had before it died.
None of my GPUs are overly taxed and have plenty o... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6882/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6101 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6101/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6101/comments | https://api.github.com/repos/ollama/ollama/issues/6101/events | https://github.com/ollama/ollama/issues/6101 | 2,440,541,009 | I_kwDOJ0Z1Ps6Rd7NR | 6,101 | Ollama is unable to resume interrupted pulls | {
"login": "nviraj",
"id": 8409854,
"node_id": "MDQ6VXNlcjg0MDk4NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8409854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nviraj",
"html_url": "https://github.com/nviraj",
"followers_url": "https://api.github.com/users/nviraj/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-31T17:25:10 | 2024-07-31T23:57:33 | 2024-07-31T23:57:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Previously, if I interrupted a pull and started it again (usually after the speed dropped or the pull wouldn't go past 95%), it was able to resume.
However, today when I pulled the gemma2:2b model, it was unable to do so and started from scratch. This happened multiple times.
Not su... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6101/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/435 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/435/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/435/comments | https://api.github.com/repos/ollama/ollama/issues/435/events | https://github.com/ollama/ollama/issues/435 | 1,869,531,192 | I_kwDOJ0Z1Ps5vbsg4 | 435 | Incorrect size displayed for codellama:34b-code-q4_0 on ollama.ai | {
"login": "spqw",
"id": 101190846,
"node_id": "U_kgDOBggMvg",
"avatar_url": "https://avatars.githubusercontent.com/u/101190846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spqw",
"html_url": "https://github.com/spqw",
"followers_url": "https://api.github.com/users/spqw/followers",
"f... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-08-28T11:02:05 | 2023-08-28T14:00:43 | 2023-08-28T13:58:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It seems like there is a typo on the following URL https://ollama.ai/library/codellama/tags, where the displayed model size for `34b-code-q4_0` is 6.7 GB. When downloading it, it appears to be 19 GB instead.
<img width="996" alt="image" src="https://github.com/jmorganca/ollama/assets/101190846/0d32d491-6498-4b1f-9a65... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/435/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4062 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4062/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4062/comments | https://api.github.com/repos/ollama/ollama/issues/4062/events | https://github.com/ollama/ollama/issues/4062 | 2,272,582,725 | I_kwDOJ0Z1Ps6HdNxF | 4,062 | llama3-gradient going crazy | {
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-04-30T22:26:07 | 2024-11-10T22:33:23 | 2024-04-30T22:32:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Just ran `ollama run llama3-gradient` and this happened. Used the default (2k) context size

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4062/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1087 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1087/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1087/comments | https://api.github.com/repos/ollama/ollama/issues/1087/events | https://github.com/ollama/ollama/issues/1087 | 1,989,020,115 | I_kwDOJ0Z1Ps52jgnT | 1,087 | System Performance Benchmarking | {
"login": "K1ngjulien",
"id": 16562333,
"node_id": "MDQ6VXNlcjE2NTYyMzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/16562333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/K1ngjulien",
"html_url": "https://github.com/K1ngjulien",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
... | open | false | null | [] | null | 3 | 2023-11-11T16:09:44 | 2024-04-01T09:31:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi!
In threads like #738, I see a lot of people trying different hardware and software setups, followed by checking the logs for the `llama_print_timings` output to see performance results.
From my (admittedly short) time playing around with my own hardware, I've noticed a lot of inconsistency between runs, makin... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1087/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1087/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5449 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5449/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5449/comments | https://api.github.com/repos/ollama/ollama/issues/5449/events | https://github.com/ollama/ollama/issues/5449 | 2,387,350,854 | I_kwDOJ0Z1Ps6OTBVG | 5,449 | Validate templates on `ollama create` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.... | null | 0 | 2024-07-02T23:45:14 | 2024-07-19T22:24:30 | 2024-07-19T22:24:30 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama create` should validate a template by parsing it if one is provided
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5449/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2347 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2347/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2347/comments | https://api.github.com/repos/ollama/ollama/issues/2347/events | https://github.com/ollama/ollama/issues/2347 | 2,117,130,379 | I_kwDOJ0Z1Ps5-MNiL | 2,347 | parser/parser.go:9:2: package log/slog is not in GOROOT | {
"login": "kenorb",
"id": 266306,
"node_id": "MDQ6VXNlcjI2NjMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/266306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenorb",
"html_url": "https://github.com/kenorb",
"followers_url": "https://api.github.com/users/kenorb/follow... | [] | closed | false | null | [] | null | 3 | 2024-02-04T14:39:12 | 2024-02-20T04:04:07 | 2024-02-20T04:04:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've tried to build the project on Ubuntu 22.04 according to the instructions; however, I got the following error (`master` branch):
> parser/parser.go:9:2: package log/slog is not in GOROOT
```
$ go generate ./...
...
Finished compression
+ '[' -z '' ']'
+ ROCM_PATH=/opt/rocm
+ '[' -z '' ']'
+ '[' -d /usr/lib/cmake/C... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2347/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2347/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5580 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5580/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5580/comments | https://api.github.com/repos/ollama/ollama/issues/5580/events | https://github.com/ollama/ollama/pull/5580 | 2,398,906,842 | PR_kwDOJ0Z1Ps503weR | 5,580 | Detect CUDA OS overhead | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-09T18:44:26 | 2024-07-10T19:47:34 | 2024-07-10T19:47:31 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5580",
"html_url": "https://github.com/ollama/ollama/pull/5580",
"diff_url": "https://github.com/ollama/ollama/pull/5580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5580.patch",
"merged_at": "2024-07-10T19:47:31"
} | This adds logic to detect skew between the driver and
management library, which can be attributed to OS overhead,
and records it so we can adjust subsequent management
library free-VRAM updates and avoid OOM scenarios.
Fixes #5504 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5580/timeline | null | null | true |
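The PR above records the skew between driver-reported and management-library-reported free VRAM and subtracts it from later readings. A minimal sketch of that bookkeeping (function names are hypothetical, not Ollama's actual implementation):

```python
def record_overhead(driver_free: int, mgmt_free: int) -> int:
    # OS overhead shows up as the management library reporting more
    # free VRAM than the driver says is actually available.
    return max(0, mgmt_free - driver_free)


def adjusted_free(mgmt_free: int, overhead: int) -> int:
    # Subtract the recorded overhead from later management-library
    # free-VRAM readings to avoid over-committing and hitting OOM.
    return max(0, mgmt_free - overhead)


overhead = record_overhead(driver_free=7_000, mgmt_free=8_000)
usable = adjusted_free(mgmt_free=6_000, overhead=overhead)
```

The overhead is measured once at startup and then applied to every subsequent free-VRAM update, per the PR description.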
https://api.github.com/repos/ollama/ollama/issues/6217 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6217/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6217/comments | https://api.github.com/repos/ollama/ollama/issues/6217/events | https://github.com/ollama/ollama/issues/6217 | 2,451,988,607 | I_kwDOJ0Z1Ps6SJmB_ | 6,217 | batch embed 500 error: no slots available after 10 retries | {
"login": "Schumpeterx",
"id": 29852284,
"node_id": "MDQ6VXNlcjI5ODUyMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/29852284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Schumpeterx",
"html_url": "https://github.com/Schumpeterx",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-08-07T01:39:05 | 2024-08-07T03:20:50 | 2024-08-07T03:20:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried to use 10 threads to send batch embedding requests to `/api/embed`, and sometimes got the error below:
```
Aug 07 09:19:18 *-gpu ollama[345349]: time=2024-08-07T09:19:18.764+08:00 level=ERROR source=routes.go:368 msg="embedding generation failed" error="no slots available after 10 retrie... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6217/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2338 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2338/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2338/comments | https://api.github.com/repos/ollama/ollama/issues/2338/events | https://github.com/ollama/ollama/issues/2338 | 2,116,472,733 | I_kwDOJ0Z1Ps5-Js-d | 2,338 | Very nice to have: capabilities info for multimodal models | {
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
... | [] | closed | false | null | [] | null | 2 | 2024-02-03T13:00:29 | 2024-02-03T19:33:21 | 2024-02-03T19:33:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Not sure if this is done already; I checked the llava info and it does not mention capabilities anywhere. It would be nice to detect via `ollama show` or the API model info that a model supports `vision`.
API Example
`GET /api/tags`
```js
{
//...
"details": {
"parent_model": "",
"format": "gguf",
... | {
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2338/timeline | null | completed | false |
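The feature request above asks for a `capabilities` field in the `/api/tags` model details. A sketch of the client-side detection it would enable, assuming the hypothetical schema from the issue (the `capabilities` list is the requested addition, not a real field at the time of the issue):

```python
def supports_vision(model: dict) -> bool:
    # "capabilities" is the field the issue requests (hypothetical
    # schema); a missing field means no declared capabilities.
    return "vision" in model.get("details", {}).get("capabilities", [])


# Mock of a /api/tags response using the proposed schema.
tags_response = {
    "models": [
        {"name": "llava:latest",
         "details": {"format": "gguf", "capabilities": ["vision"]}},
        {"name": "llama2:latest",
         "details": {"format": "gguf"}},
    ]
}

vision_models = [m["name"] for m in tags_response["models"]
                 if supports_vision(m)]
```

A client could use this to filter model pickers to vision-capable models only, which is the use case the issue describes.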
https://api.github.com/repos/ollama/ollama/issues/7920 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7920/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7920/comments | https://api.github.com/repos/ollama/ollama/issues/7920/events | https://github.com/ollama/ollama/issues/7920 | 2,715,963,182 | I_kwDOJ0Z1Ps6h4k8u | 7,920 | Custom context size not being respected. | {
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-03T20:57:31 | 2024-12-03T22:59:58 | 2024-12-03T22:59:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I noticed some inconsistent behavior.
I edited my model parameters in the admin view and set its context size.
However, I should be able to change it "again" in the chat window:

in this im... | {
"login": "luisbrandao",
"id": 25795753,
"node_id": "MDQ6VXNlcjI1Nzk1NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25795753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luisbrandao",
"html_url": "https://github.com/luisbrandao",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7920/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2668/comments | https://api.github.com/repos/ollama/ollama/issues/2668/events | https://github.com/ollama/ollama/issues/2668 | 2,148,432,888 | I_kwDOJ0Z1Ps6ADnv4 | 2,668 | Error: Unable to load dynamic library: Unable to load dynamic server library: | {
"login": "123124-1",
"id": 88172698,
"node_id": "MDQ6VXNlcjg4MTcyNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/88172698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/123124-1",
"html_url": "https://github.com/123124-1",
"followers_url": "https://api.github.com/users/123... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-02-22T07:47:21 | 2024-05-02T22:10:53 | 2024-05-02T22:10:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | time=2024-02-22T15:43:18.086+08:00 level=INFO source=images.go:710 msg="total blobs: 11"
time=2024-02-22T15:43:18.117+08:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-02-22T15:43:18.120+08:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.26)"
time=... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6154 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6154/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6154/comments | https://api.github.com/repos/ollama/ollama/issues/6154/events | https://github.com/ollama/ollama/pull/6154 | 2,446,518,858 | PR_kwDOJ0Z1Ps53VuaY | 6,154 | Disable paging for journalctl | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 5 | 2024-08-03T17:33:43 | 2024-08-05T04:10:54 | 2024-08-05T04:10:53 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6154",
"html_url": "https://github.com/ollama/ollama/pull/6154",
"diff_url": "https://github.com/ollama/ollama/pull/6154.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6154.patch",
"merged_at": "2024-08-05T04:10:53"
} | Users using `journalctl` to get logs for issue logging sometimes don't realize that paging is causing information to be missed. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6154/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4404 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4404/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4404/comments | https://api.github.com/repos/ollama/ollama/issues/4404/events | https://github.com/ollama/ollama/issues/4404 | 2,292,915,977 | I_kwDOJ0Z1Ps6Iqx8J | 4,404 | error loading model vocabulary: unknown pre-tokenizer type: 'qwen2' | {
"login": "HouseYeung",
"id": 70836781,
"node_id": "MDQ6VXNlcjcwODM2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/70836781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HouseYeung",
"html_url": "https://github.com/HouseYeung",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2024-05-13T14:00:24 | 2025-01-29T14:09:09 | 2024-06-04T06:54:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llama runner process has terminated: signal: abort trap error:error loading model vocabulary: unknown pre-tokenizer type: 'qwen2'
I was running qwen1.5-8B-chat.
The old version of Ollama could run this model properly.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.37 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4404/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/ollama/ollama/issues/4404/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1751 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1751/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1751/comments | https://api.github.com/repos/ollama/ollama/issues/1751/events | https://github.com/ollama/ollama/issues/1751 | 2,061,030,874 | I_kwDOJ0Z1Ps562NXa | 1,751 | [FEATURE] add more options while chatting like `/bye` (e.g `/clear_context` or `/new_chat`) | {
"login": "tikendraw",
"id": 68785366,
"node_id": "MDQ6VXNlcjY4Nzg1MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/68785366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tikendraw",
"html_url": "https://github.com/tikendraw",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 10 | 2023-12-31T11:33:59 | 2024-01-26T16:25:56 | 2024-01-25T22:56:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | While chatting with the model, you do not necessarily need the existing context, or you may simply want to start a new chat. Currently there is no option for this other than cancelling the chat and restarting it.
So, similar to the `/bye` option, there could be other options to make the LLM easier to use.
* `/clear_context` or `/... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1751/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1751/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/312 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/312/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/312/comments | https://api.github.com/repos/ollama/ollama/issues/312/events | https://github.com/ollama/ollama/pull/312 | 1,843,943,830 | PR_kwDOJ0Z1Ps5Xks3E | 312 | add embed docs for modelfile | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-08-09T20:15:56 | 2023-08-17T17:37:43 | 2023-08-17T17:37:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/312",
"html_url": "https://github.com/ollama/ollama/pull/312",
"diff_url": "https://github.com/ollama/ollama/pull/312.diff",
"patch_url": "https://github.com/ollama/ollama/pull/312.patch",
"merged_at": "2023-08-17T17:37:43"
} | I removed the embed instruction from our model documentation since its not in a release. Staging it here for a release. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/312/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4520/comments | https://api.github.com/repos/ollama/ollama/issues/4520/events | https://github.com/ollama/ollama/issues/4520 | 2,304,663,711 | I_kwDOJ0Z1Ps6JXmCf | 4,520 | llama3:70B pull error | {
"login": "DimIsaev",
"id": 11172642,
"node_id": "MDQ6VXNlcjExMTcyNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11172642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DimIsaev",
"html_url": "https://github.com/DimIsaev",
"followers_url": "https://api.github.com/users/Dim... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 25 | 2024-05-19T15:31:58 | 2025-01-28T14:01:55 | 2024-05-25T00:21:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

`Error: max retries exceeded: unexpected EOF`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4520/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4520/timeline | null | completed | false |
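The pull failure above ("max retries exceeded: unexpected EOF") is the client giving up after repeated transient network errors. A generic retry-with-backoff sketch of the pattern involved (illustrative only, not Ollama's actual pull logic):

```python
import time


def with_retries(fn, attempts: int = 5, base_delay: float = 0.0):
    # Re-invoke fn until it succeeds or attempts are exhausted,
    # backing off exponentially between tries; re-raise the last
    # error once retries run out.
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            time.sleep(base_delay * (2 ** i))
    raise last_err


calls = {"n": 0}


def flaky_pull():
    # Simulated layer download that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("unexpected EOF")
    return "pulled"


result = with_retries(flaky_pull)
```

When the failure is persistent rather than transient (as several commenters on such issues report), retries alone do not help and the error surfaces to the user as shown above.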
https://api.github.com/repos/ollama/ollama/issues/5832 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5832/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5832/comments | https://api.github.com/repos/ollama/ollama/issues/5832/events | https://github.com/ollama/ollama/issues/5832 | 2,421,490,146 | I_kwDOJ0Z1Ps6QVQHi | 5,832 | rx6800xt is 4xfaster as my new rtx4070ti super | {
"login": "konian71",
"id": 176228734,
"node_id": "U_kgDOCoEJfg",
"avatar_url": "https://avatars.githubusercontent.com/u/176228734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konian71",
"html_url": "https://github.com/konian71",
"followers_url": "https://api.github.com/users/konian71/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-07-21T15:39:51 | 2024-09-05T20:00:23 | 2024-09-05T20:00:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a setup with the following specifications:
CPU: AMD Ryzen 5700X
RAM: 128GB DDR4-3200, CL16
Old GPU: AMD RX6800XT
New GPU: Nvidia RTX4070Ti Super
I am running large language models, specifically Gemma2:32b-fp16 and LLaMA3:70b. All drivers are up to date, and the system was cleaned... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5832/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/437 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/437/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/437/comments | https://api.github.com/repos/ollama/ollama/issues/437/events | https://github.com/ollama/ollama/issues/437 | 1,870,626,242 | I_kwDOJ0Z1Ps5vf33C | 437 | Error downloading manifest with `llama2-uncensored:70b` | {
"login": "satvikpendem",
"id": 42670561,
"node_id": "MDQ6VXNlcjQyNjcwNTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/42670561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/satvikpendem",
"html_url": "https://github.com/satvikpendem",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 2 | 2023-08-28T23:05:37 | 2023-08-29T15:04:32 | 2023-08-29T15:04:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am on macOS. I run `ollama run llama2-uncensored:70b` and get the following:
```sh
pulling manifest
pulling 47f73cb430c8... 100% |██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| (39/39 GB, 60 MB/s)
pulling 750599e5d655... 100% |████... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/437/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/645/comments | https://api.github.com/repos/ollama/ollama/issues/645/events | https://github.com/ollama/ollama/issues/645 | 1,919,383,449 | I_kwDOJ0Z1Ps5yZ3eZ | 645 | Allow global Ollama settings configuration | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-09-29T14:33:32 | 2024-03-06T01:56:16 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In some cases (usually hardware-related) it makes sense to have some global Ollama configuration rather than binding the setting to the Modelfile.
For example if I am running many different servers with different hardware capabilities I don't want to create and load Modelfiles for each machine to set... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/645/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/645/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/789 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/789/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/789/comments | https://api.github.com/repos/ollama/ollama/issues/789/events | https://github.com/ollama/ollama/issues/789 | 1,943,035,684 | I_kwDOJ0Z1Ps5z0F8k | 789 | How to disable streaming output in Rest API | {
"login": "ajasingh",
"id": 15189049,
"node_id": "MDQ6VXNlcjE1MTg5MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15189049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajasingh",
"html_url": "https://github.com/ajasingh",
"followers_url": "https://api.github.com/users/aja... | [] | closed | false | null | [] | null | 2 | 2023-10-14T07:07:37 | 2023-10-16T18:06:40 | 2023-10-16T18:06:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am trying to get structured information like json back from model , so i am not looking at streamed output . I have tried setting content-type:application/json as mentioned in one of the issues but is still get back streamed output . Can somebody help me how to disable streamed output | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/789/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/789/timeline | null | completed | false |
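The streaming question above is resolved by the `"stream": false` field in the request body, which the Ollama REST API documents for `/api/generate` and `/api/chat`; it returns a single JSON object instead of newline-delimited chunks. A sketch of building such a payload (the helper name is illustrative):

```python
import json


def generate_payload(model: str, prompt: str) -> bytes:
    # "stream": false asks the Ollama REST API to return one JSON
    # object instead of a stream of newline-delimited chunks.
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode()


body = generate_payload("llama2", "Return a JSON object")
```

Setting a `Content-Type` header alone does not disable streaming; the flag must be in the request body.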
https://api.github.com/repos/ollama/ollama/issues/6494 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6494/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6494/comments | https://api.github.com/repos/ollama/ollama/issues/6494/events | https://github.com/ollama/ollama/issues/6494 | 2,484,900,335 | I_kwDOJ0Z1Ps6UHJHv | 6,494 | igpu | {
"login": "ayttop",
"id": 178673810,
"node_id": "U_kgDOCqZYkg",
"avatar_url": "https://avatars.githubusercontent.com/u/178673810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayttop",
"html_url": "https://github.com/ayttop",
"followers_url": "https://api.github.com/users/ayttop/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 6 | 2024-08-24T22:25:26 | 2024-08-28T02:53:23 | 2024-08-27T21:21:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama with an Intel iGPU
How do I run Ollama on an Intel iGPU? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6494/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7611 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7611/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7611/comments | https://api.github.com/repos/ollama/ollama/issues/7611/events | https://github.com/ollama/ollama/issues/7611 | 2,648,163,131 | I_kwDOJ0Z1Ps6d18M7 | 7,611 | with_structured_output support for ollama.chat() | {
"login": "ChmHsm",
"id": 12183061,
"node_id": "MDQ6VXNlcjEyMTgzMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/12183061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChmHsm",
"html_url": "https://github.com/ChmHsm",
"followers_url": "https://api.github.com/users/ChmHsm/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-11-11T04:59:38 | 2025-01-11T02:19:50 | 2024-12-02T15:23:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thank you for the llama 3.2 vision integration!
I was using llama3.2-3b with ChatOllama(model="llama3.2:latest").with_structured_output() to get a structured response from the model, and I was hoping to be able to do the same with llama3.2-vision.
But it turns out, at least to my knowledge, I can't for now since i... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7611/timeline | null | completed | false |
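Editorial note on the record above (issue 7611): recent Ollama releases accept a JSON schema in the `format` field of a chat request, which is the mechanism `with_structured_output` builds on. A sketch of such a payload, where the schema and model tag are illustrative assumptions:

```python
# Sketch of an Ollama /api/chat request constrained by a JSON schema.
import json

# Illustrative schema the model's reply must conform to.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

payload = {
    "model": "llama3.2-vision",  # assumed model tag
    "messages": [{"role": "user", "content": "Describe the person in the image."}],
    "stream": False,
    "format": schema,  # structured outputs: constrain the reply to this schema
}
request_body = json.dumps(payload)
```

With `format` set to a schema, the server constrains generation so the reply parses against it, rather than relying on prompt instructions alone.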
https://api.github.com/repos/ollama/ollama/issues/1037 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1037/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1037/comments | https://api.github.com/repos/ollama/ollama/issues/1037/events | https://github.com/ollama/ollama/issues/1037 | 1,982,465,027 | I_kwDOJ0Z1Ps52KgQD | 1,037 | run a multi-file model | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/follow... | [] | closed | false | null | [] | null | 3 | 2023-11-08T00:06:10 | 2023-11-13T17:55:15 | 2023-11-13T17:55:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do I import a model that is split across multiple files, like the image below?

I tried, but it gave me an error:
```
parsing modelfile
looking for model
⠋ creating model layer Error: invalid file magic
```
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1037/timeline | null | completed | false |
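Editorial note on the record above (issue 1037): the `invalid file magic` error means the file being imported is not GGUF — a valid GGUF file begins with the four bytes `GGUF`. Multi-part checkpoint shards like those in the screenshot are typically raw PyTorch/safetensors weights, which fail this check. A minimal sketch of the magic-byte test, with a throwaway file for illustration:

```python
# Check whether a file carries the GGUF magic bytes that Ollama expects.
import os
import tempfile


def looks_like_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"


# Illustrative usage with a temporary fake file, magic bytes only:
fd, tmp = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"GGUF" + b"\x00" * 12)
print(looks_like_gguf(tmp))  # → True
os.remove(tmp)
```

Shards that fail this check need to be converted (and usually quantized) to a single GGUF file before Ollama's `FROM` directive can import them.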