url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4503 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4503/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4503/comments | https://api.github.com/repos/ollama/ollama/issues/4503/events | https://github.com/ollama/ollama/issues/4503 | 2,303,361,162 | I_kwDOJ0Z1Ps6JSoCK | 4,503 | Ollama create fails when using a utf16 Modelfile | {
"login": "dehlong",
"id": 112163027,
"node_id": "U_kgDOBq940w",
"avatar_url": "https://avatars.githubusercontent.com/u/112163027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dehlong",
"html_url": "https://github.com/dehlong",
"followers_url": "https://api.github.com/users/dehlong/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 22 | 2024-05-17T18:35:36 | 2024-12-13T23:37:39 | 2024-05-20T18:26:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I am trying to create a new model, and no matter what the Modelfile is, 90% of the time I get:
Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"
Is there any solution to this?
This is my modelfile:
FROM llama3
PARAMETER tempera... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4503/timeline | null | completed | false |
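The failure in this record is the known symptom of a Modelfile saved as UTF-16 (PowerShell redirection on Windows does this by default), which the parser reads as garbage. A minimal sketch, assuming the file is named `Modelfile` and really is UTF-16, that re-encodes it to UTF-8 before running `ollama create`:
```python
# Sketch: re-encode a UTF-16 Modelfile to UTF-8 so `ollama create` can parse it.
# Assumes the file is named "Modelfile" and is actually UTF-16 encoded.
from pathlib import Path

path = Path("Modelfile")
text = path.read_text(encoding="utf-16")  # raises UnicodeError if not UTF-16
path.write_text(text, encoding="utf-8")
print(f"Re-encoded {path} as UTF-8")
```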
https://api.github.com/repos/ollama/ollama/issues/2482 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2482/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2482/comments | https://api.github.com/repos/ollama/ollama/issues/2482/events | https://github.com/ollama/ollama/pull/2482 | 2,133,232,444 | PR_kwDOJ0Z1Ps5mzPR0 | 2,482 | add support for json files and to allow for more than 41666 embeddings | {
"login": "donbr",
"id": 7340008,
"node_id": "MDQ6VXNlcjczNDAwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7340008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donbr",
"html_url": "https://github.com/donbr",
"followers_url": "https://api.github.com/users/donbr/follower... | [] | closed | false | null | [] | null | 1 | 2024-02-13T22:31:31 | 2024-11-21T03:03:31 | 2024-11-21T03:03:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2482",
"html_url": "https://github.com/ollama/ollama/pull/2482",
"diff_url": "https://github.com/ollama/ollama/pull/2482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2482.patch",
"merged_at": null
} | Added support for JSON files and a larger batch size based on embedding limitations. Ran into issues with the syntax of JSONLoader arguments, so went with TextLoader for now. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2482/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5350 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5350/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5350/comments | https://api.github.com/repos/ollama/ollama/issues/5350/events | https://github.com/ollama/ollama/issues/5350 | 2,379,405,545 | I_kwDOJ0Z1Ps6N0tjp | 5,350 | Gemma 2 9B cannot run | {
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-06-28T01:59:29 | 2024-07-03T16:42:09 | 2024-06-29T14:21:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="732" alt="image" src="https://github.com/ollama/ollama/assets/19872771/e28dac56-9a8b-4310-84d3-97bf3b2594f4">
### OS
Ubuntu 22.04.4 LTS
### GPU
Nvidia 4060
### CPU
Intel
### Ollama version
0.1.47 | {
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5350/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5350/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/802 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/802/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/802/comments | https://api.github.com/repos/ollama/ollama/issues/802/events | https://github.com/ollama/ollama/issues/802 | 1,944,932,035 | I_kwDOJ0Z1Ps5z7U7D | 802 | Relative API link in the readme doesn't work | {
"login": "richawo",
"id": 35015261,
"node_id": "MDQ6VXNlcjM1MDE1MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/35015261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richawo",
"html_url": "https://github.com/richawo",
"followers_url": "https://api.github.com/users/richaw... | [] | closed | false | null | [] | null | 1 | 2023-10-16T10:51:26 | 2023-10-25T23:22:13 | 2023-10-25T23:22:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It opens up:
https://github.com/jmorganca/docs/api.md
Rather than:
https://github.com/jmorganca/ollama/blob/main/docs/api.md | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/802/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7081 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7081/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7081/comments | https://api.github.com/repos/ollama/ollama/issues/7081/events | https://github.com/ollama/ollama/issues/7081 | 2,562,164,972 | I_kwDOJ0Z1Ps6Yt4js | 7,081 | Ollama performs *much* slower via API than CLI on M1 Mac | {
"login": "bigxalx",
"id": 511330,
"node_id": "MDQ6VXNlcjUxMTMzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/511330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigxalx",
"html_url": "https://github.com/bigxalx",
"followers_url": "https://api.github.com/users/bigxalx/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-10-02T16:45:09 | 2024-10-04T09:48:01 | 2024-10-03T19:45:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### CLI
When I run **codestral:22b-v0.1-q2_K** on my M1 Macbook Air via the CLI with `ollama run codestral:22b-v0.1-q2_K` it performs a little slowly, but usable. When I look at `ollama ps` it prints the following:
| NAME | ID | SIZE | PROCESSOR | UNTI... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7081/timeline | null | completed | false |
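One frequent cause of a CLI/API speed gap like the one in this record is the API request using different options than the CLI session (for example a larger `num_ctx`), forcing a reload or partial CPU offload on each call. A sketch with illustrative values that pins `keep_alive` and `num_ctx` to match the CLI defaults:
```python
import json
import urllib.request

# Sketch: call /api/generate with the same options the CLI session uses,
# keeping the model resident between calls. Values are illustrative.
body = json.dumps({
    "model": "codestral:22b-v0.1-q2_K",
    "prompt": "Write a hello world program in Go.",
    "stream": False,
    "keep_alive": "5m",            # keep the model loaded between requests
    "options": {"num_ctx": 2048},  # match the CLI default context size
}).encode()
req = urllib.request.Request("http://localhost:11434/api/generate", data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```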
https://api.github.com/repos/ollama/ollama/issues/447 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/447/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/447/comments | https://api.github.com/repos/ollama/ollama/issues/447/events | https://github.com/ollama/ollama/issues/447 | 1,875,408,937 | I_kwDOJ0Z1Ps5vyHgp | 447 | commit 8bbff2df986629e5481547e913ab4de0245afb37 stops "ollama ls" from working here | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 2 | 2023-08-31T12:14:51 | 2023-09-04T08:27:27 | 2023-09-04T08:27:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | `go generate ./... && go build . && ./ollama ls` worked fine for previous versions, but does not work for the latest commit.
Using `git bisect`, the commit that creates this problem seems to be 8bbff2df986629e5481547e913ab4de0245afb37 (from the 28th of Aug).
Here is the error message for when it is not working:
... | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/447/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4894 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4894/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4894/comments | https://api.github.com/repos/ollama/ollama/issues/4894/events | https://github.com/ollama/ollama/issues/4894 | 2,339,536,264 | I_kwDOJ0Z1Ps6Lcn2I | 4,894 | Feature: Allow setting OLLAMA_NUM_PARALLEL per model | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-06-07T03:55:36 | 2024-10-24T18:17:13 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be great if you could set OLLAMA_NUM_PARALLEL per model.
Example use case (a workaround sketch follows this row):
- You have one large "smart" model you only ever want one request at a time going to, to avoid using all your memory.
- You have a smaller "fast" model (or just one with a smaller context) that you might want to allow a numbe...
"url": "https://api.github.com/repos/ollama/ollama/issues/4894/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4894/timeline | null | null | false |
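As of this request, `OLLAMA_NUM_PARALLEL` applies to the whole server rather than per model. A workaround sketch, with illustrative ports and concurrency values, that runs one server per concurrency profile using `OLLAMA_HOST`:
```python
import os
import subprocess

# Sketch of today's workaround: one Ollama server per concurrency profile,
# since OLLAMA_NUM_PARALLEL is server-wide. Ports here are illustrative.
def serve(port: int, parallel: int) -> subprocess.Popen:
    env = dict(os.environ,
               OLLAMA_HOST=f"127.0.0.1:{port}",
               OLLAMA_NUM_PARALLEL=str(parallel))
    return subprocess.Popen(["ollama", "serve"], env=env)

smart = serve(11434, parallel=1)  # large "smart" model: one request at a time
fast = serve(11435, parallel=8)   # small "fast" model: allow concurrency
```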
https://api.github.com/repos/ollama/ollama/issues/5748 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5748/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5748/comments | https://api.github.com/repos/ollama/ollama/issues/5748/events | https://github.com/ollama/ollama/issues/5748 | 2,413,697,254 | I_kwDOJ0Z1Ps6P3hjm | 5,748 | ShipIt folder taking 1GB | {
"login": "cliffordh",
"id": 1755156,
"node_id": "MDQ6VXNlcjE3NTUxNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1755156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cliffordh",
"html_url": "https://github.com/cliffordh",
"followers_url": "https://api.github.com/users/cl... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A... | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-07-17T13:56:58 | 2024-07-17T18:54:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running a junk-file scanner, it detected the folder com.electron.ollama.ShipIt taking almost 1GB in ~/Library/Caches. This should be cleared automatically.
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5748/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3688 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3688/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3688/comments | https://api.github.com/repos/ollama/ollama/issues/3688/events | https://github.com/ollama/ollama/pull/3688 | 2,247,213,761 | PR_kwDOJ0Z1Ps5s3a41 | 3,688 | example error: ollama list models with raw-Name | {
"login": "KevinLiangX",
"id": 40968187,
"node_id": "MDQ6VXNlcjQwOTY4MTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/40968187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KevinLiangX",
"html_url": "https://github.com/KevinLiangX",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-04-17T02:41:18 | 2024-05-09T02:50:11 | 2024-05-09T02:50:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3688",
"html_url": "https://github.com/ollama/ollama/pull/3688",
"diff_url": "https://github.com/ollama/ollama/pull/3688.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3688.patch",
"merged_at": null
} |

After the change:

| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3688/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/332 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/332/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/332/comments | https://api.github.com/repos/ollama/ollama/issues/332/events | https://github.com/ollama/ollama/issues/332 | 1,847,107,942 | I_kwDOJ0Z1Ps5uGKFm | 332 | only regenerate diff of embedding layer | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-08-11T16:25:43 | 2023-08-15T19:10:25 | 2023-08-15T19:10:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/332/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2609 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2609/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2609/comments | https://api.github.com/repos/ollama/ollama/issues/2609/events | https://github.com/ollama/ollama/issues/2609 | 2,143,713,954 | I_kwDOJ0Z1Ps5_xnqi | 2,609 | [Question\Suggestion] Result of function calling. | {
"login": "gerwintmg",
"id": 17082189,
"node_id": "MDQ6VXNlcjE3MDgyMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17082189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerwintmg",
"html_url": "https://github.com/gerwintmg",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1... | open | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2024-02-20T07:32:17 | 2024-11-06T18:55:04 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently I am experimenting with function calling (getting a JSON result) and returning the result from the function call to the LLM.
When using the chat API you can specify system, user, and assistant. I was wondering if we would be able to add the option of a **function result** role.
getting the following chat
```js... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2609/timeline | null | null | false |
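Later Ollama versions address exactly this with a dedicated message role. A sketch, assuming a local server and a tools-capable model such as llama3.1 (the `get_weather` call and its result are hypothetical), of passing a function result back as a `"tool"` message:
```python
import json
import urllib.request

# Sketch: feed a function-call result back to the model as a "tool" message.
# Assumes a local Ollama server and a tools-capable model; the get_weather
# call and its result are hypothetical.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": "", "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Paris"}}},
    ]},
    {"role": "tool", "content": json.dumps({"city": "Paris", "temp_c": 18})},
]
body = json.dumps({"model": "llama3.1", "messages": messages,
                   "stream": False}).encode()
req = urllib.request.Request("http://localhost:11434/api/chat", data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```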
https://api.github.com/repos/ollama/ollama/issues/1635 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1635/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1635/comments | https://api.github.com/repos/ollama/ollama/issues/1635/events | https://github.com/ollama/ollama/issues/1635 | 2,050,936,308 | I_kwDOJ0Z1Ps56Ps30 | 1,635 | [Request] Reduce Gocyclo | {
"login": "H0llyW00dzZ",
"id": 17626300,
"node_id": "MDQ6VXNlcjE3NjI2MzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17626300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/H0llyW00dzZ",
"html_url": "https://github.com/H0llyW00dzZ",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2023-12-20T16:53:17 | 2024-05-07T07:57:03 | 2024-05-06T23:33:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |

not good for AI when it's too complex
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1635/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4959 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4959/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4959/comments | https://api.github.com/repos/ollama/ollama/issues/4959/events | https://github.com/ollama/ollama/pull/4959 | 2,343,393,852 | PR_kwDOJ0Z1Ps5x8yAc | 4,959 | Add new community integration (TypingMind) | {
"login": "trungdq88",
"id": 4214509,
"node_id": "MDQ6VXNlcjQyMTQ1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4214509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trungdq88",
"html_url": "https://github.com/trungdq88",
"followers_url": "https://api.github.com/users/tr... | [] | closed | false | null | [] | null | 1 | 2024-06-10T09:27:45 | 2024-11-21T10:45:03 | 2024-11-21T10:45:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4959",
"html_url": "https://github.com/ollama/ollama/pull/4959",
"diff_url": "https://github.com/ollama/ollama/pull/4959.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4959.patch",
"merged_at": null
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4959/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1148 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1148/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1148/comments | https://api.github.com/repos/ollama/ollama/issues/1148/events | https://github.com/ollama/ollama/issues/1148 | 1,996,063,151 | I_kwDOJ0Z1Ps52-YGv | 1,148 | running any model crashes my Ubuntu 22.04 LTS system with 2 nvidia GPUs RTX 3060 | {
"login": "pexus",
"id": 1809523,
"node_id": "MDQ6VXNlcjE4MDk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pexus",
"html_url": "https://github.com/pexus",
"followers_url": "https://api.github.com/users/pexus/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2023-11-16T05:01:12 | 2023-11-18T18:35:39 | 2023-11-18T18:35:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It used to work before. The latest version just crashes my system. I tried running xwinlm, mistral and llama2.
I have an AMD FX 830, 2 Nvidia RTX 3060 GPUs with 12GB each, and 32GB of CPU memory. Running on Ubuntu 22.04 LTS.
I am using the latest CUDA toolkit 12.3 | {
"login": "pexus",
"id": 1809523,
"node_id": "MDQ6VXNlcjE4MDk1MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pexus",
"html_url": "https://github.com/pexus",
"followers_url": "https://api.github.com/users/pexus/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1148/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4075 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4075/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4075/comments | https://api.github.com/repos/ollama/ollama/issues/4075/events | https://github.com/ollama/ollama/issues/4075 | 2,273,307,461 | I_kwDOJ0Z1Ps6Hf-tF | 4,075 | invalid file magic while importing llama3 70b into ollama | {
"login": "SakuraEntropia",
"id": 61424969,
"node_id": "MDQ6VXNlcjYxNDI0OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/61424969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SakuraEntropia",
"html_url": "https://github.com/SakuraEntropia",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-05-01T10:41:38 | 2024-06-25T23:36:28 | 2024-06-25T23:36:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The model I used is from https://hf-mirror.com/mradermacher/llama-3-70B-instruct-uncensored-i1-GGUF.
The issue looks like this:
`PS D:\Ollama> ollama create llama3:70b -f Modelfile
transferring model data
creating model layer
Error: invalid file magic`
the model couldn't be successfully boot... | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4075/timeline | null | completed | false |
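"Error: invalid file magic" means the first bytes of the file are not the GGUF signature, which typically points to a corrupt or incomplete download (or an HTML error page saved in place of the model). A minimal check, with an assumed filename:
```python
# Sketch: verify a downloaded model file starts with the GGUF magic bytes
# before running `ollama create`. The filename is an assumption.
with open("model.gguf", "rb") as f:
    magic = f.read(4)
print("is GGUF:", magic == b"GGUF", "- first bytes:", magic)
```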
https://api.github.com/repos/ollama/ollama/issues/3721 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3721/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3721/comments | https://api.github.com/repos/ollama/ollama/issues/3721/events | https://github.com/ollama/ollama/issues/3721 | 2,249,625,523 | I_kwDOJ0Z1Ps6GFo-z | 3,721 | NEED WizardLM-2-8*22B Q6 | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-04-18T02:46:06 | 2024-04-20T08:56:00 | 2024-04-20T08:56:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | NEED WizardLM-2-8*22B Q6 | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3721/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2038 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2038/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2038/comments | https://api.github.com/repos/ollama/ollama/issues/2038/events | https://github.com/ollama/ollama/issues/2038 | 2,087,322,756 | I_kwDOJ0Z1Ps58agSE | 2,038 | Minimal use of GPU in Docker (windows) with 10/33 layers loaded | {
"login": "sumitsodhi88",
"id": 149290101,
"node_id": "U_kgDOCOX8dQ",
"avatar_url": "https://avatars.githubusercontent.com/u/149290101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumitsodhi88",
"html_url": "https://github.com/sumitsodhi88",
"followers_url": "https://api.github.com/use... | [
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "htt... | closed | false | null | [] | null | 9 | 2024-01-18T01:53:21 | 2024-03-11T18:31:58 | 2024-03-11T18:31:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | My GPU is being used at 23% while the CPU is at 100% when running a Docker image in a Windows environment. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2038/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7910 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7910/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7910/comments | https://api.github.com/repos/ollama/ollama/issues/7910/events | https://github.com/ollama/ollama/issues/7910 | 2,711,933,320 | I_kwDOJ0Z1Ps6hpNGI | 7,910 | tool parsing issues with "'" | {
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 4 | 2024-12-02T13:30:09 | 2024-12-09T21:22:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Difficult to see in the title: the ' character is the problem.
When I ask my AI to "execute a python example" it generates something like "print('...')" but truncates at the first ':
"model": "llama3.1:8b-instruct-fp16",
"created_at": "2024-12-02T13:26:55.1045197Z",
"message": {
... | {
"login": "fce2",
"id": 16529960,
"node_id": "MDQ6VXNlcjE2NTI5OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16529960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fce2",
"html_url": "https://github.com/fce2",
"followers_url": "https://api.github.com/users/fce2/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7910/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/2552 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2552/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2552/comments | https://api.github.com/repos/ollama/ollama/issues/2552/events | https://github.com/ollama/ollama/pull/2552 | 2,139,634,632 | PR_kwDOJ0Z1Ps5nJJHA | 2,552 | Fix duplicate menus on update and exit on signals | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-02-16T23:35:09 | 2024-02-17T01:23:40 | 2024-02-17T01:23:37 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2552",
"html_url": "https://github.com/ollama/ollama/pull/2552",
"diff_url": "https://github.com/ollama/ollama/pull/2552.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2552.patch",
"merged_at": "2024-02-17T01:23:37"
} | Also fixes a few fit-and-finish items for better developer experience
Fixes #2521
Fixes #2522 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2552/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/352 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/352/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/352/comments | https://api.github.com/repos/ollama/ollama/issues/352/events | https://github.com/ollama/ollama/issues/352 | 1,851,591,225 | I_kwDOJ0Z1Ps5uXQo5 | 352 | crash on allocated size greater than the recommended max working set size | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-08-15T14:57:48 | 2023-09-07T13:35:01 | 2023-09-07T13:35:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When trying to load a large context window ollama crashed due to llama.cpp throwing an exception:
```
size = 160.00 MB, (12018.69 / 10922.67), warning: current allocated size is greater than the recommended max working set size
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: ggml-meta... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/352/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/352/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5673 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5673/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5673/comments | https://api.github.com/repos/ollama/ollama/issues/5673/events | https://github.com/ollama/ollama/issues/5673 | 2,406,889,988 | I_kwDOJ0Z1Ps6PdjoE | 5,673 | Ollama spins up USB HDD | {
"login": "bkev",
"id": 10973030,
"node_id": "MDQ6VXNlcjEwOTczMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/10973030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkev",
"html_url": "https://github.com/bkev",
"followers_url": "https://api.github.com/users/bkev/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-07-13T12:56:21 | 2024-09-30T22:55:49 | 2024-09-30T22:55:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Whenever I run an Ollama model, Ollama now spins up my external USB hard drive despite not needing to, as all the models are on the internal drive.
I can't say I've always noticed it doing this, although it has always spun up the hard drive when upgrading as it seems to scan USB?
Is there... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5673/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6265 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6265/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6265/comments | https://api.github.com/repos/ollama/ollama/issues/6265/events | https://github.com/ollama/ollama/issues/6265 | 2,456,705,221 | I_kwDOJ0Z1Ps6SbljF | 6,265 | Not a feature request, not a bug, problem with LLama3.1 | {
"login": "airdogvan",
"id": 31630759,
"node_id": "MDQ6VXNlcjMxNjMwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31630759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airdogvan",
"html_url": "https://github.com/airdogvan",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 8 | 2024-08-08T22:01:09 | 2024-08-11T20:17:23 | 2024-08-09T22:33:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have several models all running fine on ollama, including Llama3. Lama3.1 gives very long answers, then repeats them and finally prints random characters and if I didn't use the interface to stop seemingly would go on forever.
Running llama3.1 with the same parameters that seem to be ok with all other models.
... | {
"login": "airdogvan",
"id": 31630759,
"node_id": "MDQ6VXNlcjMxNjMwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31630759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airdogvan",
"html_url": "https://github.com/airdogvan",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6265/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1258 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1258/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1258/comments | https://api.github.com/repos/ollama/ollama/issues/1258/events | https://github.com/ollama/ollama/pull/1258 | 2,008,889,413 | PR_kwDOJ0Z1Ps5gQ9B8 | 1,258 | warn if running a ggml model file | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 1 | 2023-11-23T23:29:53 | 2023-12-06T23:54:34 | 2023-11-24T19:02:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1258",
"html_url": "https://github.com/ollama/ollama/pull/1258",
"diff_url": "https://github.com/ollama/ollama/pull/1258.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1258.patch",
"merged_at": null
} | If the model a user is running uses the ggml runtime, log a warning that prompts them to check for an update and try to pull the GGUF version of the model.
```
ollama run orca-mini
This model requires an update to work in future versions of Ollama. Check for update now? (y/n) y
pulling manifest
pulling 4de14feaa... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1258/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2846 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2846/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2846/comments | https://api.github.com/repos/ollama/ollama/issues/2846/events | https://github.com/ollama/ollama/issues/2846 | 2,162,262,359 | I_kwDOJ0Z1Ps6A4YFX | 2,846 | /read {filename} command to read a prompt from a file | {
"login": "nyimbi",
"id": 2156185,
"node_id": "MDQ6VXNlcjIxNTYxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2156185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nyimbi",
"html_url": "https://github.com/nyimbi",
"followers_url": "https://api.github.com/users/nyimbi/foll... | [] | open | false | null | [] | null | 0 | 2024-02-29T23:24:38 | 2024-02-29T23:24:38 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be super useful to be able to read a prompt from a file and execute it. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2846/timeline | null | null | false |
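Until a built-in `/read` command exists, the same effect is easy to get outside the REPL. A minimal sketch, assuming a local server, a `prompt.txt`, and an illustrative model name:
```python
import json
import urllib.request

# Sketch: read a prompt from a file and execute it via /api/generate.
# "prompt.txt" and the model name are illustrative assumptions.
with open("prompt.txt", encoding="utf-8") as f:
    prompt = f.read()

body = json.dumps({"model": "llama3", "prompt": prompt,
                   "stream": False}).encode()
req = urllib.request.Request("http://localhost:11434/api/generate", data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```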
https://api.github.com/repos/ollama/ollama/issues/5404 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5404/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5404/comments | https://api.github.com/repos/ollama/ollama/issues/5404/events | https://github.com/ollama/ollama/issues/5404 | 2,383,291,047 | I_kwDOJ0Z1Ps6ODiKn | 5,404 | ollama create model success but ps command returns empty | {
"login": "tammypi",
"id": 4264858,
"node_id": "MDQ6VXNlcjQyNjQ4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4264858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tammypi",
"html_url": "https://github.com/tammypi",
"followers_url": "https://api.github.com/users/tammypi/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-01T09:13:32 | 2024-07-02T11:19:00 | 2024-07-02T11:19:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I ran the `ollama create emailphishing -f emailphishing.mf` command, and it printed "success":

When I used command `ollama ps`, and it returned empty list:
 | {
"login": "uneuro",
"id": 5337885,
"node_id": "MDQ6VXNlcjUzMzc4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5337885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uneuro",
"html_url": "https://github.com/uneuro",
"followers_url": "https://api.github.com/users/uneuro/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 6 | 2024-02-23T17:50:56 | 2024-12-19T21:37:07 | 2024-12-19T21:37:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
I have 2x7900xtx
If I close Ollama after each request and specify only 1 GPU, it runs well.
I tried 8 times to run the Ollama server and close it after a request; at some point it broke too, because closing wasn't cleari...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2713/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7706 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7706/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7706/comments | https://api.github.com/repos/ollama/ollama/issues/7706/events | https://github.com/ollama/ollama/pull/7706 | 2,665,911,664 | PR_kwDOJ0Z1Ps6CKKAi | 7,706 | feat: add VT chat app to README | {
"login": "vinhnx",
"id": 1097578,
"node_id": "MDQ6VXNlcjEwOTc1Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1097578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinhnx",
"html_url": "https://github.com/vinhnx",
"followers_url": "https://api.github.com/users/vinhnx/foll... | [] | closed | false | null | [] | null | 1 | 2024-11-17T14:07:38 | 2024-11-18T03:54:29 | 2024-11-17T22:35:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7706",
"html_url": "https://github.com/ollama/ollama/pull/7706",
"diff_url": "https://github.com/ollama/ollama/pull/7706.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7706.patch",
"merged_at": "2024-11-17T22:35:41"
Add VT app, a minimal multimodal AI chat app with dynamic conversation routing, supporting models backed by Ollama.
Thank you! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7706/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3627 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3627/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3627/comments | https://api.github.com/repos/ollama/ollama/issues/3627/events | https://github.com/ollama/ollama/pull/3627 | 2,241,641,473 | PR_kwDOJ0Z1Ps5skjK8 | 3,627 | Update llama.cpp submodule to `4bd0f93` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-04-13T16:16:29 | 2024-04-15T11:55:02 | 2024-04-13T17:43:02 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3627",
"html_url": "https://github.com/ollama/ollama/pull/3627",
"diff_url": "https://github.com/ollama/ollama/pull/3627.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3627.patch",
"merged_at": "2024-04-13T17:43:02"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3627/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2528/comments | https://api.github.com/repos/ollama/ollama/issues/2528/events | https://github.com/ollama/ollama/pull/2528 | 2,137,579,266 | PR_kwDOJ0Z1Ps5nCGJN | 2,528 | Explicitly disable AVX2 on GPU builds | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-02-15T22:52:34 | 2024-02-19T21:13:08 | 2024-02-16T00:06:34 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2528",
"html_url": "https://github.com/ollama/ollama/pull/2528",
"diff_url": "https://github.com/ollama/ollama/pull/2528.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2528.patch",
"merged_at": null
} | Even though we weren't setting it to on, somewhere in the cmake config it was getting toggled on. By explicitly setting it to off, we get `/arch:AVX` as intended.
Fixes #2527
Input:
```
generating config with: cmake -S ../llama.cpp -B ../llama.cpp/build/windows/amd64/cuda_v11.3 -DBUILD_SHARED_LIBS=on -DLLAMA_... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2528/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3446 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3446/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3446/comments | https://api.github.com/repos/ollama/ollama/issues/3446/events | https://github.com/ollama/ollama/issues/3446 | 2,219,564,146 | I_kwDOJ0Z1Ps6ES9xy | 3,446 | ollama not using AMD GPU on linux | {
"login": "jab416171",
"id": 345752,
"node_id": "MDQ6VXNlcjM0NTc1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/345752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jab416171",
"html_url": "https://github.com/jab416171",
"followers_url": "https://api.github.com/users/jab4... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-04-02T04:57:17 | 2024-05-05T18:17:07 | 2024-05-05T18:17:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama is only using my CPU. I've tried running it with `ROCR_VISIBLE_DEVICES=0 ollama serve` but that doesn't seem to change anything.
```
time=2024-04-01T22:37:03.207-06:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-01T22:37:03.207-... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3446/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1659 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1659/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1659/comments | https://api.github.com/repos/ollama/ollama/issues/1659/events | https://github.com/ollama/ollama/issues/1659 | 2,052,724,559 | I_kwDOJ0Z1Ps56WhdP | 1,659 | Ollama push fails on slower downloads with a 403 | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 2 | 2023-12-21T16:25:44 | 2024-03-11T22:40:01 | 2024-03-11T22:40:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have a model I want to push but at only a 35Mbps upload speed (thanks Xfinity Gigabit), it takes about 25 minutes to upload. The problem is that when it takes longer than 20 minutes, it fails with this error:
```
ollama push mattw/gpt4-x-alpaca:latest
retrieving manifest
pushing 6bccfcf77d21... 31% ▕█████████... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1659/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8055 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8055/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8055/comments | https://api.github.com/repos/ollama/ollama/issues/8055/events | https://github.com/ollama/ollama/pull/8055 | 2,734,291,036 | PR_kwDOJ0Z1Ps6E7lK- | 8,055 | llama: enable JSON schema key ordering for generating grammars | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-12-11T23:57:04 | 2024-12-12T01:17:38 | 2024-12-12T01:17:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8055",
"html_url": "https://github.com/ollama/ollama/pull/8055",
"diff_url": "https://github.com/ollama/ollama/pull/8055.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8055.patch",
"merged_at": "2024-12-12T01:17:36"
} | Will do a follow-up PR for updates to the command line with format | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8055/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7806 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7806/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7806/comments | https://api.github.com/repos/ollama/ollama/issues/7806/events | https://github.com/ollama/ollama/issues/7806 | 2,685,230,463 | I_kwDOJ0Z1Ps6gDV1_ | 7,806 | Context length not being updated | {
"login": "landoncrabtree",
"id": 34496757,
"node_id": "MDQ6VXNlcjM0NDk2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/34496757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/landoncrabtree",
"html_url": "https://github.com/landoncrabtree",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-11-23T03:35:49 | 2024-11-23T17:19:15 | 2024-11-23T17:19:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```bash
ollama show llama3.2
  Model
    architecture        llama
    parameters          3.2B
    context length      131072
    embedding length    3072
    quantization        Q4_K_M

  Parameters
    stop    "<|start_header_id|>"
    stop    "<|end_header_id|>"
    stop    "<|... | {
"login": "landoncrabtree",
"id": 34496757,
"node_id": "MDQ6VXNlcjM0NDk2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/34496757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/landoncrabtree",
"html_url": "https://github.com/landoncrabtree",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7806/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2145/comments | https://api.github.com/repos/ollama/ollama/issues/2145/events | https://github.com/ollama/ollama/issues/2145 | 2,094,713,641 | I_kwDOJ0Z1Ps582ssp | 2,145 | Streaming response with `text/event-stream` | {
"login": "radames",
"id": 102277,
"node_id": "MDQ6VXNlcjEwMjI3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/102277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radames",
"html_url": "https://github.com/radames",
"followers_url": "https://api.github.com/users/radames/fo... | [] | closed | false | null | [] | null | 2 | 2024-01-22T20:41:37 | 2024-03-11T19:22:55 | 2024-03-11T19:20:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Are you still considering adding `text/event-stream` for the Streaming Response? Reading #294, it might make sense to have that option for browser-only clients.
For reference, here is a JavaScript client for text streaming that works on both the browser and Node.js.
https://github.com/huggingface/huggingface.js/bl... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2145/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2145/timeline | null | not_planned | false |
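The streaming question in the row above comes up often: Ollama's default streaming format is newline-delimited JSON rather than SSE, and a client can already consume it line by line. A minimal Go sketch, assuming a local server on the default port `11434` and a pulled `llama3` model:

```go
// Consume Ollama's default newline-delimited JSON stream from /api/generate.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body := []byte(`{"model": "llama3", "prompt": "Why is the sky blue?"}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each line of the streamed response body is one JSON object.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		var chunk struct {
			Response string `json:"response"`
			Done     bool   `json:"done"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			panic(err)
		}
		fmt.Print(chunk.Response)
		if chunk.Done {
			break
		}
	}
}
```

An SSE (`text/event-stream`) variant would differ only in the framing of each event; the per-chunk JSON payload would stay the same.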
https://api.github.com/repos/ollama/ollama/issues/2845 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2845/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2845/comments | https://api.github.com/repos/ollama/ollama/issues/2845/events | https://github.com/ollama/ollama/issues/2845 | 2,162,137,372 | I_kwDOJ0Z1Ps6A35kc | 2,845 | Multiple requests at once | {
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-02-29T21:31:55 | 2024-03-01T01:01:04 | 2024-03-01T01:01:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would take more computing power on the user's machine, but it would allow Ollama to handle multiple requests at once.
Let's say you have two terminal windows running and you ask the AI in Window 1 to do X, and ask the AI in Window 2, which will be using either the same model or a different model, to do Y a... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2845/timeline | null | completed | false |
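For the request above, nothing stops a client from issuing requests from two windows at once; whether they run in parallel server-side depends on the server's scheduling (newer releases expose `OLLAMA_NUM_PARALLEL`, while older builds simply queue the second request). A sketch in Go that fires two prompts concurrently, assuming a local server and a `llama3` model:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"sync"
)

// generate sends one non-streaming request and returns the raw response body.
func generate(model, prompt string) string {
	payload := fmt.Sprintf(`{"model":%q,"prompt":%q,"stream":false}`, model, prompt)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewBufferString(payload))
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	return string(out)
}

func main() {
	var wg sync.WaitGroup
	for _, prompt := range []string{"do X", "do Y"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(generate("llama3", p))
		}(prompt)
	}
	wg.Wait()
}
```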
https://api.github.com/repos/ollama/ollama/issues/5212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5212/comments | https://api.github.com/repos/ollama/ollama/issues/5212/events | https://github.com/ollama/ollama/pull/5212 | 2,367,611,285 | PR_kwDOJ0Z1Ps5zPXlQ | 5,212 | build: add source label to Dockerfile | {
"login": "umglurf",
"id": 15076744,
"node_id": "MDQ6VXNlcjE1MDc2NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15076744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umglurf",
"html_url": "https://github.com/umglurf",
"followers_url": "https://api.github.com/users/umglur... | [] | closed | false | null | [] | null | 2 | 2024-06-22T06:25:02 | 2024-11-22T09:49:59 | 2024-11-21T11:16:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5212",
"html_url": "https://github.com/ollama/ollama/pull/5212",
"diff_url": "https://github.com/ollama/ollama/pull/5212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5212.patch",
"merged_at": null
} | This allows tools such as dependabot and renovate
to find the source and changelog | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5212/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4790 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4790/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4790/comments | https://api.github.com/repos/ollama/ollama/issues/4790/events | https://github.com/ollama/ollama/issues/4790 | 2,329,844,537 | I_kwDOJ0Z1Ps6K3ps5 | 4,790 | command-r:35b uses too much memory | {
"login": "Zig1375",
"id": 2699034,
"node_id": "MDQ6VXNlcjI2OTkwMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2699034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zig1375",
"html_url": "https://github.com/Zig1375",
"followers_url": "https://api.github.com/users/Zig1375/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-06-02T20:15:38 | 2024-06-25T17:11:14 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My PC configuration is:
- GPU - Nvidia RTX 4070 (12Gb)
- 64 GB RAM
- When I do not use Ollama: 11.9 GB RAM is used
- When I use Ollama with the default settings: 33.7 GB RAM is used
- `num_ctx` = 4k (4,096), then **35.1** GB RAM is used
- `num_ctx` = 8k (8,192), then **39.9** GB RAM is u... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4790/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4790/timeline | null | null | false |
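The memory growth reported above tracks the KV cache, which scales with the context window. For reference, `num_ctx` can be set per request through the `options` field of `/api/generate`; a minimal Go sketch assuming the same `command-r:35b` model and a local server:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// options.num_ctx overrides the model's default context window for this
	// request only; larger values grow the KV cache and RAM/VRAM usage.
	payload := []byte(`{
		"model": "command-r:35b",
		"prompt": "hello",
		"stream": false,
		"options": {"num_ctx": 4096}
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```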
https://api.github.com/repos/ollama/ollama/issues/8353 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8353/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8353/comments | https://api.github.com/repos/ollama/ollama/issues/8353/events | https://github.com/ollama/ollama/issues/8353 | 2,776,531,304 | I_kwDOJ0Z1Ps6lfoFo | 8,353 | FROM path resolution uses working directory instead of Modelfile location | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 0 | 2025-01-08T23:28:17 | 2025-01-11T00:14:09 | 2025-01-11T00:14:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Description:**
When using a relative path in a FROM statement within a Modelfile, the path is resolved relative to the current working directory where the ollama command is executed, rather than relative to the Modelfile's location. This makes it difficult to create portable Modelfiles that... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8353/timeline | null | completed | false |
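The fix this issue asks for amounts to resolving a relative `FROM` value against the Modelfile's own directory instead of the process working directory. A minimal sketch of that resolution logic (function names are illustrative, not Ollama's internals):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveFrom anchors a relative FROM path at the Modelfile's directory.
func resolveFrom(modelfilePath, fromValue string) string {
	if filepath.IsAbs(fromValue) {
		return fromValue
	}
	return filepath.Join(filepath.Dir(modelfilePath), fromValue)
}

func main() {
	fmt.Println(resolveFrom("/models/Modelfile", "./weights.gguf"))
	// -> /models/weights.gguf, regardless of where the command was run
}
```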
https://api.github.com/repos/ollama/ollama/issues/3154 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3154/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3154/comments | https://api.github.com/repos/ollama/ollama/issues/3154/events | https://github.com/ollama/ollama/issues/3154 | 2,187,316,395 | I_kwDOJ0Z1Ps6CX8yr | 3,154 | Why Ollama is so terribly slow when I set format="json" | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 14 | 2024-03-14T21:43:14 | 2024-09-05T23:47:21 | 2024-03-16T15:08:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use format="json" the speed is extremely slow. However, I just tried llamafile with JSON output with the same prompt. What takes Ollama to response in two minutes, takes llamafile of the same model a few seconds. Please advise, if this issue is not to be sorted, obviously Ollama is not a suitable choice for d... | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3154/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8630 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8630/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8630/comments | https://api.github.com/repos/ollama/ollama/issues/8630/events | https://github.com/ollama/ollama/issues/8630 | 2,815,542,115 | I_kwDOJ0Z1Ps6n0cNj | 8,630 | loss of speech | {
"login": "oguzhanet",
"id": 77545698,
"node_id": "MDQ6VXNlcjc3NTQ1Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/77545698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oguzhanet",
"html_url": "https://github.com/oguzhanet",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2025-01-28T12:37:29 | 2025-01-28T14:04:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, I am using llama3.1:8b. When I stop and reopen the application, the old chat disappears. How can I prevent this?
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8630/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3879 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3879/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3879/comments | https://api.github.com/repos/ollama/ollama/issues/3879/events | https://github.com/ollama/ollama/pull/3879 | 2,261,507,878 | PR_kwDOJ0Z1Ps5tnUnV | 3,879 | Use ReadFull over CopyN when decoding GGUFs | {
"login": "brycereitano",
"id": 1928691,
"node_id": "MDQ6VXNlcjE5Mjg2OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brycereitano",
"html_url": "https://github.com/brycereitano",
"followers_url": "https://api.github.com... | [] | open | false | null | [] | null | 1 | 2024-04-24T14:57:36 | 2024-04-26T00:17:10 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3879",
"html_url": "https://github.com/ollama/ollama/pull/3879",
"diff_url": "https://github.com/ollama/ollama/pull/3879.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3879.patch",
"merged_at": null
} | Opting to use `io.ReadFull` with preallocated `[]byte` slices, as `bytes.Buffer` requires multiple grows/allocs to read in long strings.
In addition, preallocating the array slices prevents underlying allocs when appending to the slice.
I observed real-world performance improvements, as well as by using a small micr... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3879/timeline | null | null | true |
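The PR above swaps buffer-growing reads for single `io.ReadFull` calls into preallocated slices. A standalone sketch of the pattern on a GGUF-style length-prefixed string (the layout is simplified for illustration):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readString reads a uint64 length then exactly that many bytes,
// using one allocation and one ReadFull instead of a growing buffer.
func readString(r io.Reader) (string, error) {
	var n uint64
	if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
		return "", err
	}
	buf := make([]byte, n) // single allocation sized up front
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}

func main() {
	var b bytes.Buffer
	binary.Write(&b, binary.LittleEndian, uint64(5))
	b.WriteString("hello")
	s, err := readString(&b)
	fmt.Println(s, err)
}
```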
https://api.github.com/repos/ollama/ollama/issues/7954 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7954/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7954/comments | https://api.github.com/repos/ollama/ollama/issues/7954/events | https://github.com/ollama/ollama/pull/7954 | 2,721,208,460 | PR_kwDOJ0Z1Ps6EOfng | 7,954 | wip: next ollama runner build updates | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-12-05T19:15:47 | 2025-01-16T17:34:46 | 2025-01-16T17:34:46 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7954",
"html_url": "https://github.com/ollama/ollama/pull/7954",
"diff_url": "https://github.com/ollama/ollama/pull/7954.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7954.patch",
"merged_at": null
} | Carries #7499 and adjusts the layout for the new runner | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7954/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5078 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5078/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5078/comments | https://api.github.com/repos/ollama/ollama/issues/5078/events | https://github.com/ollama/ollama/pull/5078 | 2,355,788,027 | PR_kwDOJ0Z1Ps5ynHod | 5,078 | Add Chinese translation of README | {
"login": "sumingcheng",
"id": 21992204,
"node_id": "MDQ6VXNlcjIxOTkyMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumingcheng",
"html_url": "https://github.com/sumingcheng",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-06-16T13:59:57 | 2024-06-16T14:00:48 | 2024-06-16T14:00:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5078",
"html_url": "https://github.com/ollama/ollama/pull/5078",
"diff_url": "https://github.com/ollama/ollama/pull/5078.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5078.patch",
"merged_at": null
} | This pull request adds a Chinese translation of the README file to help native Chinese speakers better understand the project.
| {
"login": "sumingcheng",
"id": 21992204,
"node_id": "MDQ6VXNlcjIxOTkyMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21992204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumingcheng",
"html_url": "https://github.com/sumingcheng",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5078/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7033 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7033/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7033/comments | https://api.github.com/repos/ollama/ollama/issues/7033/events | https://github.com/ollama/ollama/issues/7033 | 2,554,854,330 | I_kwDOJ0Z1Ps6YR_u6 | 7,033 | Using smaller context size shows CUDA error: CUBLAS_STATUS_NOT_INITIALIZED | {
"login": "aamsur-933",
"id": 74174455,
"node_id": "MDQ6VXNlcjc0MTc0NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/74174455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aamsur-933",
"html_url": "https://github.com/aamsur-933",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-29T10:15:36 | 2025-01-06T07:40:36 | 2025-01-06T07:40:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, I have a PC with an NVIDIA GPU with 10 GB of VRAM. I have installed Ollama and the **deepseek-coder-v2:16b** model on it, and I will use continue.dev in my VS Code to communicate with Ollama.
I have a problem when I set the ctx size in the plugin to 4K, like so: `"contextLength": 4096, `. With deepseek-coder-v2:16b I go...
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7033/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1038 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1038/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1038/comments | https://api.github.com/repos/ollama/ollama/issues/1038/events | https://github.com/ollama/ollama/pull/1038 | 1,982,633,921 | PR_kwDOJ0Z1Ps5e4AeT | 1,038 | Response preamble for interactive terminal | {
"login": "eyelight",
"id": 225149,
"node_id": "MDQ6VXNlcjIyNTE0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/225149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyelight",
"html_url": "https://github.com/eyelight",
"followers_url": "https://api.github.com/users/eyeligh... | [] | closed | false | null | [] | null | 1 | 2023-11-08T03:03:25 | 2023-11-09T00:50:39 | 2023-11-09T00:50:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1038",
"html_url": "https://github.com/ollama/ollama/pull/1038",
"diff_url": "https://github.com/ollama/ollama/pull/1038.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1038.patch",
"merged_at": null
} | This PR updates the interactive terminal experience to:
- print the active model just above the model's output
- provide `/set preamble` and `/set nopreamble` to turn this behavior on & off
- in both cases, add an extra line to separate prompt & response
 ollama model download will not keep on downloading when reopen ollama | {
"login": "waldolin",
"id": 20750014,
"node_id": "MDQ6VXNlcjIwNzUwMDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/20750014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waldolin",
"html_url": "https://github.com/waldolin",
"followers_url": "https://api.github.com/users/wal... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-05-31T17:33:13 | 2024-05-31T18:35:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ## Ollama model download will not keep downloading when Ollama is reopened or closed accidentally.
```
C:\Users\lin\AppData\Local\Ollama>ollama run gemma:7b
pulling manifest
pulling ef311de6af9d... 70% ▕███████████████████████████████████████ ▏ 3.5 GB/5.0 GB 3.5 MB/s 7m9s
Error: Post "... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4755/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7590 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7590/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7590/comments | https://api.github.com/repos/ollama/ollama/issues/7590/events | https://github.com/ollama/ollama/issues/7590 | 2,646,538,915 | I_kwDOJ0Z1Ps6dvvqj | 7,590 | GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed | {
"login": "Volker-Weissmann",
"id": 39418860,
"node_id": "MDQ6VXNlcjM5NDE4ODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/39418860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Volker-Weissmann",
"html_url": "https://github.com/Volker-Weissmann",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 19 | 2024-11-09T20:56:01 | 2024-12-08T07:50:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
```
`ollama run llama3.2` and `ollama run llava` works fine.
I h... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7590/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7590/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8414 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8414/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8414/comments | https://api.github.com/repos/ollama/ollama/issues/8414/events | https://github.com/ollama/ollama/issues/8414 | 2,786,257,669 | I_kwDOJ0Z1Ps6mEusF | 8,414 | [Feature] Support Intel GPUs | {
"login": "NeoZhangJianyu",
"id": 46982523,
"node_id": "MDQ6VXNlcjQ2OTgyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/46982523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NeoZhangJianyu",
"html_url": "https://github.com/NeoZhangJianyu",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2025-01-14T04:58:03 | 2025-01-14T06:12:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama had supported by the PR https://github.com/ollama/ollama/pull/2458 merged to support Intel GPU.
But the function disappears now.
I see there are several issues and opened PRs for Intel GPU. But they are too old.
I want to draft PRs to support Intel GPU: dGPU & iGPU (since 11th Core) by including llama.cpp... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8414/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8414/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7618 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7618/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7618/comments | https://api.github.com/repos/ollama/ollama/issues/7618/events | https://github.com/ollama/ollama/issues/7618 | 2,648,549,764 | I_kwDOJ0Z1Ps6d3amE | 7,618 | llama runner process has terminated: signal: segmentation fault (core dumped) | {
"login": "Dhruv-1212",
"id": 132161275,
"node_id": "U_kgDOB-Ce-w",
"avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dhruv-1212",
"html_url": "https://github.com/Dhruv-1212",
"followers_url": "https://api.github.com/users/Dhr... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-11-11T08:24:55 | 2024-11-12T09:36:11 | 2024-11-12T09:36:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Segmentation fault (core dumped) error for snowflake-arctic-embed:latest; other models are working fine.
These are the system logs:
Nov 11 08:26:04 dev-_-aiml-reco ollama.listener[1455]: time=2024-11-11T08:26:04.252Z level=INFO source=server.go:108 msg="system memory" total="29.4 GiB" fr... | {
"login": "Dhruv-1212",
"id": 132161275,
"node_id": "U_kgDOB-Ce-w",
"avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dhruv-1212",
"html_url": "https://github.com/Dhruv-1212",
"followers_url": "https://api.github.com/users/Dhr... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7618/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7531 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7531/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7531/comments | https://api.github.com/repos/ollama/ollama/issues/7531/events | https://github.com/ollama/ollama/issues/7531 | 2,639,156,967 | I_kwDOJ0Z1Ps6dTlbn | 7,531 | Poor acceleration choices with mixed GPUs | {
"login": "cobrafast",
"id": 3317555,
"node_id": "MDQ6VXNlcjMzMTc1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3317555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cobrafast",
"html_url": "https://github.com/cobrafast",
"followers_url": "https://api.github.com/users/co... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-11-06T20:31:57 | 2024-11-08T19:28:09 | 2024-11-08T19:28:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've noticed that Ollama makes poor decisions about acceleration in setups with heterogeneous GPUs. For example, I have a 16 GB VRAM dGPU and a 3 GB VRAM dGPU in my desktop PC, and Ollama seems to only consider the smaller-VRAM GPU, even if I set `CUDA_VISIBLE_DEVICES=0` to only let it compute on th...
"login": "cobrafast",
"id": 3317555,
"node_id": "MDQ6VXNlcjMzMTc1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3317555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cobrafast",
"html_url": "https://github.com/cobrafast",
"followers_url": "https://api.github.com/users/co... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7531/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2608 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2608/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2608/comments | https://api.github.com/repos/ollama/ollama/issues/2608/events | https://github.com/ollama/ollama/issues/2608 | 2,143,657,320 | I_kwDOJ0Z1Ps5_xZ1o | 2,608 | How to identify multimodal models? | {
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-02-20T06:49:04 | 2024-02-20T17:48:41 | 2024-02-20T17:48:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi guys, incredible work with Ollama!
I'm building a client for Ollama and wondering what is the best way to identify multimodal models like `llava` and `bakllava` from the API. I want to display additional UI if the model supports images.
It seems that both `llava` and `bakllava` return a `/api/tags` response containing... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2608/timeline | null | completed | false |
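One answer to the question above is to inspect `/api/show` for the model: multimodal models such as `llava` report a vision family (e.g. `clip`) under `details.families`. The exact field layout is an assumption against current server output, so verify it for your version; a Go sketch:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Post("http://localhost:11434/api/show",
		"application/json", bytes.NewBufferString(`{"name": "llava"}`))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the piece we care about: details.families.
	var info struct {
		Details struct {
			Families []string `json:"families"`
		} `json:"details"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	for _, f := range info.Details.Families {
		if f == "clip" {
			fmt.Println("model accepts images")
		}
	}
}
```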
https://api.github.com/repos/ollama/ollama/issues/3165 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3165/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3165/comments | https://api.github.com/repos/ollama/ollama/issues/3165/events | https://github.com/ollama/ollama/issues/3165 | 2,188,108,288 | I_kwDOJ0Z1Ps6Ca-IA | 3,165 | Support "tool" role in messages | {
"login": "lebrunel",
"id": 124721263,
"node_id": "U_kgDOB28Ybw",
"avatar_url": "https://avatars.githubusercontent.com/u/124721263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lebrunel",
"html_url": "https://github.com/lebrunel",
"followers_url": "https://api.github.com/users/lebrunel/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-03-15T09:42:21 | 2024-07-26T00:46:25 | 2024-07-26T00:46:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
The new Hermes 2 Pro model recommends that results from function calling come in messages with the role "tool", e.g.:
```
<|im_start|>tool
<tool_response>...result here...</tool_response>
<|im_end|>
```
The chat API doesn't support messages with the role "tool"; it treats it as a bad re... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3165/reactions",
"total_count": 35,
"+1": 32,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/3165/timeline | null | completed | false |
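Since this issue was closed when tool calling shipped, `/api/chat` now accepts a `"tool"` role for feeding function results back to the model. A Go sketch of such a request; the model name and the `tool_calls` shape follow current API docs and should be treated as illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The third message carries the tool's result back with role "tool".
	payload := []byte(`{
		"model": "llama3.1",
		"messages": [
			{"role": "user", "content": "What is the weather in Toronto?"},
			{"role": "assistant", "content": "", "tool_calls": [
				{"function": {"name": "get_weather", "arguments": {"city": "Toronto"}}}
			]},
			{"role": "tool", "content": "11 degrees and cloudy"}
		],
		"stream": false
	}`)
	resp, err := http.Post("http://localhost:11434/api/chat",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```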
https://api.github.com/repos/ollama/ollama/issues/3862 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3862/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3862/comments | https://api.github.com/repos/ollama/ollama/issues/3862/events | https://github.com/ollama/ollama/issues/3862 | 2,260,214,188 | I_kwDOJ0Z1Ps6GuCGs | 3,862 | Please use comfyUI-like to realize Automatic Programming? | {
"login": "qwas982",
"id": 10122306,
"node_id": "MDQ6VXNlcjEwMTIyMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10122306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qwas982",
"html_url": "https://github.com/qwas982",
"followers_url": "https://api.github.com/users/qwas98... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-04-24T03:20:59 | 2024-04-24T03:20:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | auto coding.
[https://raw.githubusercontent.com/comfyanonymous/ComfyUI/master/comfyui_screenshot.png](url)
As Andrew Ng says, the agentic workflow is much stronger than the original GPT-3.5.
I know that Ollama can be called using its API. The problem now is how to implement the agent workflow in the UI and complete t...
"url": "https://api.github.com/repos/ollama/ollama/issues/3862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3862/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2772 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2772/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2772/comments | https://api.github.com/repos/ollama/ollama/issues/2772/events | https://github.com/ollama/ollama/pull/2772 | 2,155,466,408 | PR_kwDOJ0Z1Ps5n_CFa | 2,772 | Refine container image build script | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-02-27T01:26:58 | 2024-02-27T19:29:11 | 2024-02-27T19:29:08 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2772",
"html_url": "https://github.com/ollama/ollama/pull/2772",
"diff_url": "https://github.com/ollama/ollama/pull/2772.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2772.patch",
"merged_at": "2024-02-27T19:29:08"
} | Allow overriding the platform, image name, and the `latest` tag for standard and ROCm images.
Fixes #2721 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2772/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7576 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7576/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7576/comments | https://api.github.com/repos/ollama/ollama/issues/7576/events | https://github.com/ollama/ollama/issues/7576 | 2,644,502,171 | I_kwDOJ0Z1Ps6dn-ab | 7,576 | num_ctx causes 100% CPU with no GPU usage | {
"login": "aaronbolton",
"id": 18211890,
"node_id": "MDQ6VXNlcjE4MjExODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/18211890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronbolton",
"html_url": "https://github.com/aaronbolton",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 6 | 2024-11-08T16:22:48 | 2024-12-29T06:16:17 | 2024-11-12T18:43:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've recently tried to create a new model with only the parameter num_ctx. When I run the model, it shows 100% CPU with no GPU usage. Even if the model were too big, I would assume it would report GPU/CPU 100%/???%
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0 | {
"login": "aaronbolton",
"id": 18211890,
"node_id": "MDQ6VXNlcjE4MjExODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/18211890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronbolton",
"html_url": "https://github.com/aaronbolton",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7576/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7576/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/724 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/724/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/724/comments | https://api.github.com/repos/ollama/ollama/issues/724/events | https://github.com/ollama/ollama/pull/724 | 1,930,906,854 | PR_kwDOJ0Z1Ps5cJeKg | 724 | improve vram safety with 5% vram memory buffer | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 4 | 2023-10-06T20:39:38 | 2023-10-13T13:27:28 | 2023-10-10T20:16:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/724",
"html_url": "https://github.com/ollama/ollama/pull/724",
"diff_url": "https://github.com/ollama/ollama/pull/724.diff",
"patch_url": "https://github.com/ollama/ollama/pull/724.patch",
"merged_at": "2023-10-10T20:16:09"
} | In testing how much VRAM should be allocated, we typically used a model which could be entirely loaded into VRAM. This masked an issue: when a model is larger than the available VRAM, it is possible to consume all available VRAM and fail with an error:
```
Error: llama runner failed: out of memory
```
This change le... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/724/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/724/timeline | null | null | true |
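The idea in the PR above is to hold back a safety margin before estimating how much of a model fits on the GPU. A toy sketch of that arithmetic (the 5% figure comes from the PR title; the rest is illustrative, not Ollama's actual allocator):

```go
package main

import "fmt"

// usableVRAM reserves 5% of total VRAM as headroom before layer placement.
func usableVRAM(totalBytes uint64) uint64 {
	buffer := totalBytes / 20 // hold back 5% as a safety margin
	return totalBytes - buffer
}

func main() {
	total := uint64(8) << 30 // e.g. an 8 GiB card
	fmt.Printf("usable: %d of %d bytes\n", usableVRAM(total), total)
}
```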
https://api.github.com/repos/ollama/ollama/issues/7579 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7579/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7579/comments | https://api.github.com/repos/ollama/ollama/issues/7579/events | https://github.com/ollama/ollama/pull/7579 | 2,644,647,728 | PR_kwDOJ0Z1Ps6BWVAQ | 7,579 | Set macos min version for all architectures | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-11-08T17:16:32 | 2024-11-08T17:27:07 | 2024-11-08T17:27:04 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7579",
"html_url": "https://github.com/ollama/ollama/pull/7579",
"diff_url": "https://github.com/ollama/ollama/pull/7579.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7579.patch",
"merged_at": "2024-11-08T17:27:04"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7579/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/911 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/911/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/911/comments | https://api.github.com/repos/ollama/ollama/issues/911/events | https://github.com/ollama/ollama/issues/911 | 1,962,679,881 | I_kwDOJ0Z1Ps50_B5J | 911 | When out of disk space, Ollama still retries to download | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-10-26T04:27:09 | 2023-10-26T19:24:22 | 2023-10-26T19:24:22 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
OLLAMA_HOST=https://redacted:443 ollama pull llama2:13b
pulling manifest
pulling 29fdb92e57cf... 5% |█ | (408 MB/7.4 GB, 99 MB/s) [4s:1m10s]Error: max retries exceeded
```
```
OLLAMA_HOST=https://redacted:443 ollama pull llama2:13b
pulling manifest
pulling 29fdb92e57cf... ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/911/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/211/comments | https://api.github.com/repos/ollama/ollama/issues/211/events | https://github.com/ollama/ollama/pull/211 | 1,820,841,544 | PR_kwDOJ0Z1Ps5WXETR | 211 | update llama.cpp | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-25T17:51:16 | 2023-07-27T23:57:04 | 2023-07-27T23:57:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/211",
"html_url": "https://github.com/ollama/ollama/pull/211",
"diff_url": "https://github.com/ollama/ollama/pull/211.diff",
"patch_url": "https://github.com/ollama/ollama/pull/211.patch",
"merged_at": "2023-07-27T23:57:03"
} | update to eb542d39324574a6778fad9ba9e34ba7a14a82a3 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/211/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2981 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2981/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2981/comments | https://api.github.com/repos/ollama/ollama/issues/2981/events | https://github.com/ollama/ollama/issues/2981 | 2,173,938,633 | I_kwDOJ0Z1Ps6Bk6vJ | 2,981 | when i restart windows, ollama will open automatically, how can i close the self-start function? | {
"login": "08183080",
"id": 51738561,
"node_id": "MDQ6VXNlcjUxNzM4NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/51738561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/08183080",
"html_url": "https://github.com/08183080",
"followers_url": "https://api.github.com/users/081... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-03-07T14:01:52 | 2024-04-15T21:59:43 | 2024-03-11T22:25:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I restart Windows, Ollama opens automatically. How can I disable this self-start behavior? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2981/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/ollama/ollama/issues/2981/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6282 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6282/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6282/comments | https://api.github.com/repos/ollama/ollama/issues/6282/events | https://github.com/ollama/ollama/pull/6282 | 2,457,641,055 | PR_kwDOJ0Z1Ps537xTf | 6,282 | AMD integrated graphic on linux kernel 6.9.9+, GTT memory, loading freeze fix | {
"login": "MaciejMogilany",
"id": 56433591,
"node_id": "MDQ6VXNlcjU2NDMzNTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/56433591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaciejMogilany",
"html_url": "https://github.com/MaciejMogilany",
"followers_url": "https://api.gi... | [] | open | false | null | [] | null | 103 | 2024-08-09T10:43:59 | 2025-01-28T22:35:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6282",
"html_url": "https://github.com/ollama/ollama/pull/6282",
"diff_url": "https://github.com/ollama/ollama/pull/6282.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6282.patch",
"merged_at": null
} | This commit reflects changes in Linux kernel 6.9.9+ on small APUs. LLMs load into GTT memory, which is set to 1/2 of RAM by default and can be changed (a sysfs sketch follows this record). This allows using bigger models with an AMD APU without a VRAM carveout, and loading models bigger than the max VRAM carveout of 16GiB. No hacks like [torch-apu-helper](https://github.... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6282/reactions",
"total_count": 26,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6282/timeline | null | null | true |
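The PR above relies on the amdgpu driver's GTT pool, whose size is visible through sysfs. A small Go sketch that reads those counters — `card0` is an assumption (the card index varies per system), and while `mem_info_vram_total` and `mem_info_gtt_total` are standard amdgpu attribute names, verify them on your own kernel:

```
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readSysfsBytes parses a single-number sysfs file such as
// /sys/class/drm/card0/device/mem_info_gtt_total.
func readSysfsBytes(path string) (uint64, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	for _, name := range []string{"mem_info_vram_total", "mem_info_gtt_total"} {
		b, err := readSysfsBytes("/sys/class/drm/card0/device/" + name)
		if err != nil {
			fmt.Println(name, "unavailable:", err)
			continue
		}
		fmt.Printf("%s: %d MiB\n", name, b>>20)
	}
}
```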
https://api.github.com/repos/ollama/ollama/issues/5779 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5779/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5779/comments | https://api.github.com/repos/ollama/ollama/issues/5779/events | https://github.com/ollama/ollama/pull/5779 | 2,417,118,018 | PR_kwDOJ0Z1Ps510Pa5 | 5,779 | server: check for empty tools array too | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-18T18:43:28 | 2024-07-18T18:44:59 | 2024-07-18T18:44:58 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5779",
"html_url": "https://github.com/ollama/ollama/pull/5779",
"diff_url": "https://github.com/ollama/ollama/pull/5779.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5779.patch",
"merged_at": "2024-07-18T18:44:58"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5779/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7376 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7376/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7376/comments | https://api.github.com/repos/ollama/ollama/issues/7376/events | https://github.com/ollama/ollama/issues/7376 | 2,616,123,180 | I_kwDOJ0Z1Ps6b7t8s | 7,376 | Is there a way to track tokens/context window in real-time? | {
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/roboto... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-10-26T20:20:44 | 2024-12-02T14:44:50 | 2024-12-02T14:44:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'd like to implement a counter in a front-end app to track the tokens used, in order to see if I'm close to exceeding the context window.
This is useful to me because if I feed a large document into the model, I'd like to know when it's "too large" and perhaps break it down or do something else (a sketch using the API's token counters follows this record). | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7376/timeline | null | completed | false |
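For the question above, Ollama's documented generate/chat responses already carry the counters needed for a running total: `prompt_eval_count` and `eval_count`. A Go sketch follows — it assumes a local server on the default port and a pulled `llama3.2` model, and note that `prompt_eval_count` can understate the true prompt length when part of the prompt was served from cache:

```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// genResponse keeps only the token counters from the final
// /api/generate response.
type genResponse struct {
	Response        string `json:"response"`
	PromptEvalCount int    `json:"prompt_eval_count"`
	EvalCount       int    `json:"eval_count"`
}

func main() {
	body, err := json.Marshal(map[string]any{
		"model":  "llama3.2",
		"prompt": "Why is the sky blue?",
		"stream": false,
	})
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var r genResponse
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		log.Fatal(err)
	}
	used := r.PromptEvalCount + r.EvalCount
	fmt.Printf("tokens this turn: %d (prompt %d + completion %d)\n", used, r.PromptEvalCount, r.EvalCount)
}
```

Comparing the accumulated total against the `num_ctx` you set gives a workable "context is getting full" signal for the front-end counter described above.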
https://api.github.com/repos/ollama/ollama/issues/4704 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4704/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4704/comments | https://api.github.com/repos/ollama/ollama/issues/4704/events | https://github.com/ollama/ollama/issues/4704 | 2,323,425,194 | I_kwDOJ0Z1Ps6KfKeq | 4,704 | msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 " | {
"login": "wsry888",
"id": 21898282,
"node_id": "MDQ6VXNlcjIxODk4Mjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/21898282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wsry888",
"html_url": "https://github.com/wsry888",
"followers_url": "https://api.github.com/users/wsry88... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-29T14:05:06 | 2024-06-09T17:13:13 | 2024-06-09T17:13:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### run hhao/openbmb-minicpm-llama3-v-2_5:fp16
msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "
time=2024-05-29T22:03:49.672+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.availab... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4704/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5702 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5702/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5702/comments | https://api.github.com/repos/ollama/ollama/issues/5702/events | https://github.com/ollama/ollama/pull/5702 | 2,408,859,378 | PR_kwDOJ0Z1Ps51ZLwm | 5,702 | Add sidellama link | {
"login": "gyopak",
"id": 25726935,
"node_id": "MDQ6VXNlcjI1NzI2OTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25726935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyopak",
"html_url": "https://github.com/gyopak",
"followers_url": "https://api.github.com/users/gyopak/fo... | [] | closed | false | null | [] | null | 0 | 2024-07-15T14:16:19 | 2024-07-17T17:24:44 | 2024-07-17T17:24:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5702",
"html_url": "https://github.com/ollama/ollama/pull/5702",
"diff_url": "https://github.com/ollama/ollama/pull/5702.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5702.patch",
"merged_at": "2024-07-17T17:24:44"
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5702/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3638 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3638/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3638/comments | https://api.github.com/repos/ollama/ollama/issues/3638/events | https://github.com/ollama/ollama/issues/3638 | 2,242,196,195 | I_kwDOJ0Z1Ps6FpTLj | 3,638 | Error: exception error loading model architecture: unknown model architecture: '' | {
"login": "anubissbe",
"id": 116725818,
"node_id": "U_kgDOBvUYOg",
"avatar_url": "https://avatars.githubusercontent.com/u/116725818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anubissbe",
"html_url": "https://github.com/anubissbe",
"followers_url": "https://api.github.com/users/anubis... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-14T14:30:01 | 2024-04-17T00:47:49 | 2024-04-17T00:47:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The Modelfile cannot be loaded after model creation.
### What did you expect to see?
A working chat interface
### Steps to reproduce
drwho@jarvis:/opt/models/aya-101-GGUF$ ollama create aya -f Modelfile
transferring model data
creating model layer
creating system layer
creating p... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3638/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2687 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2687/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2687/comments | https://api.github.com/repos/ollama/ollama/issues/2687/events | https://github.com/ollama/ollama/issues/2687 | 2,149,493,793 | I_kwDOJ0Z1Ps6AHqwh | 2,687 | update README to add Gemma 2B, 7B model in Model Library Table | {
"login": "adminazhar",
"id": 20738252,
"node_id": "MDQ6VXNlcjIwNzM4MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/20738252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adminazhar",
"html_url": "https://github.com/adminazhar",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-02-22T16:47:55 | 2024-02-22T20:15:49 | 2024-02-22T20:15:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "adminazhar",
"id": 20738252,
"node_id": "MDQ6VXNlcjIwNzM4MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/20738252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adminazhar",
"html_url": "https://github.com/adminazhar",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2687/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/591 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/591/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/591/comments | https://api.github.com/repos/ollama/ollama/issues/591/events | https://github.com/ollama/ollama/pull/591 | 1,912,108,173 | PR_kwDOJ0Z1Ps5bKCub | 591 | unbound max num gpu layers | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-25T18:59:44 | 2023-09-25T22:36:47 | 2023-09-25T22:36:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/591",
"html_url": "https://github.com/ollama/ollama/pull/591",
"diff_url": "https://github.com/ollama/ollama/pull/591.diff",
"patch_url": "https://github.com/ollama/ollama/pull/591.patch",
"merged_at": "2023-09-25T22:36:46"
} | Load as many layers into VRAM as possible, using the model file size as a rough heuristic for the amount of memory required per layer (a sketch follows this record). | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/591/timeline | null | null | true |
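A compact Go sketch of the heuristic PR #591 describes — per-layer cost approximated as file size divided by layer count, clamped to the model's layer total. The function name and the numbers are illustrative, and the omission of KV-cache and scratch-buffer accounting is a deliberate simplification, not the real implementation:

```
package main

import "fmt"

// estimateGPULayers treats fileSize/numLayers as the per-layer memory cost
// and loads as many layers as fit in free VRAM. A real estimate must also
// reserve room for the KV cache and scratch buffers, which are ignored here.
func estimateGPULayers(fileSize uint64, numLayers int, freeVRAM uint64) int {
	if numLayers <= 0 {
		return 0
	}
	perLayer := fileSize / uint64(numLayers)
	if perLayer == 0 {
		return numLayers
	}
	n := int(freeVRAM / perLayer)
	if n > numLayers {
		n = numLayers
	}
	return n
}

func main() {
	// Hypothetical: a 4.4 GB GGUF with 33 layers and 6.4 GB of free VRAM.
	fmt.Println(estimateGPULayers(4_400_000_000, 33, 6_400_000_000))
}
```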
https://api.github.com/repos/ollama/ollama/issues/7547 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7547/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7547/comments | https://api.github.com/repos/ollama/ollama/issues/7547/events | https://github.com/ollama/ollama/issues/7547 | 2,640,324,207 | I_kwDOJ0Z1Ps6dYCZv | 7,547 | Response returns 'null' for 'finish_reason' | {
"login": "debruyckere",
"id": 676943,
"node_id": "MDQ6VXNlcjY3Njk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/676943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/debruyckere",
"html_url": "https://github.com/debruyckere",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 12 | 2024-11-07T08:54:58 | 2024-11-18T17:16:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm using the OpenAI .NET library to connect to Ollama, using the default llama3.2 model. I get an "Unknown ChatFinishReason value." error from the library. You can see in the code below, from ChatFinishReasonExtensions (in the OpenAI lib), that the value returned by Ollama is null.
.
I converted this model using the options "--ctx 8192 --outtype f16 --vocab-type bpe" and quantized the result with the "q4_0" option. Both completed successfully.
But when using ollama to run the result, I got "Error: llama run... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4489/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/47 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/47/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/47/comments | https://api.github.com/repos/ollama/ollama/issues/47/events | https://github.com/ollama/ollama/issues/47 | 1,792,161,831 | I_kwDOJ0Z1Ps5q0jgn | 47 | When running the `ollama` should CLI start the server if it's not running | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-07-06T20:07:46 | 2023-08-02T14:51:25 | 2023-08-02T14:51:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/47/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/47/timeline | null | completed | false |
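The feature in the record above amounts to "ping the server, and spawn it if nothing answers." A hedged Go sketch of that flow — the shipped CLI behavior differs per platform, and the poll count and interval here are arbitrary choices:

```
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"time"
)

const host = "http://localhost:11434"

// serverUp reports whether an Ollama server already answers on the default port.
func serverUp() bool {
	client := http.Client{Timeout: time.Second}
	resp, err := client.Get(host)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if !serverUp() {
		// Start the server in the background and poll until it answers.
		if err := exec.Command("ollama", "serve").Start(); err != nil {
			log.Fatal(err)
		}
		for i := 0; i < 50 && !serverUp(); i++ {
			time.Sleep(100 * time.Millisecond)
		}
	}
	fmt.Println("server ready:", serverUp())
}
```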
https://api.github.com/repos/ollama/ollama/issues/7965 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7965/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7965/comments | https://api.github.com/repos/ollama/ollama/issues/7965/events | https://github.com/ollama/ollama/issues/7965 | 2,722,571,414 | I_kwDOJ0Z1Ps6iRySW | 7,965 | It seems that the new KV cache quantization feature is incorrectly allocating resources. | {
"login": "emzaedu",
"id": 152583617,
"node_id": "U_kgDOCRg9wQ",
"avatar_url": "https://avatars.githubusercontent.com/u/152583617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emzaedu",
"html_url": "https://github.com/emzaedu",
"followers_url": "https://api.github.com/users/emzaedu/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-06T09:53:06 | 2024-12-20T22:19:44 | 2024-12-20T22:19:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
For example (q4_0 kv):
/set parameter num_ctx 88000
Rombos-LLM-V2.6-Qwen-14b-Q4_K_M:latest 81d0d17e9f6a 21 GB 100% GPU 4 minutes from now
However, the actual VRAM usage amounts to 13,880,772K
There is a significant difference between the actual VRAM usage (13.24 GB) and what Ollama rep... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7965/timeline | null | completed | false |
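The mismatch reported above is easier to reason about with the standard KV-cache size formula: K and V tensors for every layer, each holding n_ctx × n_kv_heads × head_dim elements. A Go sketch with hypothetical 14B-class shape parameters — the per-element sizes are ballpark figures (2 bytes for f16, roughly 4.5 bits for a q4_0-style cache), not Ollama's exact accounting:

```
package main

import "fmt"

// kvCacheBytes approximates KV cache size: 2 tensors (K and V) per layer,
// each nCtx*nKVHeads*headDim elements at bytesPerElem bytes.
func kvCacheBytes(nLayers, nCtx, nKVHeads, headDim int, bytesPerElem float64) float64 {
	return 2 * float64(nLayers) * float64(nCtx) * float64(nKVHeads) * float64(headDim) * bytesPerElem
}

func main() {
	// Hypothetical shape for a 14B-class model at num_ctx 88000.
	nLayers, nKVHeads, headDim := 48, 8, 128
	f16 := kvCacheBytes(nLayers, 88000, nKVHeads, headDim, 2.0)   // f16 cache
	q4 := kvCacheBytes(nLayers, 88000, nKVHeads, headDim, 0.5625) // ~q4_0 cache
	fmt.Printf("f16: %.1f GiB, q4_0: %.1f GiB\n", f16/(1<<30), q4/(1<<30))
}
```

A scheduler that budgets the cache at the f16 size while the runner actually allocates the quantized size would be consistent with the kind of gap the report describes.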
https://api.github.com/repos/ollama/ollama/issues/7723 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7723/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7723/comments | https://api.github.com/repos/ollama/ollama/issues/7723/events | https://github.com/ollama/ollama/issues/7723 | 2,667,707,748 | I_kwDOJ0Z1Ps6fAf1k | 7,723 | Can´t use GPU at Ubuntu 22.04 without Docker - permission problems | {
"login": "raullopezgn",
"id": 34060689,
"node_id": "MDQ6VXNlcjM0MDYwNjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/34060689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raullopezgn",
"html_url": "https://github.com/raullopezgn",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | null | [] | null | 26 | 2024-11-18T08:49:40 | 2024-12-02T15:31:17 | 2024-12-02T15:31:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I have been using Jan.ai but I wanted to try other options.
I can't run Ollama with my GPU. I would prefer not to use Docker for security reasons.
Below I provide all the info you may need to help me find a solution. Thank you in advance.
CPU: AMD R... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7723/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8628 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8628/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8628/comments | https://api.github.com/repos/ollama/ollama/issues/8628/events | https://github.com/ollama/ollama/issues/8628 | 2,815,299,219 | I_kwDOJ0Z1Ps6nzg6T | 8,628 | Cannot download Ollama | {
"login": "ichiecodes1",
"id": 168488717,
"node_id": "U_kgDOCgrvDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168488717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ichiecodes1",
"html_url": "https://github.com/ichiecodes1",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 12 | 2025-01-28T10:59:43 | 2025-01-29T23:59:10 | 2025-01-29T23:59:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | please i really want to download this platform but i cant can it be fixed?
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8628/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6228 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6228/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6228/comments | https://api.github.com/repos/ollama/ollama/issues/6228/events | https://github.com/ollama/ollama/issues/6228 | 2,452,899,411 | I_kwDOJ0Z1Ps6SNEZT | 6,228 | llama_init_from_gpt_params: error: failed to load model 'models\gemma-1.1-7b-it.Q4_K_M.gguf' | {
"login": "stephen521",
"id": 33420615,
"node_id": "MDQ6VXNlcjMzNDIwNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/33420615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stephen521",
"html_url": "https://github.com/stephen521",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-07T08:33:33 | 2024-09-02T23:21:12 | 2024-09-02T23:21:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run the command below on a Windows machine (Intel Xeon Silver 421R 2.4GHz, 512m, NVIDIA GeForce RTX 3090), I get the error below:
llama-cli.exe -m models\gemma-1.1-7b-it.Q4_K_M.gguf --prompt "Once upon a time"
Log start
main: build = 0 (unknown)
main: built with cc (GCC) 14.1.0 for i686-... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6228/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7546 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7546/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7546/comments | https://api.github.com/repos/ollama/ollama/issues/7546/events | https://github.com/ollama/ollama/issues/7546 | 2,640,226,786 | I_kwDOJ0Z1Ps6dXqni | 7,546 | libggml linked to wrong cuda version | {
"login": "jsurloppe",
"id": 20650010,
"node_id": "MDQ6VXNlcjIwNjUwMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/20650010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsurloppe",
"html_url": "https://github.com/jsurloppe",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-11-07T08:14:46 | 2024-11-08T09:29:38 | 2024-11-07T17:20:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I just upgraded to ollama 0.4.0, and loading a model fails with the following error:
```
/tmp/ollama2415219728/runners/cuda_v12/ollama_llama_server: error while loading shared libraries: libcublas.so.11: cannot open shared object file: No such file or directory
time=2024-11-07T08:55:32... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7546/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7546/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4382 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4382/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4382/comments | https://api.github.com/repos/ollama/ollama/issues/4382/events | https://github.com/ollama/ollama/pull/4382 | 2,291,494,082 | PR_kwDOJ0Z1Ps5vMDLt | 4,382 | Allow XDG user directories | {
"login": "noahgitsham",
"id": 73707948,
"node_id": "MDQ6VXNlcjczNzA3OTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/73707948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahgitsham",
"html_url": "https://github.com/noahgitsham",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 3 | 2024-05-12T20:44:25 | 2024-05-30T15:48:04 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4382",
"html_url": "https://github.com/ollama/ollama/pull/4382",
"diff_url": "https://github.com/ollama/ollama/pull/4382.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4382.patch",
"merged_at": null
} | Addresses #228. This is my first time writing Go, so please feel free to correct any bad code.
This change defaults to using the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) for the history file and private key files, which are currently generated in ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4382/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4382/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4516 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4516/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4516/comments | https://api.github.com/repos/ollama/ollama/issues/4516/events | https://github.com/ollama/ollama/issues/4516 | 2,304,203,304 | I_kwDOJ0Z1Ps6JV1oo | 4,516 | Ollama: running Vite in production mode fails | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706485225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eh6Q... | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 10 | 2024-05-18T18:01:16 | 2025-01-09T09:59:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm developing a web app for educational purposes. I use React + Vite. Until now, in development mode, I haven't faced major problems... But now I have tried to run it in production mode and got the error below:
```
vite v5.2.10 building for production...
[plugin:vite:resolve] [plugin vite:resolve] ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4516/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6340 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6340/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6340/comments | https://api.github.com/repos/ollama/ollama/issues/6340/events | https://github.com/ollama/ollama/pull/6340 | 2,463,805,804 | PR_kwDOJ0Z1Ps54QgT9 | 6,340 | Add new chat app LLMChat.co | {
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93... | [] | closed | false | null | [] | null | 3 | 2024-08-13T16:55:56 | 2024-09-23T13:40:19 | 2024-09-23T13:40:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6340",
"html_url": "https://github.com/ollama/ollama/pull/6340",
"diff_url": "https://github.com/ollama/ollama/pull/6340.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6340.patch",
"merged_at": null
} | null | {
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6340/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1668/comments | https://api.github.com/repos/ollama/ollama/issues/1668/events | https://github.com/ollama/ollama/issues/1668 | 2,053,380,142 | I_kwDOJ0Z1Ps56ZBgu | 1,668 | unexpected EOF Mac OS | {
"login": "bhaskoro-muthohar",
"id": 35159954,
"node_id": "MDQ6VXNlcjM1MTU5OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/35159954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaskoro-muthohar",
"html_url": "https://github.com/bhaskoro-muthohar",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2023-12-22T04:51:49 | 2024-05-18T14:15:29 | 2024-01-08T02:59:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I got this error:
```
> ollama run dolphin-mixtral:latest
pulling manifest
pulling bdb11b0699e0... 60% ▕██████████████████ ▏ 15 GB/ 26 GB 3.4 MB/s 52m23s
Error: max retries exceeded: unexpected EOF
```
Here is my `.ollama/logs/server.log`
[server.log](https://github.com/jmorganca/ollama/files/137484... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1668/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2414 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2414/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2414/comments | https://api.github.com/repos/ollama/ollama/issues/2414/events | https://github.com/ollama/ollama/issues/2414 | 2,125,922,000 | I_kwDOJ0Z1Ps5-tv7Q | 2,414 | MB | {
"login": "arghunter",
"id": 91099806,
"node_id": "MDQ6VXNlcjkxMDk5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/91099806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arghunter",
"html_url": "https://github.com/arghunter",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2024-02-08T19:27:30 | 2024-02-08T19:27:46 | 2024-02-08T19:27:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "arghunter",
"id": 91099806,
"node_id": "MDQ6VXNlcjkxMDk5ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/91099806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arghunter",
"html_url": "https://github.com/arghunter",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2414/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8675 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8675/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8675/comments | https://api.github.com/repos/ollama/ollama/issues/8675/events | https://github.com/ollama/ollama/issues/8675 | 2,819,497,807 | I_kwDOJ0Z1Ps6oDh9P | 8,675 | Download back always to 1% | {
"login": "fredroo",
"id": 6863089,
"node_id": "MDQ6VXNlcjY4NjMwODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6863089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredroo",
"html_url": "https://github.com/fredroo",
"followers_url": "https://api.github.com/users/fredroo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 8 | 2025-01-29T22:46:12 | 2025-01-30T11:36:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Tried with CMD, PowerShell, and Git Bash, but got the same error, with different models like llama3.3 and deepseek-r1:70b.
I have free space on the SSD OS disk C:\ and on the destination disk T:\Ollama\.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
v0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8675/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8675/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6988 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6988/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6988/comments | https://api.github.com/repos/ollama/ollama/issues/6988/events | https://github.com/ollama/ollama/pull/6988 | 2,551,498,739 | PR_kwDOJ0Z1Ps582IzU | 6,988 | llama: don't create extraneous directories | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-26T20:43:04 | 2024-09-26T21:05:34 | 2024-09-26T21:05:31 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6988",
"html_url": "https://github.com/ollama/ollama/pull/6988",
"diff_url": "https://github.com/ollama/ollama/pull/6988.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6988.patch",
"merged_at": "2024-09-26T21:05:31"
} | With the .WAIT, this shouldn't be necessary anymore; it was causing payload-processing glitches. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6988/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/607 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/607/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/607/comments | https://api.github.com/repos/ollama/ollama/issues/607/events | https://github.com/ollama/ollama/issues/607 | 1,913,919,962 | I_kwDOJ0Z1Ps5yFBna | 607 | `ollama -v` prints `0.0.0` in the latest docker images | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 0 | 2023-09-26T16:49:37 | 2023-09-29T18:30:27 | 2023-09-29T18:30:27 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/607/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7103 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7103/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7103/comments | https://api.github.com/repos/ollama/ollama/issues/7103/events | https://github.com/ollama/ollama/pull/7103 | 2,566,665,258 | PR_kwDOJ0Z1Ps59oy47 | 7,103 | llama: cgo ggml | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-10-04T15:41:35 | 2024-10-08T16:23:30 | 2024-10-08T15:53:59 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7103",
"html_url": "https://github.com/ollama/ollama/pull/7103",
"diff_url": "https://github.com/ollama/ollama/pull/7103.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7103.patch",
"merged_at": null
} | Replaced by #7140 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7103/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6409 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6409/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6409/comments | https://api.github.com/repos/ollama/ollama/issues/6409/events | https://github.com/ollama/ollama/issues/6409 | 2,472,392,863 | I_kwDOJ0Z1Ps6TXbif | 6,409 | End and Home buttons don't work in ollama in tmux | {
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivic... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-08-19T03:47:24 | 2024-12-02T21:51:45 | 2024-12-02T21:51:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Pressing End doesn't move the cursor to the end of the line; instead it inserts a '~' character.
The same happens with the Home key.
These are the TERM-related environment variables in tmux:
```
$ env | grep TERM
COLORTERM=truecolor
TERM_PROGRAM_VERSION=3.3a
TERM=tmux-256color
TERM_PROGRAM=tmux
`... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6409/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1175 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1175/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1175/comments | https://api.github.com/repos/ollama/ollama/issues/1175/events | https://github.com/ollama/ollama/pull/1175 | 1,999,475,799 | PR_kwDOJ0Z1Ps5fxPgf | 1,175 | Refactor Request Retry | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-11-17T16:16:06 | 2023-11-17T19:22:36 | 2023-11-17T19:22:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1175",
"html_url": "https://github.com/ollama/ollama/pull/1175",
"diff_url": "https://github.com/ollama/ollama/pull/1175.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1175.patch",
"merged_at": "2023-11-17T19:22:35"
} | The request retry logic is mostly in `download.go` and `upload.go`. This function is only meant to retry on authentication failure, so doing that multiple times is not needed.
- do not log `upload failure` on error; this function is also called on downloads
- do not log on request cancellation; this causes a cancel... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1175/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1628 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1628/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1628/comments | https://api.github.com/repos/ollama/ollama/issues/1628/events | https://github.com/ollama/ollama/issues/1628 | 2,050,327,694 | I_kwDOJ0Z1Ps56NYSO | 1,628 | [Feature Request] integrate PowerInfer as alternative to llama.cpp | {
"login": "jenningsloy318",
"id": 10169236,
"node_id": "MDQ6VXNlcjEwMTY5MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/10169236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jenningsloy318",
"html_url": "https://github.com/jenningsloy318",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2023-12-20T11:03:25 | 2024-07-24T05:59:42 | 2024-03-11T18:13:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I just found another inference engine, https://github.com/SJTU-IPADS/PowerInfer. It seems to have some advantages, but I haven't tested it. Could ollama integrate it? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1628/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1628/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7722 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7722/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7722/comments | https://api.github.com/repos/ollama/ollama/issues/7722/events | https://github.com/ollama/ollama/pull/7722 | 2,667,574,687 | PR_kwDOJ0Z1Ps6CNPY1 | 7,722 | openai: fix follow-on messages having "role": "assistant" | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | open | false | null | [] | null | 0 | 2024-11-18T08:04:33 | 2025-01-06T18:41:46 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7722",
"html_url": "https://github.com/ollama/ollama/pull/7722",
"diff_url": "https://github.com/ollama/ollama/pull/7722.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7722.patch",
"merged_at": null
} | Fixes https://github.com/ollama/ollama/issues/7626 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7722/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6236 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6236/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6236/comments | https://api.github.com/repos/ollama/ollama/issues/6236/events | https://github.com/ollama/ollama/issues/6236 | 2,453,873,359 | I_kwDOJ0Z1Ps6SQyLP | 6,236 | gpu not found in windows | {
"login": "showyoung",
"id": 5949457,
"node_id": "MDQ6VXNlcjU5NDk0NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5949457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/showyoung",
"html_url": "https://github.com/showyoung",
"followers_url": "https://api.github.com/users/sh... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 18 | 2024-08-07T16:23:10 | 2024-09-05T18:46:35 | 2024-09-05T18:46:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
A few days ago my ollama could still run on the GPU, but today it suddenly only uses the CPU. I tried reinstalling ollama, using an older version of ollama, and updating the graphics card driver, but I couldn't make ollama run on the GPU. Windows 11 22H2, the graphics card is a 3080, and the CPU is Intel.
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6236/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/869 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/869/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/869/comments | https://api.github.com/repos/ollama/ollama/issues/869/events | https://github.com/ollama/ollama/issues/869 | 1,955,397,440 | I_kwDOJ0Z1Ps50jP9A | 869 | API documentation link in the Homepage is broken | {
"login": "kumarana",
"id": 6807325,
"node_id": "MDQ6VXNlcjY4MDczMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6807325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumarana",
"html_url": "https://github.com/kumarana",
"followers_url": "https://api.github.com/users/kumar... | [] | closed | false | null | [] | null | 2 | 2023-10-21T08:57:29 | 2023-10-22T13:03:29 | 2023-10-21T15:58:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It should be https://github.com/jmorganca/ollama/blob/main/docs/api.md | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/869/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1860 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1860/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1860/comments | https://api.github.com/repos/ollama/ollama/issues/1860/events | https://github.com/ollama/ollama/issues/1860 | 2,071,300,606 | I_kwDOJ0Z1Ps57dYn- | 1,860 | [FEATURE] Add "mv" command + possibly add confirmation for "rm" | {
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-01-08T22:07:10 | 2024-03-22T01:16:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be nice to have a "mv" command, and it could probably just be implemented as a "cp" followed by an "rm".
It might also be a good idea to add a confirmation prompt for "rm", as I've accidentally removed a model a couple of times now. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1860/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5679 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5679/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5679/comments | https://api.github.com/repos/ollama/ollama/issues/5679/events | https://github.com/ollama/ollama/pull/5679 | 2,407,112,636 | PR_kwDOJ0Z1Ps51TWry | 5,679 | Add LLPhant to README.md | {
"login": "f-lombardo",
"id": 280709,
"node_id": "MDQ6VXNlcjI4MDcwOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/280709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f-lombardo",
"html_url": "https://github.com/f-lombardo",
"followers_url": "https://api.github.com/users/f... | [] | closed | false | null | [] | null | 0 | 2024-07-13T19:30:03 | 2024-11-21T08:54:27 | 2024-11-21T08:54:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5679",
"html_url": "https://github.com/ollama/ollama/pull/5679",
"diff_url": "https://github.com/ollama/ollama/pull/5679.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5679.patch",
"merged_at": "2024-11-21T08:54:26"
} | LLPhant is a PHP library that wraps many LLM services and supports Ollama.
https://github.com/theodo-group/LLPhant?tab=readme-ov-file#ollama | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5679/timeline | null | null | true |
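
Each row above is one GitHub issue or pull-request record following the standard GitHub REST issues schema. As a minimal sketch of how such a dump can be consumed — assuming it is packaged as JSON Lines, with the filename `issues.jsonl` purely as a placeholder — the rows can be filtered with plain Python:

```python
import json

# Placeholder filename; the actual export location/format is an assumption.
PATH = "issues.jsonl"

def iter_records(path):
    """Yield one issue/PR record per line of a JSON Lines export."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example query: titles of open feature requests that are not pull requests.
open_features = [
    rec["title"]
    for rec in iter_records(PATH)
    if rec["state"] == "open"
    and not rec["is_pull_request"]
    and any(lbl["name"] == "feature request" for lbl in rec["labels"])
]
print(f"{len(open_features)} open feature requests")
```

The field names used here (`state`, `labels`, `is_pull_request`, `title`) match the columns visible in the rows above; the file name and JSONL packaging are assumptions to adjust for the actual distribution.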