url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/6756 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6756/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6756/comments | https://api.github.com/repos/ollama/ollama/issues/6756/events | https://github.com/ollama/ollama/issues/6756 | 2,520,219,958 | I_kwDOJ0Z1Ps6WN4E2 | 6,756 | Yet another "segmentation fault" issue with AMD GPU | {
"login": "remon-nashid",
"id": 1994818,
"node_id": "MDQ6VXNlcjE5OTQ4MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1994818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remon-nashid",
"html_url": "https://github.com/remon-nashid",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 57 | 2024-09-11T16:39:22 | 2024-10-20T22:29:08 | 2024-10-12T16:56:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`Error: llama runner process has terminated: signal: segmentation fault (core dumped)`. It occurs while loading larger models that are still within VRAM capacity. Here I'm trying to load `command-r:35b-08-2024-q4_K_M` (**19GB**) on an RX 7900 XTX with **24GB** of VRAM. Smaller models loa... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6756/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/6756/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/696 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/696/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/696/comments | https://api.github.com/repos/ollama/ollama/issues/696/events | https://github.com/ollama/ollama/issues/696 | 1,925,653,139 | I_kwDOJ0Z1Ps5yxyKT | 696 | Offline Installation and Model Download | {
"login": "OguzcanOzdemir",
"id": 24637523,
"node_id": "MDQ6VXNlcjI0NjM3NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OguzcanOzdemir",
"html_url": "https://github.com/OguzcanOzdemir",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 12 | 2023-10-04T08:15:26 | 2024-12-05T12:13:27 | 2023-10-04T17:41:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I'm trying to install ollama on an offline Ubuntu computer. Due to the lack of an internet connection, I need guidance on how to perform this installation offline. Additionally, I would like to understand how to download and utilize models on this offline Ubuntu machine.
Here are the specific questions and... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/696/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/696/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2633 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2633/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2633/comments | https://api.github.com/repos/ollama/ollama/issues/2633/events | https://github.com/ollama/ollama/issues/2633 | 2,146,630,251 | I_kwDOJ0Z1Ps5_8vpr | 2,633 | How to update all models | {
"login": "meminens",
"id": 42714627,
"node_id": "MDQ6VXNlcjQyNzE0NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/42714627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meminens",
"html_url": "https://github.com/meminens",
"followers_url": "https://api.github.com/users/mem... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 7 | 2024-02-21T12:25:55 | 2024-09-22T15:07:28 | 2024-03-11T21:27:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Do I have to run `ollama pull <model name>` for each model downloaded? Is there a more automatic way to update all models at once? | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2633/timeline | null | completed | false |
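The update-all question above is commonly answered with a small loop over `ollama list`. A minimal sketch, assuming `ollama list` prints a header line followed by one row per model with the name in the first column; the sample output and model names below are illustrative, and the real pull is left as an echo since it needs a working install:

```shell
# Simulated `ollama list` output; in practice, pipe the real command instead.
list_output='NAME            ID              SIZE    MODIFIED
llama3:latest   365c0bd3c000    4.7 GB  2 days ago
qwen:0.5b       b5dc5e784f2a    394 MB  5 days ago'

# Skip the header row and keep only the first column (the model name).
models=$(printf '%s\n' "$list_output" | tail -n +2 | awk '{print $1}')

for m in $models; do
  echo "ollama pull $m"   # drop the echo to actually re-pull each model
done
```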
https://api.github.com/repos/ollama/ollama/issues/8001 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8001/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8001/comments | https://api.github.com/repos/ollama/ollama/issues/8001/events | https://github.com/ollama/ollama/issues/8001 | 2,725,701,792 | I_kwDOJ0Z1Ps6iduig | 8,001 | Add option to disable auto-completion | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2024-12-09T01:08:58 | 2024-12-09T01:09:41 | 2024-12-09T01:09:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be very helpful to have an option to disable the auto-completion feature. In some cases, it can be more of a hindrance than a help. Please consider adding a setting to turn it off.
Thanks! | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8001/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5409 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5409/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5409/comments | https://api.github.com/repos/ollama/ollama/issues/5409/events | https://github.com/ollama/ollama/pull/5409 | 2,384,281,872 | PR_kwDOJ0Z1Ps50GCpV | 5,409 | convert: only extract large files | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2024-07-01T16:44:07 | 2024-07-31T21:32:14 | 2024-07-31T21:32:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5409",
"html_url": "https://github.com/ollama/ollama/pull/5409",
"diff_url": "https://github.com/ollama/ollama/pull/5409.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5409.patch",
"merged_at": null
} | Many of the files needed during conversion don't need to be extracted and can be read directly from the zip. The only exception is the model weights. While it's possible to read these directly from the zip, the impact on performance is unacceptable (2m30s vs. 30s for gemma 2b). For these files, extract them when neede... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5409/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4192 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4192/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4192/comments | https://api.github.com/repos/ollama/ollama/issues/4192/events | https://github.com/ollama/ollama/pull/4192 | 2,279,925,563 | PR_kwDOJ0Z1Ps5ulis0 | 4,192 | feat: support registry basic auth | {
"login": "qcu266",
"id": 11624864,
"node_id": "MDQ6VXNlcjExNjI0ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/11624864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qcu266",
"html_url": "https://github.com/qcu266",
"followers_url": "https://api.github.com/users/qcu266/fo... | [] | closed | false | null | [] | null | 5 | 2024-05-06T03:02:16 | 2025-01-19T18:39:49 | 2024-12-27T06:06:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4192",
"html_url": "https://github.com/ollama/ollama/pull/4192",
"diff_url": "https://github.com/ollama/ollama/pull/4192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4192.patch",
"merged_at": null
} | Support pulling/pushing models from/to a private OCI registry.
relates: https://github.com/ollama/ollama/issues/2745 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4192/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4192/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6043 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6043/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6043/comments | https://api.github.com/repos/ollama/ollama/issues/6043/events | https://github.com/ollama/ollama/issues/6043 | 2,435,055,382 | I_kwDOJ0Z1Ps6RI_8W | 6,043 | Removing models from Ollama reverts the "last updated" tag | {
"login": "DuckyBlender",
"id": 42645784,
"node_id": "MDQ6VXNlcjQyNjQ1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuckyBlender",
"html_url": "https://github.com/DuckyBlender",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw... | closed | false | null | [] | null | 1 | 2024-07-29T10:34:18 | 2024-07-31T17:18:06 | 2024-07-31T17:18:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

Uploaded the model 8 hours ago, deleted it just now. It shows last updated 12 days ago.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6043/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6354 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6354/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6354/comments | https://api.github.com/repos/ollama/ollama/issues/6354/events | https://github.com/ollama/ollama/issues/6354 | 2,465,256,062 | I_kwDOJ0Z1Ps6S8NJ- | 6,354 | Embedding interface routing | {
"login": "xuzeyu91",
"id": 26290929,
"node_id": "MDQ6VXNlcjI2MjkwOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/26290929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuzeyu91",
"html_url": "https://github.com/xuzeyu91",
"followers_url": "https://api.github.com/users/xuz... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-08-14T08:46:52 | 2024-08-18T13:05:12 | 2024-08-14T16:54:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The chat interface is currently compatible with OpenAI in terms of interface routing.
Would you consider making the embedding interface's routing also compatible with OpenAI's format, so that it is more user-friendly when called by third-party applications? Otherwise, we need to focus on
```
http://host/v1/embeddin... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6354/timeline | null | completed | false |
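The routing compatibility requested above can be illustrated side by side. The host, port, model name, and the native endpoint path below are assumptions for the sketch, and the curl commands are only printed, since sending them requires a running server:

```shell
# Native Ollama embedding endpoint vs. the OpenAI-compatible route that
# third-party clients expect (paths illustrative).
native_url="http://localhost:11434/api/embed"
openai_url="http://localhost:11434/v1/embeddings"

# An OpenAI-format request body, which existing OpenAI clients already emit.
body='{"model": "all-minilm", "input": "why is the sky blue?"}'

echo "curl $openai_url -d '$body'"
echo "curl $native_url -d '$body'"
```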
https://api.github.com/repos/ollama/ollama/issues/1690 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1690/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1690/comments | https://api.github.com/repos/ollama/ollama/issues/1690/events | https://github.com/ollama/ollama/pull/1690 | 2,054,820,785 | PR_kwDOJ0Z1Ps5isvHt | 1,690 | Added LangChain4j links | {
"login": "langchain4j",
"id": 132277850,
"node_id": "O_kgDOB-JmWg",
"avatar_url": "https://avatars.githubusercontent.com/u/132277850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langchain4j",
"html_url": "https://github.com/langchain4j",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-12-23T15:30:30 | 2024-02-22T19:09:09 | 2024-02-22T19:09:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1690",
"html_url": "https://github.com/ollama/ollama/pull/1690",
"diff_url": "https://github.com/ollama/ollama/pull/1690.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1690.patch",
"merged_at": "2024-02-22T19:09:08"
} | Hi, I would appreciate it a lot if you could add LangChain4j links to your README; we have a nice integration with Ollama! | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1690/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2451 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2451/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2451/comments | https://api.github.com/repos/ollama/ollama/issues/2451/events | https://github.com/ollama/ollama/issues/2451 | 2,129,162,086 | I_kwDOJ0Z1Ps5-6G9m | 2,451 | [FEATURE] Add support for Intel Xeon (Sapphire and Emerald Rapids) accelerators and AI features such as AMX and AVX 512. | {
"login": "scouzi1966",
"id": 58265937,
"node_id": "MDQ6VXNlcjU4MjY1OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/58265937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scouzi1966",
"html_url": "https://github.com/scouzi1966",
"followers_url": "https://api.github.com/use... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-02-11T20:46:18 | 2024-05-11T00:37:16 | 2024-05-11T00:37:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Note that Intel is trying to demystify AVX512 with an AVX 10 standard. But they are the same.
AVX512
https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html
AMX
https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/overview.html
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2451/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8182 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8182/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8182/comments | https://api.github.com/repos/ollama/ollama/issues/8182/events | https://github.com/ollama/ollama/issues/8182 | 2,752,630,584 | I_kwDOJ0Z1Ps6kEc84 | 8,182 | {"error":"POST predict: Post \"http://127.0.0.1:33603/completion\": EOF"} | {
"login": "forReason",
"id": 12736950,
"node_id": "MDQ6VXNlcjEyNzM2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/12736950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forReason",
"html_url": "https://github.com/forReason",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-12-20T12:21:10 | 2025-01-13T01:42:50 | 2025-01-13T01:42:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I get the following error:
{"error":"POST predict: Post \"http://127.0.0.1:33603/completion\": EOF"}
This seems to happen at longer context lengths.
I think I can circumvent this by setting options["use_mlock"] = false, at the cost of roughly a 4x or greater speed loss.
It seems like the gpu ... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8182/timeline | null | completed | false |
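The `use_mlock` workaround mentioned in the issue above can be written out as a request body for the generate endpoint. The model name and address are placeholders, and the curl line is only printed here because it needs a running server:

```shell
# Disable memory locking for a single request via the options object.
body='{"model": "llama3", "prompt": "hello", "options": {"use_mlock": false}}'
echo "curl http://localhost:11434/api/generate -d '$body'"
```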
https://api.github.com/repos/ollama/ollama/issues/3867 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3867/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3867/comments | https://api.github.com/repos/ollama/ollama/issues/3867/events | https://github.com/ollama/ollama/issues/3867 | 2,260,317,111 | I_kwDOJ0Z1Ps6GubO3 | 3,867 | Ctrl+D to exit is not stopping service | {
"login": "nishithshowri006",
"id": 58651995,
"node_id": "MDQ6VXNlcjU4NjUxOTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/58651995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nishithshowri006",
"html_url": "https://github.com/nishithshowri006",
"followers_url": "https://... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-04-24T04:56:11 | 2024-05-04T23:53:05 | 2024-05-04T23:52:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have observed that when we Ctrl+D or exit the chat interface after running a model, it does not stop the ollama process. This in turn blocks RAM and VRAM for other tasks. I observed this behavior in the WSL and Windows versions of ollama.
### OS
Windows, WSL2
### GPU
Nvidia
### CPU
Inte... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3867/timeline | null | completed | false |
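For the behavior described in the issue above: exiting the REPL leaves the model loaded for its keep-alive window rather than stopping the server. One way to free VRAM without killing the service is to ask the server to unload the model by sending a request with `keep_alive` set to 0. The model name and address are placeholders, and the curl line is printed rather than executed:

```shell
# Request immediate unload of a loaded model (no prompt, keep_alive 0).
body='{"model": "llama3", "keep_alive": 0}'
echo "curl http://localhost:11434/api/generate -d '$body'"
```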
https://api.github.com/repos/ollama/ollama/issues/781 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/781/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/781/comments | https://api.github.com/repos/ollama/ollama/issues/781/events | https://github.com/ollama/ollama/pull/781 | 1,942,392,202 | PR_kwDOJ0Z1Ps5cwxib | 781 | improve api error handling | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-13T17:54:56 | 2023-10-13T20:57:11 | 2023-10-13T20:57:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/781",
"html_url": "https://github.com/ollama/ollama/pull/781",
"diff_url": "https://github.com/ollama/ollama/pull/781.diff",
"patch_url": "https://github.com/ollama/ollama/pull/781.patch",
"merged_at": "2023-10-13T20:57:10"
} | - remove new lines from llama.cpp error messages relayed to client
- check api option types and return error on wrong type
- change num layers from 95% VRAM to 92% VRAM | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/781/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/234 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/234/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/234/comments | https://api.github.com/repos/ollama/ollama/issues/234/events | https://github.com/ollama/ollama/pull/234 | 1,826,927,325 | PR_kwDOJ0Z1Ps5WroYw | 234 | use max scan token size to hold large objects | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-28T18:44:44 | 2023-07-28T19:03:52 | 2023-07-28T19:03:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/234",
"html_url": "https://github.com/ollama/ollama/pull/234",
"diff_url": "https://github.com/ollama/ollama/pull/234.diff",
"patch_url": "https://github.com/ollama/ollama/pull/234.patch",
"merged_at": "2023-07-28T19:03:51"
} | The internal buffer used by the scanner is too small to hold Meta's license, so allocate the maximum size set in bufio. It could potentially be higher, but that's not necessary right now | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/234/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3504 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3504/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3504/comments | https://api.github.com/repos/ollama/ollama/issues/3504/events | https://github.com/ollama/ollama/issues/3504 | 2,228,152,315 | I_kwDOJ0Z1Ps6Ezuf7 | 3,504 | I can't pull any models | {
"login": "jsrcode",
"id": 139555610,
"node_id": "U_kgDOCFFzGg",
"avatar_url": "https://avatars.githubusercontent.com/u/139555610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsrcode",
"html_url": "https://github.com/jsrcode",
"followers_url": "https://api.github.com/users/jsrcode/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 44 | 2024-04-05T14:18:57 | 2025-01-29T17:42:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1712324131": net/http: TLS handshake timeout
### What did you e... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3504/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3504/timeline | null | null | false |
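A TLS handshake timeout during `ollama pull`, as in the issue above, usually points at the network path (firewall or proxy) rather than the model itself. If traffic must go through a proxy, the server process needs to see the proxy variables; the value below is a placeholder for whatever your environment requires:

```shell
# Make outbound HTTPS from the ollama server go through a proxy.
# (Placeholder address; set this in the environment of the server process.)
export HTTPS_PROXY="http://proxy.example.com:3128"
echo "HTTPS_PROXY=$HTTPS_PROXY"
```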
https://api.github.com/repos/ollama/ollama/issues/7756 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7756/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7756/comments | https://api.github.com/repos/ollama/ollama/issues/7756/events | https://github.com/ollama/ollama/pull/7756 | 2,674,410,211 | PR_kwDOJ0Z1Ps6CeV0j | 7,756 | Update README.md | {
"login": "jonathanhecl",
"id": 1691623,
"node_id": "MDQ6VXNlcjE2OTE2MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1691623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanhecl",
"html_url": "https://github.com/jonathanhecl",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 0 | 2024-11-20T05:04:44 | 2024-11-20T05:31:43 | 2024-11-20T05:31:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7756",
"html_url": "https://github.com/ollama/ollama/pull/7756",
"diff_url": "https://github.com/ollama/ollama/pull/7756.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7756.patch",
"merged_at": "2024-11-20T05:31:43"
} | Gollama Library | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7756/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1744 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1744/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1744/comments | https://api.github.com/repos/ollama/ollama/issues/1744/events | https://github.com/ollama/ollama/issues/1744 | 2,060,803,482 | I_kwDOJ0Z1Ps561V2a | 1,744 | 💡 Idea/Suggestion: Rich API Documentation | {
"login": "amithkoujalgi",
"id": 1876165,
"node_id": "MDQ6VXNlcjE4NzYxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amithkoujalgi",
"html_url": "https://github.com/amithkoujalgi",
"followers_url": "https://api.github.... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
... | open | false | null | [] | null | 1 | 2023-12-30T17:13:28 | 2024-11-06T19:03:58 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello @jmorganca.
First of all, thank you for your amazing work! 🤩 I have been using Ollama for a while now and I'm really enjoying it.
I was wondering if we could introduce an API documentation website (right from GitHub using GH Pages). Along with this, we could also have a GitHub Actions workflow set up to auto-b...
"url": "https://api.github.com/repos/ollama/ollama/issues/1744/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1744/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6538 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6538/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6538/comments | https://api.github.com/repos/ollama/ollama/issues/6538/events | https://github.com/ollama/ollama/pull/6538 | 2,490,608,317 | PR_kwDOJ0Z1Ps55ox0g | 6,538 | throw an error when encountering unsupport tensor sizes | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-08-28T00:31:38 | 2024-08-28T00:54:06 | 2024-08-28T00:54:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6538",
"html_url": "https://github.com/ollama/ollama/pull/6538",
"diff_url": "https://github.com/ollama/ollama/pull/6538.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6538.patch",
"merged_at": "2024-08-28T00:54:04"
} | The `bitsandbytes` package creates an 8 bit quantized version of a model which is unsupported by the llama.cpp back end. It does this by creating two tensors for each of the layers which look like:
```
model.layers.0.mlp.down_proj.weight dtype=I8 shape=[4096, 14336]
model.layers.0.mlp.down_proj.weight_format dtype... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6538/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4117 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4117/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4117/comments | https://api.github.com/repos/ollama/ollama/issues/4117/events | https://github.com/ollama/ollama/issues/4117 | 2,276,837,880 | I_kwDOJ0Z1Ps6Htcn4 | 4,117 | 0.1.33 on Windows not using GPU | {
"login": "Eisaichen",
"id": 12467320,
"node_id": "MDQ6VXNlcjEyNDY3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12467320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eisaichen",
"html_url": "https://github.com/Eisaichen",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g... | closed | false | null | [] | null | 3 | 2024-05-03T03:42:57 | 2024-05-04T21:23:14 | 2024-05-03T04:15:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After upgrading to [v0.1.33](https://github.com/ollama/ollama/releases/tag/v0.1.33), Ollama is no longer using my GPU; the CPU is used instead.
On the same PC, I ran 0.1.33 and the older 0.1.32 side by side: 0.1.32 runs on the GPU just fine, while 0.1.33 does not.
After investigating the lo... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4117/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/233 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/233/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/233/comments | https://api.github.com/repos/ollama/ollama/issues/233/events | https://github.com/ollama/ollama/issues/233 | 1,825,401,154 | I_kwDOJ0Z1Ps5szWlC | 233 | Descriptions for 3rd Party Imports | {
"login": "eagleEggs",
"id": 29800532,
"node_id": "MDQ6VXNlcjI5ODAwNTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/29800532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eagleEggs",
"html_url": "https://github.com/eagleEggs",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-07-27T23:56:15 | 2023-08-30T21:40:30 | 2023-08-30T21:40:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just wanted to share that it would be nice to have in the docs a list of the imported 3rd party libraries for transparency - Really just for the smaller obscure ones. One of them threw me down a rabbit hole looking over their code as it was a small repo with not much activity. Just a suggestion. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/233/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4446 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4446/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4446/comments | https://api.github.com/repos/ollama/ollama/issues/4446/events | https://github.com/ollama/ollama/issues/4446 | 2,297,222,153 | I_kwDOJ0Z1Ps6I7NQJ | 4,446 | JSON Mode + Streaming + OpenAI API + Llama3 = never sends STOP, and a lot of whitespace after the JSON | {
"login": "odrobnik",
"id": 333270,
"node_id": "MDQ6VXNlcjMzMzI3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odrobnik",
"html_url": "https://github.com/odrobnik",
"followers_url": "https://api.github.com/users/odrobni... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-05-15T08:34:39 | 2024-12-05T00:50:01 | 2024-12-05T00:50:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Without JSON Mode, the last few lines of the stream of Chunk objects are:
```
...
data: {"id":"chatcmpl-273","object":"chat.completion.chunk","created":1715761661,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":" adventures"},"fini... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4446/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3851 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3851/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3851/comments | https://api.github.com/repos/ollama/ollama/issues/3851/events | https://github.com/ollama/ollama/issues/3851 | 2,259,624,265 | I_kwDOJ0Z1Ps6GryFJ | 3,851 | Why Ollama is so terribly slow when I set format="json" | {
"login": "marksalpeter",
"id": 1033500,
"node_id": "MDQ6VXNlcjEwMzM1MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1033500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marksalpeter",
"html_url": "https://github.com/marksalpeter",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | null | [] | null | 6 | 2024-04-23T19:32:01 | 2024-11-06T17:41:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is a duplicate of #3154, which was closed, I'm assuming, by mistake.
Inference with the `format="json"` param is 10x slower than regular inference when additional context is included
A prompt like this takes ~24s to return on an NVIDIA T4 with CUDA enabled and ` format="json"`. The... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3851/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3851/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7540 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7540/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7540/comments | https://api.github.com/repos/ollama/ollama/issues/7540/events | https://github.com/ollama/ollama/issues/7540 | 2,639,909,948 | I_kwDOJ0Z1Ps6dWdQ8 | 7,540 | ollama blocking itself from binding port it's already using...? | {
"login": "gearskullguy",
"id": 32692076,
"node_id": "MDQ6VXNlcjMyNjkyMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/32692076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gearskullguy",
"html_url": "https://github.com/gearskullguy",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjg... | closed | false | null | [] | null | 1 | 2024-11-07T05:24:49 | 2024-12-02T15:19:58 | 2024-12-02T15:19:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've just been going through the current instructions, and this is a really weird error to get after seeing all the "Pulled" messages:
$ sudo docker compose --profile gpu-nvidia up
[+] Running 38/38
... Pulled etc ... 0.0s
Atta... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7540/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7840 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7840/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7840/comments | https://api.github.com/repos/ollama/ollama/issues/7840/events | https://github.com/ollama/ollama/issues/7840 | 2,694,079,352 | I_kwDOJ0Z1Ps6glGN4 | 7,840 | Please add a way to view the request after a template is applied | {
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-26T10:27:50 | 2024-11-26T19:43:17 | 2024-11-26T19:43:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would be nice to view raw requests passing between ollama and model. This would be useful to debug templates (for example to debug requests with tools) and for education purposes to see how things works low level. | {
"login": "vt-alt",
"id": 36664211,
"node_id": "MDQ6VXNlcjM2NjY0MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36664211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vt-alt",
"html_url": "https://github.com/vt-alt",
"followers_url": "https://api.github.com/users/vt-alt/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7840/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4344 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4344/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4344/comments | https://api.github.com/repos/ollama/ollama/issues/4344/events | https://github.com/ollama/ollama/issues/4344 | 2,290,692,929 | I_kwDOJ0Z1Ps6IiTNB | 4,344 | failed to run llama3 on macos | {
"login": "Mercccccc",
"id": 91967966,
"node_id": "U_kgDOBXtR3g",
"avatar_url": "https://avatars.githubusercontent.com/u/91967966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mercccccc",
"html_url": "https://github.com/Mercccccc",
"followers_url": "https://api.github.com/users/Merccccc... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-11T05:52:37 | 2024-05-11T06:00:11 | 2024-05-11T05:58:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama run llama3
pulling manifest
pulling 00e1317cbf74... 100% ▕██████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕██████████████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕████████████████████████████████... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4344/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4447 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4447/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4447/comments | https://api.github.com/repos/ollama/ollama/issues/4447/events | https://github.com/ollama/ollama/issues/4447 | 2,297,232,264 | I_kwDOJ0Z1Ps6I7PuI | 4,447 | NumCtx can't change, just 2048 | {
"login": "jianwen-wang",
"id": 137679484,
"node_id": "U_kgDOCDTSfA",
"avatar_url": "https://avatars.githubusercontent.com/u/137679484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianwen-wang",
"html_url": "https://github.com/jianwen-wang",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-15T08:39:36 | 2024-08-02T03:06:35 | 2024-05-15T15:00:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the LLaMA 3 model with OpenAI’s /v1/chat/completions API, it was discovered that the request message can’t exceed 2k tokens, while the LLaMA 3 8B model inherently supports up to 8K tokens.
Upon examining the code, it was found that in api/types.go at line 475, DefaultOptions conta... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4447/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4447/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/450 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/450/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/450/comments | https://api.github.com/repos/ollama/ollama/issues/450/events | https://github.com/ollama/ollama/pull/450 | 1,877,579,278 | PR_kwDOJ0Z1Ps5ZWP5q | 450 | update readme | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-01T14:55:13 | 2023-09-01T15:21:51 | 2023-09-01T15:21:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/450",
"html_url": "https://github.com/ollama/ollama/pull/450",
"diff_url": "https://github.com/ollama/ollama/pull/450.diff",
"patch_url": "https://github.com/ollama/ollama/pull/450.patch",
"merged_at": "2023-09-01T15:21:50"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/450/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6221 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6221/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6221/comments | https://api.github.com/repos/ollama/ollama/issues/6221/events | https://github.com/ollama/ollama/issues/6221 | 2,452,290,027 | I_kwDOJ0Z1Ps6SKvnr | 6,221 | Mistake tools calls +-every user prompt. | {
"login": "websharik",
"id": 33082364,
"node_id": "MDQ6VXNlcjMzMDgyMzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/33082364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/websharik",
"html_url": "https://github.com/websharik",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-08-07T04:07:30 | 2024-08-07T04:07:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Model: `llama3.1:8b`
TOOLS:
```json
[
{
"type": "function",
"function": {
"name": "base64DecodeTool",
"description": "Tool to decode base64 string.",
"parameters": {
"type": "object",
"properties": {
"str": {
"type"... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6221/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4214 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4214/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4214/comments | https://api.github.com/repos/ollama/ollama/issues/4214/events | https://github.com/ollama/ollama/issues/4214 | 2,281,979,049 | I_kwDOJ0Z1Ps6IBDyp | 4,214 | (Prune unwanted dangling models) Prune data from models which where partially downloaded | {
"login": "arthurGrigo",
"id": 35745065,
"node_id": "MDQ6VXNlcjM1NzQ1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/35745065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthurGrigo",
"html_url": "https://github.com/arthurGrigo",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-06T23:59:11 | 2024-05-07T16:44:39 | 2024-05-07T16:44:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Sometimes it happens that a user starts to download the wrong model or runs out of space (https://github.com/ollama/ollama/issues/2497).
The partially downloaded model is not visible through 'ollama list' after canceling the download and therefore cannot be removed using 'ollama rm <model>'.
At the moment users ha... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4214/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1958 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1958/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1958/comments | https://api.github.com/repos/ollama/ollama/issues/1958/events | https://github.com/ollama/ollama/pull/1958 | 2,079,367,384 | PR_kwDOJ0Z1Ps5j9BJJ | 1,958 | ci: update setup-go action | {
"login": "purificant",
"id": 4669013,
"node_id": "MDQ6VXNlcjQ2NjkwMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4669013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/purificant",
"html_url": "https://github.com/purificant",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2024-01-12T17:41:11 | 2024-01-18T22:53:37 | 2024-01-18T22:53:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1958",
"html_url": "https://github.com/ollama/ollama/pull/1958",
"diff_url": "https://github.com/ollama/ollama/pull/1958.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1958.patch",
"merged_at": "2024-01-18T22:53:36"
} | This PR updates [actions/setup-go](https://github.com/actions/setup-go/releases/tag/v5.0.0) ~~and tests with go 1.21~~
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1958/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6629 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6629/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6629/comments | https://api.github.com/repos/ollama/ollama/issues/6629/events | https://github.com/ollama/ollama/issues/6629 | 2,504,550,794 | I_kwDOJ0Z1Ps6VSGmK | 6,629 | Fail to Convert Huggingface Llama3.1 with ollama create | {
"login": "YueChenkkk",
"id": 36752416,
"node_id": "MDQ6VXNlcjM2NzUyNDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/36752416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YueChenkkk",
"html_url": "https://github.com/YueChenkkk",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 4 | 2024-09-04T07:39:23 | 2024-09-10T18:44:26 | 2024-09-10T18:44:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I downloaded the meta-llama-3.1-8b model from Hugging Face. [https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct]
I also installed ollama-linux-amd64 (version 0.3.6) manually.
When I start the service with `ollama serve &`, nothing seems to go wrong.
But after I build a Mod... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6629/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7671 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7671/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7671/comments | https://api.github.com/repos/ollama/ollama/issues/7671/events | https://github.com/ollama/ollama/pull/7671 | 2,660,148,716 | PR_kwDOJ0Z1Ps6B-d_s | 7,671 | build: add sync-clean target | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-11-14T21:42:01 | 2024-11-20T22:12:40 | 2024-11-20T22:12:37 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7671",
"html_url": "https://github.com/ollama/ollama/pull/7671",
"diff_url": "https://github.com/ollama/ollama/pull/7671.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7671.patch",
"merged_at": null
} | Helpful target to ensure all vendored files are fresh
Since the sync target relies on file timestamps, jumping back and forth between different upstream commits can confuse it and cause it to miss syncing the "new" content because its timestamps are older. This gives a quick way to reset all the vendored files to avoid potential... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7671/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7091 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7091/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7091/comments | https://api.github.com/repos/ollama/ollama/issues/7091/events | https://github.com/ollama/ollama/issues/7091 | 2,563,852,474 | I_kwDOJ0Z1Ps6Y0Ui6 | 7,091 | What are ollama doing? | {
"login": "Molnfront",
"id": 935328,
"node_id": "MDQ6VXNlcjkzNTMyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/935328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Molnfront",
"html_url": "https://github.com/Molnfront",
"followers_url": "https://api.github.com/users/Moln... | [] | closed | false | null | [] | null | 7 | 2024-10-03T11:48:42 | 2024-10-03T15:46:58 | 2024-10-03T15:46:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I let my model Gemma 2 work in the background because watching it write in the terminal was like watching paint dry. It took all the memory and CPU and lots of GPU. But I suspect nothing is being done, because now the ollama process is only at 0.4% and the ollama server is at 0. I ask if it's working on the task and... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7091/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6445 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6445/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6445/comments | https://api.github.com/repos/ollama/ollama/issues/6445/events | https://github.com/ollama/ollama/pull/6445 | 2,476,001,382 | PR_kwDOJ0Z1Ps544Ie5 | 6,445 | Update manual instructions with discrete ROCm bundle | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-20T15:56:57 | 2024-08-27T20:42:31 | 2024-08-27T20:42:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6445",
"html_url": "https://github.com/ollama/ollama/pull/6445",
"diff_url": "https://github.com/ollama/ollama/pull/6445.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6445.patch",
"merged_at": "2024-08-27T20:42:28"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6445/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/677 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/677/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/677/comments | https://api.github.com/repos/ollama/ollama/issues/677/events | https://github.com/ollama/ollama/issues/677 | 1,922,588,766 | I_kwDOJ0Z1Ps5ymGBe | 677 | Connecting the client to the server | {
"login": "skorokithakis",
"id": 23648,
"node_id": "MDQ6VXNlcjIzNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/23648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skorokithakis",
"html_url": "https://github.com/skorokithakis",
"followers_url": "https://api.github.com/user... | [] | closed | false | null | [] | null | 1 | 2023-10-02T20:00:29 | 2023-10-02T20:08:25 | 2023-10-02T20:08:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any way to specify the hostname of the server for `ollama run` to connect to? I can use the HTTP API, but what about the cli client? I haven't been able to find this info in the docs. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/677/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7217 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7217/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7217/comments | https://api.github.com/repos/ollama/ollama/issues/7217/events | https://github.com/ollama/ollama/pull/7217 | 2,590,088,318 | PR_kwDOJ0Z1Ps5-vcMr | 7,217 | Add arm64 cuda jetpack variants | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 7 | 2024-10-15T22:39:07 | 2024-12-30T09:52:50 | 2024-11-12T18:31:52 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7217",
"html_url": "https://github.com/ollama/ollama/pull/7217",
"diff_url": "https://github.com/ollama/ollama/pull/7217.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7217.patch",
"merged_at": "2024-11-12T18:31:52"
} | This adds 2 new variants for the arm64 build to support NVIDIA Jetson systems based on JetPack 5 and 6. JetPack 4 is too old to be built with our toolchain (the older CUDA requires an old gcc that can't build llama.cpp) and will remain unsupported.
The sbsa discrete GPU cuda libraries we bundle in the existing arm6... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7217/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6378 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6378/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6378/comments | https://api.github.com/repos/ollama/ollama/issues/6378/events | https://github.com/ollama/ollama/pull/6378 | 2,468,874,640 | PR_kwDOJ0Z1Ps54gY5Q | 6,378 | Add confichat to README.md | {
"login": "1runeberg",
"id": 17371351,
"node_id": "MDQ6VXNlcjE3MzcxMzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/17371351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1runeberg",
"html_url": "https://github.com/1runeberg",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2024-08-15T20:16:19 | 2024-09-04T21:26:03 | 2024-09-04T21:26:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6378",
"html_url": "https://github.com/ollama/ollama/pull/6378",
"diff_url": "https://github.com/ollama/ollama/pull/6378.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6378.patch",
"merged_at": "2024-09-04T21:26:02"
} | - Added ConfiChat to Community Integration > Web & Desktop
- Added ConfiChat to Community Integration > Mobile | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6378/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3803 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3803/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3803/comments | https://api.github.com/repos/ollama/ollama/issues/3803/events | https://github.com/ollama/ollama/issues/3803 | 2,255,195,626 | I_kwDOJ0Z1Ps6Ga43q | 3,803 | Does not produce same results via curl API for the same model. | {
"login": "MathematicianOnGithub",
"id": 138249377,
"node_id": "U_kgDOCD2EoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/138249377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathematicianOnGithub",
"html_url": "https://github.com/MathematicianOnGithub",
"followers_url": ... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-04-21T17:59:25 | 2024-04-26T12:01:19 | 2024-04-26T12:01:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`export OLLAMA_HOST="127.0.0.1:3336"; ollama create tagger -f tagger_Modelfile && ollama run tagger`
Running the command below does NOT produce the same result as running the command above in the terminal CLI.
```
curl http://localhost:3336/api/generate -d '{
"model": "tagger",
"prompt": "hello"... | {
"login": "MathematicianOnGithub",
"id": 138249377,
"node_id": "U_kgDOCD2EoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/138249377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathematicianOnGithub",
"html_url": "https://github.com/MathematicianOnGithub",
"followers_url": ... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3803/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2728 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2728/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2728/comments | https://api.github.com/repos/ollama/ollama/issues/2728/events | https://github.com/ollama/ollama/pull/2728 | 2,152,283,257 | PR_kwDOJ0Z1Ps5n0ZrB | 2,728 | feat: implement OpenAI model listing | {
"login": "da-z",
"id": 3681019,
"node_id": "MDQ6VXNlcjM2ODEwMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/da-z",
"html_url": "https://github.com/da-z",
"followers_url": "https://api.github.com/users/da-z/followers",
... | [] | closed | false | null | [] | null | 2 | 2024-02-24T12:14:43 | 2024-09-05T02:57:05 | 2024-09-05T02:57:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2728",
"html_url": "https://github.com/ollama/ollama/pull/2728",
"diff_url": "https://github.com/ollama/ollama/pull/2728.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2728.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2728/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/897/comments | https://api.github.com/repos/ollama/ollama/issues/897/events | https://github.com/ollama/ollama/pull/897 | 1,960,140,405 | PR_kwDOJ0Z1Ps5dr_NY | 897 | allow for a configurable ollama model storage directory | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 27 | 2023-10-24T21:53:27 | 2024-06-19T04:14:47 | 2023-10-27T14:19:59 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/897",
"html_url": "https://github.com/ollama/ollama/pull/897",
"diff_url": "https://github.com/ollama/ollama/pull/897.diff",
"patch_url": "https://github.com/ollama/ollama/pull/897.patch",
"merged_at": "2023-10-27T14:19:59"
} | - set `OLLAMA_MODELS` in the environment that ollama is running in to change where models are stored
- update docs
```bash
$ OLLAMA_MODELS=/Users/bruce/ollama_models ollama serve
# store models in /Users/bruce/ollama_models
```
Resolves ~#228~ #153
I'll hold off on merging this until #847 is in to avoid ca... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/897/reactions",
"total_count": 21,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 6,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/897/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8461 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8461/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8461/comments | https://api.github.com/repos/ollama/ollama/issues/8461/events | https://github.com/ollama/ollama/issues/8461 | 2,793,832,386 | I_kwDOJ0Z1Ps6mhn_C | 8,461 | Maintain Object Key Order in JSON Schema Outputs | {
"login": "ElliottStorey",
"id": 70775866,
"node_id": "MDQ6VXNlcjcwNzc1ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/70775866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ElliottStorey",
"html_url": "https://github.com/ElliottStorey",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-01-16T21:42:16 | 2025-01-16T21:42:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, when generating JSON outputs based on a provided schema, the keys in objects do not retain the order specified in the schema. This behavior differs from OpenAI's implementation, where the order of keys is preserved as defined. Maintaining the specified key order is crucial for applications that rely on consi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8461/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2591 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2591/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2591/comments | https://api.github.com/repos/ollama/ollama/issues/2591/events | https://github.com/ollama/ollama/issues/2591 | 2,142,264,088 | I_kwDOJ0Z1Ps5_sFsY | 2,591 | Failure after download via curl | {
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/follower... | [] | closed | false | null | [] | null | 1 | 2024-02-19T12:30:36 | 2024-02-19T13:40:13 | 2024-02-19T13:40:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama cannot be started after downloading it via curl. I received the following message:
```
Warning: Failed to open the file /tmp/tmp.T4lmv4bro6/ollama: No such file or
Warning: directory
curl: (23) Failure writing output to destination
``` | {
"login": "krenax",
"id": 127540387,
"node_id": "U_kgDOB5ocow",
"avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krenax",
"html_url": "https://github.com/krenax",
"followers_url": "https://api.github.com/users/krenax/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2591/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5705 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5705/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5705/comments | https://api.github.com/repos/ollama/ollama/issues/5705/events | https://github.com/ollama/ollama/pull/5705 | 2,409,155,752 | PR_kwDOJ0Z1Ps51aNFf | 5,705 | Enable windows error dialog for subprocess | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-07-15T16:31:31 | 2024-07-26T21:49:37 | 2024-07-26T21:49:34 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5705",
"html_url": "https://github.com/ollama/ollama/pull/5705",
"diff_url": "https://github.com/ollama/ollama/pull/5705.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5705.patch",
"merged_at": "2024-07-26T21:49:34"
} | Make sure if something goes wrong spawning the process, the user gets
enough info to be able to try to self correct, or at least file a bug
with details so we can fix it. Once the process starts, we immediately
change back to the recommended setting to prevent the blocking dialog.
This ensures if the model fails t... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5705/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4620 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4620/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4620/comments | https://api.github.com/repos/ollama/ollama/issues/4620/events | https://github.com/ollama/ollama/issues/4620 | 2,316,133,442 | I_kwDOJ0Z1Ps6KDWRC | 4,620 | Llava 1.6 34B fp16: refuses to answer questions on forms or hallucinates, when official Llava 1.6 34B demo does answer them perfectly | {
"login": "ChristianWeyer",
"id": 888718,
"node_id": "MDQ6VXNlcjg4ODcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristianWeyer",
"html_url": "https://github.com/ChristianWeyer",
"followers_url": "https://api.github... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-05-24T19:32:32 | 2024-05-24T19:38:17 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hey all,
I am using https://ollama.com/library/llava:34b-v1.6-fp16.
When asking questions about data in a form (see attached, it is a public sample), the model refuses to answer them or it hallucinates.
```
ollama run llava:34b-v1.6-fp16
>>> How is the ending balance? ./demo.png
Added ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4620/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7959 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7959/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7959/comments | https://api.github.com/repos/ollama/ollama/issues/7959/events | https://github.com/ollama/ollama/issues/7959 | 2,721,519,206 | I_kwDOJ0Z1Ps6iNxZm | 7,959 | FROM ./vicuna-33b.Q4_0.gguf | {
"login": "enzoxic",
"id": 157711992,
"node_id": "U_kgDOCWZ-eA",
"avatar_url": "https://avatars.githubusercontent.com/u/157711992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoxic",
"html_url": "https://github.com/enzoxic",
"followers_url": "https://api.github.com/users/enzoxic/foll... | [] | closed | false | null | [] | null | 2 | 2024-12-05T21:58:22 | 2024-12-06T09:04:06 | 2024-12-05T22:07:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/ollama/ollama/tree/main/llama | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7959/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6931 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6931/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6931/comments | https://api.github.com/repos/ollama/ollama/issues/6931/events | https://github.com/ollama/ollama/pull/6931 | 2,545,165,398 | PR_kwDOJ0Z1Ps58gacN | 6,931 | Added Local Multimodal AI Chat link to README.md | {
"login": "Leon-Sander",
"id": 72946124,
"node_id": "MDQ6VXNlcjcyOTQ2MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72946124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leon-Sander",
"html_url": "https://github.com/Leon-Sander",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-09-24T11:44:38 | 2024-11-22T04:39:38 | 2024-11-22T04:39:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6931",
"html_url": "https://github.com/ollama/ollama/pull/6931",
"diff_url": "https://github.com/ollama/ollama/pull/6931.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6931.patch",
"merged_at": "2024-11-22T04:39:38"
} | Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6931/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2029 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2029/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2029/comments | https://api.github.com/repos/ollama/ollama/issues/2029/events | https://github.com/ollama/ollama/issues/2029 | 2,085,882,942 | I_kwDOJ0Z1Ps58VAw- | 2,029 | ggml-cuda.cu: "8792: !" CUDA error | {
"login": "hsiehgeorge",
"id": 45024980,
"node_id": "MDQ6VXNlcjQ1MDI0OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/45024980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsiehgeorge",
"html_url": "https://github.com/hsiehgeorge",
"followers_url": "https://api.github.com/... | [
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA... | closed | false | null | [] | null | 7 | 2024-01-17T10:28:56 | 2024-03-24T02:13:26 | 2024-03-11T18:56:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
We have a Dell XE8545 server with 4 * A100 GPU cards. When we ran "ollama run mixtral" it was fine at first, but a few minutes later it halted. I got multiple errors in the log:
1. ggml-cuda.cu: "8792: !" CUDA error
2. ollama.service: State 'stop-sigterm' timed out. Killing.
I tried to kill ollama process... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2029/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2029/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2690 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2690/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2690/comments | https://api.github.com/repos/ollama/ollama/issues/2690/events | https://github.com/ollama/ollama/issues/2690 | 2,149,656,857 | I_kwDOJ0Z1Ps6AISkZ | 2,690 | default windows install folder | {
"login": "goldelio",
"id": 98236877,
"node_id": "U_kgDOBdr5zQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98236877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldelio",
"html_url": "https://github.com/goldelio",
"followers_url": "https://api.github.com/users/goldelio/fo... | [] | closed | false | null | [] | null | 1 | 2024-02-22T18:15:09 | 2024-02-22T20:24:08 | 2024-02-22T20:24:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please, it would be nice if we could choose where to install the software; right now it installs to C:/ by default on Windows, and that's not ideal for multiple reasons. | {
"login": "goldelio",
"id": 98236877,
"node_id": "U_kgDOBdr5zQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98236877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldelio",
"html_url": "https://github.com/goldelio",
"followers_url": "https://api.github.com/users/goldelio/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2690/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2115 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2115/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2115/comments | https://api.github.com/repos/ollama/ollama/issues/2115/events | https://github.com/ollama/ollama/pull/2115 | 2,092,259,293 | PR_kwDOJ0Z1Ps5kor4e | 2,115 | Update submodule to `6f9939d119b2d004c264952eb510bd106455531e` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-20T22:19:52 | 2024-01-22T19:56:41 | 2024-01-22T19:56:40 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2115",
"html_url": "https://github.com/ollama/ollama/pull/2115",
"diff_url": "https://github.com/ollama/ollama/pull/2115.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2115.patch",
"merged_at": "2024-01-22T19:56:40"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2115/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7184 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7184/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7184/comments | https://api.github.com/repos/ollama/ollama/issues/7184/events | https://github.com/ollama/ollama/issues/7184 | 2,582,965,788 | I_kwDOJ0Z1Ps6Z9O4c | 7,184 | create minimal cpu-only smaller docker image | {
"login": "ozbillwang",
"id": 8954908,
"node_id": "MDQ6VXNlcjg5NTQ5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozbillwang",
"html_url": "https://github.com/ozbillwang",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 14 | 2024-10-12T12:50:37 | 2024-10-16T02:40:47 | 2024-10-16T02:40:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The image `ollama/ollama` is 4.87GB already, which I plan to run it on my macbook or Ubuntu (linux) without any GPUs.
```
ollama/ollama latest e458178cf2c1 2 weeks ago 4.87GB
```
Are there any ways to make the image as small as possible, since I don't need the GPU support, CUDA drivers, etc.?
"login": "ozbillwang",
"id": 8954908,
"node_id": "MDQ6VXNlcjg5NTQ5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ozbillwang",
"html_url": "https://github.com/ozbillwang",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7184/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/37 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/37/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/37/comments | https://api.github.com/repos/ollama/ollama/issues/37/events | https://github.com/ollama/ollama/pull/37 | 1,790,053,259 | PR_kwDOJ0Z1Ps5UuxHc | 37 | upgrade fuzzy search library | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-05T18:16:14 | 2023-07-05T20:41:50 | 2023-07-05T19:16:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/37",
"html_url": "https://github.com/ollama/ollama/pull/37",
"diff_url": "https://github.com/ollama/ollama/pull/37.diff",
"patch_url": "https://github.com/ollama/ollama/pull/37.patch",
"merged_at": "2023-07-05T19:16:19"
} | fuzzywuzzy was renamed starting 0.19 so use that instead
use process.extract to produce a list of fuzzy matches instead of process.extractOne | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/37/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/37/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1469 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1469/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1469/comments | https://api.github.com/repos/ollama/ollama/issues/1469/events | https://github.com/ollama/ollama/pull/1469 | 2,036,192,736 | PR_kwDOJ0Z1Ps5htcUp | 1,469 | remove per-model types | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-11T17:40:49 | 2023-12-12T20:27:04 | 2023-12-12T20:27:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1469",
"html_url": "https://github.com/ollama/ollama/pull/1469",
"diff_url": "https://github.com/ollama/ollama/pull/1469.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1469.patch",
"merged_at": "2023-12-12T20:27:03"
} | mostly replaced by decoding tensors except ggml models which only support llama | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1469/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7881 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7881/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7881/comments | https://api.github.com/repos/ollama/ollama/issues/7881/events | https://github.com/ollama/ollama/issues/7881 | 2,704,483,328 | I_kwDOJ0Z1Ps6hMyQA | 7,881 | OpenAI-compatible API tool calls have no index | {
"login": "jackmpcollins",
"id": 6640905,
"node_id": "MDQ6VXNlcjY2NDA5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6640905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackmpcollins",
"html_url": "https://github.com/jackmpcollins",
"followers_url": "https://api.github.... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 3 | 2024-11-29T09:25:52 | 2024-12-02T03:50:48 | 2024-11-30T04:00:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The streamed chat-completion response from ollama's openai-compatible API does not populate the `.choices[].delta.tool_calls[].index` field. This is different to OpenAI's API where this is populated on all tool call chunks and enumerates the tool calls. This breaks comp... | {
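Until the server populates the field, a client can patch the chunks itself. The sketch below is an illustrative client-side workaround only (the dict shape mirrors OpenAI-style streaming chunks; `fill_tool_call_indices` is a hypothetical helper name, not part of any library):

```python
def fill_tool_call_indices(chunks):
    """Assign a running index to streamed tool-call deltas that lack one.

    Illustrative workaround only, not ollama's fix; the chunk shape
    follows the OpenAI chat-completion streaming format.
    """
    next_index = 0
    seen = {}  # tool-call id -> assigned index
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            for call in choice.get("delta", {}).get("tool_calls", []):
                if call.get("index") is None:
                    call_id = call.get("id")
                    if call_id in seen:
                        # later delta of a call we've already numbered
                        call["index"] = seen[call_id]
                    else:
                        seen[call_id] = next_index
                        call["index"] = next_index
                        next_index += 1
    return chunks
```

Running this over a stream gives each distinct tool-call id a stable index, which is enough for clients that group parallel tool calls by index.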
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7881/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7881/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1728 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1728/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1728/comments | https://api.github.com/repos/ollama/ollama/issues/1728/events | https://github.com/ollama/ollama/issues/1728 | 2,057,323,728 | I_kwDOJ0Z1Ps56oETQ | 1,728 | Streaming multiple json objects at the same time | {
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2023-12-27T12:35:50 | 2023-12-27T15:32:34 | 2023-12-27T15:32:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It seems like sometimes Ollama streams multiple json objects one after the other in the same streamed response, which cannot be deserialized.
Here's an example of a single streamed JSON response from the /generate endpoint:
```json
{"model":"dolphin-mixtral:latest","created_at":"2023-12-25T01:12:45.58944567Z",... | {
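When two objects arrive fused in one chunk like this, a naive line-by-line `json.loads` fails. A client can recover by scanning the buffer with `json.JSONDecoder.raw_decode` — a defensive-parsing sketch on the client side, not a fix for the server-side bug:

```python
import json

def parse_concatenated_json(payload: str):
    """Split a buffer that may contain several back-to-back JSON objects."""
    decoder = json.JSONDecoder()
    objects = []
    payload = payload.strip()
    idx = 0
    while idx < len(payload):
        obj, end = decoder.raw_decode(payload, idx)
        objects.append(obj)
        # skip any whitespace/newlines between fused objects
        while end < len(payload) and payload[end] in " \r\n\t":
            end += 1
        idx = end
    return objects

# Two responses fused into one streamed chunk, as described in the issue
chunk = '{"response":"Hello"}\n{"response":" world","done":true}'
print(parse_concatenated_json(chunk))
# → [{'response': 'Hello'}, {'response': ' world', 'done': True}]
```

`raw_decode` returns both the decoded object and the offset where it ended, so the loop simply resumes scanning from there.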
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1728/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/347 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/347/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/347/comments | https://api.github.com/repos/ollama/ollama/issues/347/events | https://github.com/ollama/ollama/issues/347 | 1,850,569,328 | I_kwDOJ0Z1Ps5uTXJw | 347 | Support for GPT-NeoX GGML models - e.g. Stablecode | {
"login": "njarecki",
"id": 94956985,
"node_id": "U_kgDOBajtuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94956985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njarecki",
"html_url": "https://github.com/njarecki",
"followers_url": "https://api.github.com/users/njarecki/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": ... | closed | false | null | [] | null | 7 | 2023-08-14T21:29:24 | 2024-02-20T00:52:41 | 2024-02-20T00:52:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/347/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2248 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2248/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2248/comments | https://api.github.com/repos/ollama/ollama/issues/2248/events | https://github.com/ollama/ollama/pull/2248 | 2,104,522,311 | PR_kwDOJ0Z1Ps5lRhzi | 2,248 | Add requirements | {
"login": "Yuan-ManX",
"id": 68322456,
"node_id": "MDQ6VXNlcjY4MzIyNDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuan-ManX",
"html_url": "https://github.com/Yuan-ManX",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-01-29T02:37:07 | 2024-11-21T08:57:22 | 2024-11-21T08:57:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2248",
"html_url": "https://github.com/ollama/ollama/pull/2248",
"diff_url": "https://github.com/ollama/ollama/pull/2248.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2248.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2248/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4805 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4805/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4805/comments | https://api.github.com/repos/ollama/ollama/issues/4805/events | https://github.com/ollama/ollama/issues/4805 | 2,332,545,712 | I_kwDOJ0Z1Ps6LB9Kw | 4,805 | can not serve VL models | {
"login": "techResearcher2021",
"id": 90097102,
"node_id": "MDQ6VXNlcjkwMDk3MTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/90097102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techResearcher2021",
"html_url": "https://github.com/techResearcher2021",
"followers_url": "ht... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-04T04:53:22 | 2024-06-09T17:12:55 | 2024-06-09T17:12:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I serve my VL models, they do not work correctly.
I tried Minicpm-llama3-V-2.5 and converted it to GGUF format following the instructions from the official repository: https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md.
Then I use the service from open-w... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4805/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4360 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4360/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4360/comments | https://api.github.com/repos/ollama/ollama/issues/4360/events | https://github.com/ollama/ollama/issues/4360 | 2,290,889,479 | I_kwDOJ0Z1Ps6IjDMH | 4,360 | bge-reranker-v2-m3、mxbai-rerank-large-v1 and other rerank models | {
"login": "Feng-YiJing-Dao",
"id": 18107069,
"node_id": "MDQ6VXNlcjE4MTA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/18107069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Feng-YiJing-Dao",
"html_url": "https://github.com/Feng-YiJing-Dao",
"followers_url": "https://api... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 17 | 2024-05-11T13:02:06 | 2024-09-19T03:18:01 | 2024-09-02T20:57:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The rerank model cannot be converted to the ollama-supported format through llama.cpp, but in RAG, I hope to run a rerank model to improve the accuracy of recall.
I tried to use bge-reranker-v2-m3 and mxbai-rerank-large-v1 in model.safetensors format,
: Generator System.Management.Automation.RemoteException Ninja System.Management.Automation.RemoteException does not support platform specification, but platform | {
"login": "chaoqunxie",
"id": 44899524,
"node_id": "MDQ6VXNlcjQ0ODk5NTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44899524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaoqunxie",
"html_url": "https://github.com/chaoqunxie",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-05-31T07:07:46 | 2024-10-23T21:33:12 | 2024-10-23T21:33:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Your branch is up to date with 'origin/minicpm-v2.5'.
Already on 'minicpm-v2.5'
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...
CommandType Name Version Source ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4745/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4758 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4758/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4758/comments | https://api.github.com/repos/ollama/ollama/issues/4758/events | https://github.com/ollama/ollama/issues/4758 | 2,328,648,910 | I_kwDOJ0Z1Ps6KzFzO | 4,758 | Add this web app to the list of apps in the README | {
"login": "greenido",
"id": 61472,
"node_id": "MDQ6VXNlcjYxNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greenido",
"html_url": "https://github.com/greenido",
"followers_url": "https://api.github.com/users/greenido/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-31T22:09:12 | 2024-09-14T17:16:53 | 2024-09-14T17:16:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I created: https://github.com/greenido/multi-LLM-at-once base on your wonderful project.
...and it would be cool if you can add it to the list of web apps in the readme.
More info on the 'why' is here: https://greenido.wordpress.com/2024/04/08/the-power-of-many-why-you-should-consider-using-multiple-large-language-mo... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4758/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7648/comments | https://api.github.com/repos/ollama/ollama/issues/7648/events | https://github.com/ollama/ollama/issues/7648 | 2,654,967,829 | I_kwDOJ0Z1Ps6eP5gV | 7,648 | Performance Impact of Scaling a 70B Model Across Multiple A100 GPUs and Further Speed Optimization | {
"login": "gslin1224",
"id": 151395340,
"node_id": "U_kgDOCQYcDA",
"avatar_url": "https://avatars.githubusercontent.com/u/151395340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gslin1224",
"html_url": "https://github.com/gslin1224",
"followers_url": "https://api.github.com/users/gslin1... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 5 | 2024-11-13T10:24:23 | 2024-11-20T02:27:00 | 2024-11-17T12:23:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi guys,
I have a question regarding the performance impact and potential optimizations for distributing a large model across multiple GPUs. Specifically:
When running a 70B parameter model, how does the speed compare when distributed across two A100 GPUs versus four A100 GPUs?
In general, does adding more GPUs co... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7648/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2738 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2738/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2738/comments | https://api.github.com/repos/ollama/ollama/issues/2738/events | https://github.com/ollama/ollama/pull/2738 | 2,152,584,894 | PR_kwDOJ0Z1Ps5n1S0_ | 2,738 | Update routes.go | {
"login": "ohko",
"id": 4863673,
"node_id": "MDQ6VXNlcjQ4NjM2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4863673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohko",
"html_url": "https://github.com/ohko",
"followers_url": "https://api.github.com/users/ohko/followers",
... | [] | closed | false | null | [] | null | 0 | 2024-02-25T03:47:12 | 2024-04-26T13:41:57 | 2024-04-26T13:41:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2738",
"html_url": "https://github.com/ollama/ollama/pull/2738",
"diff_url": "https://github.com/ollama/ollama/pull/2738.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2738.patch",
"merged_at": null
} | CORS policy: Request header field x-requested-with is not allowed by Access-Control-Allow-Headers in preflight response.
add X-Requested-With in Access-Control-Allow-Headers | {
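The browser-side preflight check that this change addresses can be modeled in a few lines — an illustrative sketch of the comparison (function name is hypothetical), not the actual server code the PR touches:

```python
def preflight_allows(request_headers: str, allow_headers: str) -> bool:
    """Return True if every header named in Access-Control-Request-Headers
    appears in the Access-Control-Allow-Headers list (case-insensitive),
    mirroring the browser check that failed in this report."""
    allowed = {h.strip().lower() for h in allow_headers.split(",")}
    requested = [h.strip().lower() for h in request_headers.split(",")]
    return all(h in allowed for h in requested)

old = "Authorization, Content-Type"
new = old + ", X-Requested-With"
print(preflight_allows("x-requested-with", old))  # → False
print(preflight_allows("x-requested-with", new))  # → True
```

Header-name comparison is case-insensitive, which is why lowercasing both sides before the set lookup matters here.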
"login": "ohko",
"id": 4863673,
"node_id": "MDQ6VXNlcjQ4NjM2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4863673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohko",
"html_url": "https://github.com/ohko",
"followers_url": "https://api.github.com/users/ohko/followers",
... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2738/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5766 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5766/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5766/comments | https://api.github.com/repos/ollama/ollama/issues/5766/events | https://github.com/ollama/ollama/issues/5766 | 2,415,886,028 | I_kwDOJ0Z1Ps6P_37M | 5,766 | specify a single GPU (id=1)using Docker, Error! | {
"login": "catsled",
"id": 18079717,
"node_id": "MDQ6VXNlcjE4MDc5NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/18079717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catsled",
"html_url": "https://github.com/catsled",
"followers_url": "https://api.github.com/users/catsle... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-07-18T09:46:15 | 2024-07-25T06:06:22 | 2024-07-25T06:06:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have 8 GPUs, and I want to expose only the GPU with id=1 to a Docker container:
`docker run -it ... --device=/dev/dri/card1 --device=/dev/dri/renderD129 ....`
An error occurred!

when i set `HIP_VISIBLE_DEVICES=1`
... | {
"login": "catsled",
"id": 18079717,
"node_id": "MDQ6VXNlcjE4MDc5NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/18079717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catsled",
"html_url": "https://github.com/catsled",
"followers_url": "https://api.github.com/users/catsle... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5766/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7651/comments | https://api.github.com/repos/ollama/ollama/issues/7651/events | https://github.com/ollama/ollama/issues/7651 | 2,655,369,362 | I_kwDOJ0Z1Ps6eRbiS | 7,651 | Unable to download model llama3.2-vision:11b and request update ollama. | {
"login": "Luckyjjjjjjj",
"id": 145416388,
"node_id": "U_kgDOCKrgxA",
"avatar_url": "https://avatars.githubusercontent.com/u/145416388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luckyjjjjjjj",
"html_url": "https://github.com/Luckyjjjjjjj",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2024-11-13T12:49:05 | 2024-11-18T18:24:27 | 2024-11-18T18:24:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I downloaded the latest version from https://ollama.com/download and installed it, but I still cannot download the llama3.2-vision:11b model: the app tells me to update Ollama even though I already have the latest version. How do I solve this?
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7651/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5630 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5630/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5630/comments | https://api.github.com/repos/ollama/ollama/issues/5630/events | https://github.com/ollama/ollama/pull/5630 | 2,403,272,029 | PR_kwDOJ0Z1Ps51Gdku | 5,630 | Update README.md to Portuguese Brazilian and Optimized the image files of the project | {
"login": "ItaloGustavoS",
"id": 42496107,
"node_id": "MDQ6VXNlcjQyNDk2MTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/42496107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ItaloGustavoS",
"html_url": "https://github.com/ItaloGustavoS",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 1 | 2024-07-11T14:01:31 | 2024-07-30T23:00:35 | 2024-07-30T23:00:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5630",
"html_url": "https://github.com/ollama/ollama/pull/5630",
"diff_url": "https://github.com/ollama/ollama/pull/5630.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5630.patch",
"merged_at": null
} | Updated the branch with the latest changes.
Optimize images
*Total -- 463.96kb -> 380.99kb (17.88%)
/examples/modelfile-mario/logo.png -- 445.60kb -> 362.69kb (18.61%)
/macapp/assets/iconTemplate@2x.png -- 0.87kb -> 0.84kb (3.82%)
/macapp/assets/iconUpdateTemplate@2x.png -- ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5630/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7260 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7260/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7260/comments | https://api.github.com/repos/ollama/ollama/issues/7260/events | https://github.com/ollama/ollama/issues/7260 | 2,598,220,850 | I_kwDOJ0Z1Ps6a3bQy | 7,260 | Migrate off centos 7 for intermediate build layers in container image builds | {
"login": "cazlo",
"id": 3895350,
"node_id": "MDQ6VXNlcjM4OTUzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3895350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cazlo",
"html_url": "https://github.com/cazlo",
"followers_url": "https://api.github.com/users/cazlo/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7700262114,
"node_id": ... | open | false | null | [] | null | 2 | 2024-10-18T19:17:21 | 2024-11-04T19:19:08 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # What
[Centos is dead](https://endoflife.date/centos), long live [centos stream (9)](https://endoflife.date/centos-stream)
Ollama should probably not be using centos 7 now that it is unsupported and at EOL.
# Why
AMD and Nvidia are no longer publishing updates to their centos 7 flavor of dependencies.
S... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7260/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5338 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5338/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5338/comments | https://api.github.com/repos/ollama/ollama/issues/5338/events | https://github.com/ollama/ollama/issues/5338 | 2,378,857,221 | I_kwDOJ0Z1Ps6NynsF | 5,338 | The main shell script runner for ollama downloader doesn't check for hash | {
"login": "Ahmed",
"id": 537483,
"node_id": "MDQ6VXNlcjUzNzQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/537483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahmed",
"html_url": "https://github.com/Ahmed",
"followers_url": "https://api.github.com/users/Ahmed/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-06-27T18:37:18 | 2024-06-27T18:37:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi team:
You have this ollama installer on the main website.
```
curl -fsSL https://ollama.com/install.sh | sh
```
If someone hacked into the website and changed the script, the end user would never know. You should bake the checksum into the downloader to make sure the installer is not m...
"url": "https://api.github.com/repos/ollama/ollama/issues/5338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5338/timeline | null | null | false |
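The record above asks for checksum verification of the install script instead of piping `curl` straight into `sh`. A minimal sketch of the download-then-verify pattern; the `.sha256` sidecar file is an assumption (Ollama does not publish one today), so both files are generated locally here just to demonstrate the verification step:

```shell
# Sketch: verify a script's checksum before executing it.
# The .sha256 companion file is hypothetical; a publisher would host it
# alongside the script. We create both locally to show the check.
set -e
printf 'echo install-ok\n' > install.sh            # stand-in for the download
sha256sum install.sh > install.sh.sha256           # stand-in for published hash
sha256sum -c install.sh.sha256 && sh install.sh    # run only if the hash matches
```

If an attacker replaced `install.sh` after the hash was published, `sha256sum -c` would exit non-zero and the script would never run.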
https://api.github.com/repos/ollama/ollama/issues/5099 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5099/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5099/comments | https://api.github.com/repos/ollama/ollama/issues/5099/events | https://github.com/ollama/ollama/issues/5099 | 2,357,452,145 | I_kwDOJ0Z1Ps6Mg91x | 5,099 | Add `upgrade` command to upgrade the version | {
"login": "chyok",
"id": 32629225,
"node_id": "MDQ6VXNlcjMyNjI5MjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32629225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chyok",
"html_url": "https://github.com/chyok",
"followers_url": "https://api.github.com/users/chyok/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-06-17T14:08:54 | 2024-06-18T11:30:50 | 2024-06-18T11:30:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
This is an excellent project!
Is there any plan to add an upgrade command-line feature?
That way we could use `ollama --upgrade` or something similar to update the version instead of manually downloading and installing it again.
Thanks! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5099/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5099/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4269 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4269/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4269/comments | https://api.github.com/repos/ollama/ollama/issues/4269/events | https://github.com/ollama/ollama/pull/4269 | 2,286,666,718 | PR_kwDOJ0Z1Ps5u7-33 | 4,269 | update pull handler to use model.Name | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | open | false | null | [] | null | 0 | 2024-05-09T00:06:40 | 2024-10-01T17:47:46 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4269",
"html_url": "https://github.com/ollama/ollama/pull/4269",
"diff_url": "https://github.com/ollama/ollama/pull/4269.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4269.patch",
"merged_at": null
} | follow up to #3737 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4269/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3479 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3479/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3479/comments | https://api.github.com/repos/ollama/ollama/issues/3479/events | https://github.com/ollama/ollama/pull/3479 | 2,224,126,418 | PR_kwDOJ0Z1Ps5rogKu | 3,479 | Fix CI release glitches | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-03T23:42:27 | 2024-04-04T01:42:30 | 2024-04-04T01:42:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3479",
"html_url": "https://github.com/ollama/ollama/pull/3479",
"diff_url": "https://github.com/ollama/ollama/pull/3479.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3479.patch",
"merged_at": "2024-04-04T01:42:28"
} | The subprocess change moved the build directory
arm64 builds weren't setting cross-compilation flags when building on x86 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3479/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1588 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1588/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1588/comments | https://api.github.com/repos/ollama/ollama/issues/1588/events | https://github.com/ollama/ollama/issues/1588 | 2,047,602,326 | I_kwDOJ0Z1Ps56C-6W | 1,588 | [mistral][docker][linuxWSL]Infinit tags | {
"login": "wildcat7534",
"id": 38839946,
"node_id": "MDQ6VXNlcjM4ODM5OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/38839946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wildcat7534",
"html_url": "https://github.com/wildcat7534",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-12-18T22:59:28 | 2024-03-11T18:23:31 | 2024-03-11T18:23:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi !
I just said "Hello, how are you?" in French to Mistral and....
I got infinite tags in the response:
sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
pulling manifest
>>> /set verbose
>>> Bonjour ! ça va ?
Hello! I'm just a text-based AI, so I don't have th... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1588/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/691 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/691/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/691/comments | https://api.github.com/repos/ollama/ollama/issues/691/events | https://github.com/ollama/ollama/issues/691 | 1,924,698,003 | I_kwDOJ0Z1Ps5yuI-T | 691 | Expose the API as ProtocolBuffer | {
"login": "Solido",
"id": 1295961,
"node_id": "MDQ6VXNlcjEyOTU5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solido",
"html_url": "https://github.com/Solido",
"followers_url": "https://api.github.com/users/Solido/foll... | [] | closed | false | null | [] | null | 1 | 2023-10-03T18:27:53 | 2023-10-04T01:24:51 | 2023-10-04T01:23:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Streaming but also configuration of models can benefits to have
all this expose and auto generate api per langs.
Thank you for the Ollama initiative.
Cheers. | {
"login": "Solido",
"id": 1295961,
"node_id": "MDQ6VXNlcjEyOTU5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Solido",
"html_url": "https://github.com/Solido",
"followers_url": "https://api.github.com/users/Solido/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/691/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7144 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7144/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7144/comments | https://api.github.com/repos/ollama/ollama/issues/7144/events | https://github.com/ollama/ollama/pull/7144 | 2,574,368,265 | PR_kwDOJ0Z1Ps5-AYcG | 7,144 | Better handle small models in scheduler | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 4 | 2024-10-08T22:52:51 | 2025-01-23T20:07:33 | 2025-01-19T19:26:01 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7144",
"html_url": "https://github.com/ollama/ollama/pull/7144",
"diff_url": "https://github.com/ollama/ollama/pull/7144.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7144.patch",
"merged_at": null
} | Our memory prediction for small models tends to over-estimate the actual VRAM usage, which causes the scheduler to incorrectly wait too long for recovery.
Fixes #7130 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7144/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5204 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5204/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5204/comments | https://api.github.com/repos/ollama/ollama/issues/5204/events | https://github.com/ollama/ollama/issues/5204 | 2,367,058,873 | I_kwDOJ0Z1Ps6NFnO5 | 5,204 | Can't even attempt to load Deepseek-Coder-v2:236B due to arbitrary timeout | {
"login": "Nantris",
"id": 6835891,
"node_id": "MDQ6VXNlcjY4MzU4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nantris",
"html_url": "https://github.com/Nantris",
"followers_url": "https://api.github.com/users/Nantris/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-06-21T18:31:57 | 2024-07-16T01:51:21 | 2024-06-21T22:21:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This issue thread mentions the overarching issue, and the specific comment a potential workaround: https://github.com/ollama/ollama/issues/630#issuecomment-2182371780
My understanding is that the 236B model should be feasible to load into less RAM than the model actually takes up since not al... | {
"login": "Nantris",
"id": 6835891,
"node_id": "MDQ6VXNlcjY4MzU4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nantris",
"html_url": "https://github.com/Nantris",
"followers_url": "https://api.github.com/users/Nantris/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5204/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3823 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3823/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3823/comments | https://api.github.com/repos/ollama/ollama/issues/3823/events | https://github.com/ollama/ollama/issues/3823 | 2,256,616,467 | I_kwDOJ0Z1Ps6GgTwT | 3,823 | Can we add support for LLaVA-Llama-3-8B? | {
"login": "octavioccl",
"id": 6987693,
"node_id": "MDQ6VXNlcjY5ODc2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6987693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/octavioccl",
"html_url": "https://github.com/octavioccl",
"followers_url": "https://api.github.com/users... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 13 | 2024-04-22T13:59:16 | 2024-05-09T14:55:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I just saw on Reddit that there is a LLaVA model based on Llama 3; can it be added to the library? Thanks
Source: https://www.reddit.com/r/LocalLLaMA/comments/1ca8uxo/llavallama38b_is_released/ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3823/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3823/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3547 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3547/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3547/comments | https://api.github.com/repos/ollama/ollama/issues/3547/events | https://github.com/ollama/ollama/issues/3547 | 2,232,578,120 | I_kwDOJ0Z1Ps6FEnBI | 3,547 | Support for all NAVI GPUs | {
"login": "swapduzoo",
"id": 87898144,
"node_id": "MDQ6VXNlcjg3ODk4MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/87898144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swapduzoo",
"html_url": "https://github.com/swapduzoo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-09T04:28:01 | 2024-07-03T22:37:53 | 2024-07-03T22:37:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When the support for all NAVI GPUs will come? and in particular the RX6700? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3547/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3547/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2019 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2019/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2019/comments | https://api.github.com/repos/ollama/ollama/issues/2019/events | https://github.com/ollama/ollama/issues/2019 | 2,084,719,153 | I_kwDOJ0Z1Ps58Qkox | 2,019 | Model Path Arch - AUR | {
"login": "DerRehberg",
"id": 20538874,
"node_id": "MDQ6VXNlcjIwNTM4ODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/20538874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DerRehberg",
"html_url": "https://github.com/DerRehberg",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVw... | closed | false | null | [] | null | 3 | 2024-01-16T19:43:38 | 2024-03-11T18:43:07 | 2024-03-11T18:43:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed ollama from the Aur but the model path you guys specified doesn't exist, anyone know where it is? Is see this as a big Problem for running custom models | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2019/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2306 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2306/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2306/comments | https://api.github.com/repos/ollama/ollama/issues/2306/events | https://github.com/ollama/ollama/issues/2306 | 2,111,993,477 | I_kwDOJ0Z1Ps594naF | 2,306 | Show file sizes on the models page on the ollama website | {
"login": "mika76",
"id": 229311,
"node_id": "MDQ6VXNlcjIyOTMxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/229311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mika76",
"html_url": "https://github.com/mika76",
"followers_url": "https://api.github.com/users/mika76/follow... | [] | closed | false | null | [] | null | 2 | 2024-02-01T09:24:02 | 2024-02-01T20:18:33 | 2024-02-01T19:46:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would like to try different models but it does not really show me how much space it will take up and on my desktop machine space is at a premium. Please show the size on the search list as well as the model detail page. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2306/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3086 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3086/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3086/comments | https://api.github.com/repos/ollama/ollama/issues/3086/events | https://github.com/ollama/ollama/pull/3086 | 2,182,694,322 | PR_kwDOJ0Z1Ps5pb505 | 3,086 | Import server.cpp to retain llava support | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-03-12T21:21:05 | 2024-03-15T23:10:38 | 2024-03-15T23:10:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3086",
"html_url": "https://github.com/ollama/ollama/pull/3086",
"diff_url": "https://github.com/ollama/ollama/pull/3086.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3086.patch",
"merged_at": "2024-03-15T23:10:35"
} | Recent refactoring upstream has temporarily(?) removed llava support from the server.cpp code, which we rely on. This pulls the server just before that change into our repo so we can keep current with the base llama.cpp code updates until llava support is added back.
Verified on Mac, Linux and Windows. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3086/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6969 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6969/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6969/comments | https://api.github.com/repos/ollama/ollama/issues/6969/events | https://github.com/ollama/ollama/pull/6969 | 2,549,158,578 | PR_kwDOJ0Z1Ps58uF_p | 6,969 | Bump ROCm on linux to 6.2 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | open | false | null | [] | null | 3 | 2024-09-25T23:23:12 | 2025-01-24T21:25:33 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6969",
"html_url": "https://github.com/ollama/ollama/pull/6969",
"diff_url": "https://github.com/ollama/ollama/pull/6969.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6969.patch",
"merged_at": null
} | Fixes #6773
According to the compat matrix, no GPUs are dropped compared to 6.1
No regressions detected across gfx1034 gfx1035 gfx1030 gfx900 gfx906 gfx1100 gfx1103 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6969/reactions",
"total_count": 10,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6969/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8540 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8540/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8540/comments | https://api.github.com/repos/ollama/ollama/issues/8540/events | https://github.com/ollama/ollama/pull/8540 | 2,805,211,246 | PR_kwDOJ0Z1Ps6IrB2D | 8,540 | Update README.md added deepseek-r1 | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [] | open | false | null | [] | null | 1 | 2025-01-22T19:45:52 | 2025-01-30T05:58:08 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8540",
"html_url": "https://github.com/ollama/ollama/pull/8540",
"diff_url": "https://github.com/ollama/ollama/pull/8540.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8540.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8540/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8540/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6666 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6666/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6666/comments | https://api.github.com/repos/ollama/ollama/issues/6666/events | https://github.com/ollama/ollama/pull/6666 | 2,509,120,961 | PR_kwDOJ0Z1Ps56mObP | 6,666 | Improve logging on GPU too small | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-06T00:22:17 | 2024-09-06T15:29:40 | 2024-09-06T15:29:37 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6666",
"html_url": "https://github.com/ollama/ollama/pull/6666",
"diff_url": "https://github.com/ollama/ollama/pull/6666.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6666.patch",
"merged_at": "2024-09-06T15:29:37"
} | When we determine a GPU is too small for any layers, it's not always clear why. This will help troubleshoot those scenarios. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6666/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3620 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3620/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3620/comments | https://api.github.com/repos/ollama/ollama/issues/3620/events | https://github.com/ollama/ollama/issues/3620 | 2,241,126,705 | I_kwDOJ0Z1Ps6FlOEx | 3,620 | Mixtral 8x22b - v0.1 | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2024-04-12T23:51:44 | 2024-04-16T23:32:11 | 2024-04-16T23:32:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
BTW Mixtral released a new model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3620/reactions",
"total_count": 18,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3620/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/73 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/73/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/73/comments | https://api.github.com/repos/ollama/ollama/issues/73/events | https://github.com/ollama/ollama/pull/73 | 1,801,359,638 | PR_kwDOJ0Z1Ps5VVOZZ | 73 | fix eof error in generate | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-12T16:36:33 | 2023-07-12T18:09:27 | 2023-07-12T18:09:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/73",
"html_url": "https://github.com/ollama/ollama/pull/73",
"diff_url": "https://github.com/ollama/ollama/pull/73.diff",
"patch_url": "https://github.com/ollama/ollama/pull/73.patch",
"merged_at": "2023-07-12T18:09:23"
} | maybe related to #72 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/73/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/73/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7933/comments | https://api.github.com/repos/ollama/ollama/issues/7933/events | https://github.com/ollama/ollama/pull/7933 | 2,718,511,426 | PR_kwDOJ0Z1Ps6EFNxR | 7,933 | Added logging for generated responses | {
"login": "NicholasPaulick",
"id": 76536219,
"node_id": "MDQ6VXNlcjc2NTM2MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/76536219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NicholasPaulick",
"html_url": "https://github.com/NicholasPaulick",
"followers_url": "https://api... | [] | open | false | null | [] | null | 0 | 2024-12-04T18:31:15 | 2024-12-04T18:31:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7933",
"html_url": "https://github.com/ollama/ollama/pull/7933",
"diff_url": "https://github.com/ollama/ollama/pull/7933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7933.patch",
"merged_at": null
} | https://github.com/ollama/ollama/issues/4669
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7933/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1905 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1905/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1905/comments | https://api.github.com/repos/ollama/ollama/issues/1905/events | https://github.com/ollama/ollama/pull/1905 | 2,074,879,748 | PR_kwDOJ0Z1Ps5jtkMG | 1,905 | docs: add `ollero.nvim` to community applications | {
"login": "marco-souza",
"id": 4452113,
"node_id": "MDQ6VXNlcjQ0NTIxMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4452113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marco-souza",
"html_url": "https://github.com/marco-souza",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-01-10T17:49:04 | 2024-03-25T19:06:09 | 2024-03-25T19:06:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1905",
"html_url": "https://github.com/ollama/ollama/pull/1905",
"diff_url": "https://github.com/ollama/ollama/pull/1905.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1905.patch",
"merged_at": "2024-03-25T19:06:08"
} | - adding `[ollero.nvim](https://github.com/marco-souza/ollero.nvim)` to the terminal applications session
> Ollero (ollero.nvim) is a Neovim Plugin that unleashes Ollama super powers to your beloved text editor. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1905/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3870 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3870/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3870/comments | https://api.github.com/repos/ollama/ollama/issues/3870/events | https://github.com/ollama/ollama/issues/3870 | 2,260,555,441 | I_kwDOJ0Z1Ps6GvVax | 3,870 | Failure to Load Llava in Ollama Windows Ver. | {
"login": "PasserDreamer",
"id": 30385417,
"node_id": "MDQ6VXNlcjMwMzg1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/30385417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PasserDreamer",
"html_url": "https://github.com/PasserDreamer",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-24T07:15:20 | 2024-10-23T18:45:23 | 2024-10-23T18:45:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I encountered an issue when attempting to load the 'llava' AI model; however, others such as 'Llama3' or 'Phi3' have no problem. Here are the details:
```
>>ollama run llava
Error: llama runner process no longer running: 1
```
server.log
```
...
clip_model_load: CLIP using CUDA backend
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3870/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4654 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4654/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4654/comments | https://api.github.com/repos/ollama/ollama/issues/4654/events | https://github.com/ollama/ollama/issues/4654 | 2,318,052,299 | I_kwDOJ0Z1Ps6KKqvL | 4,654 | Can the model download page add a new ranking? | {
"login": "despairTK",
"id": 111871110,
"node_id": "U_kgDOBqsEhg",
"avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/despairTK",
"html_url": "https://github.com/despairTK",
"followers_url": "https://api.github.com/users/despai... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 0 | 2024-05-27T01:30:29 | 2024-07-08T17:21:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://ollama.com/library
At present, the model download page offers only three sorting options. Could a sort based on model update time be added? That would make it easier to find recently updated versions of older models without searching through the list slowly.
 until the submodule is updated | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5807/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3222 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3222/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3222/comments | https://api.github.com/repos/ollama/ollama/issues/3222/events | https://github.com/ollama/ollama/issues/3222 | 2,191,953,757 | I_kwDOJ0Z1Ps6Cpo9d | 3,222 | Support Grok | {
"login": "FloLecoeuche",
"id": 2616127,
"node_id": "MDQ6VXNlcjI2MTYxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2616127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FloLecoeuche",
"html_url": "https://github.com/FloLecoeuche",
"followers_url": "https://api.github.com... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 21 | 2024-03-18T11:31:44 | 2024-09-05T14:44:33 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
Please add [xai-org/grok-1](https://github.com/xai-org/grok-1) model to ollama. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3222/reactions",
"total_count": 82,
"+1": 67,
"-1": 0,
"laugh": 15,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3222/timeline | null | null | false |