| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4997 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4997/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4997/comments | https://api.github.com/repos/ollama/ollama/issues/4997/events | https://github.com/ollama/ollama/pull/4997 | 2,348,010,141 | PR_kwDOJ0Z1Ps5yMkNk | 4,997 | Initial commit | {
"login": "enzoxic",
"id": 157711992,
"node_id": "U_kgDOCWZ-eA",
"avatar_url": "https://avatars.githubusercontent.com/u/157711992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoxic",
"html_url": "https://github.com/enzoxic",
"followers_url": "https://api.github.com/users/enzoxic/foll... | [] | closed | false | null | [] | null | 1 | 2024-06-12T07:18:06 | 2024-06-12T21:11:29 | 2024-06-12T21:11:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4997",
"html_url": "https://github.com/ollama/ollama/pull/4997",
"diff_url": "https://github.com/ollama/ollama/pull/4997.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4997.patch",
"merged_at": null
} | a basic AIOS project with Ollama to manage, in concert with other M1-friendly models, the prototype I am developing | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4997/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1104 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1104/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1104/comments | https://api.github.com/repos/ollama/ollama/issues/1104/events | https://github.com/ollama/ollama/pull/1104 | 1,989,586,253 | PR_kwDOJ0Z1Ps5fPjob | 1,104 | add jupyter notebook example | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-11-12T21:37:13 | 2023-11-17T22:46:27 | 2023-11-17T22:46:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1104",
"html_url": "https://github.com/ollama/ollama/pull/1104",
"diff_url": "https://github.com/ollama/ollama/pull/1104.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1104.patch",
"merged_at": "2023-11-17T22:46:26"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1104/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7276/comments | https://api.github.com/repos/ollama/ollama/issues/7276/events | https://github.com/ollama/ollama/issues/7276 | 2,600,433,330 | I_kwDOJ0Z1Ps6a_3ay | 7,276 | bitnet is more energy efficient than llama.cpp | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-10-20T12:54:20 | 2024-10-24T18:24:04 | 2024-10-24T18:24:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/microsoft/BitNet?tab=readme-ov-file
Can Ollama switch to that to save more power? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7276/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7276/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2034/comments | https://api.github.com/repos/ollama/ollama/issues/2034/events | https://github.com/ollama/ollama/issues/2034 | 2,086,826,586 | I_kwDOJ0Z1Ps58YnJa | 2,034 | Not running on gpu | {
"login": "DragonBtc93",
"id": 36767505,
"node_id": "MDQ6VXNlcjM2NzY3NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/36767505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DragonBtc93",
"html_url": "https://github.com/DragonBtc93",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-01-17T19:20:47 | 2024-01-26T21:20:07 | 2024-01-26T21:19:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm an Ubuntu 22.04 user; I have an Nvidia Tesla P40 and a K80 GPU, and Ollama will not use the GPU. I can use text-generation-webui and it uses the GPU. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2034/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2034/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7541 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7541/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7541/comments | https://api.github.com/repos/ollama/ollama/issues/7541/events | https://github.com/ollama/ollama/issues/7541 | 2,639,968,107 | I_kwDOJ0Z1Ps6dWrdr | 7,541 | How to use the brand new models? | {
"login": "dwsmart32",
"id": 70640776,
"node_id": "MDQ6VXNlcjcwNjQwNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/70640776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwsmart32",
"html_url": "https://github.com/dwsmart32",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-11-07T05:54:54 | 2024-11-13T21:36:25 | 2024-11-13T21:36:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, always thank you for all your hard work. I would like to ask how to use a VLM like Qwen2 VL-72B(Not LLM) or Nvidia/NVLM-D-72B in Ollama. When I attempt to customize the model, it fails due to an unsupported backbone. Could you advise on what steps are needed to enable support for these newer models? Thank you. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7541/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6818 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6818/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6818/comments | https://api.github.com/repos/ollama/ollama/issues/6818/events | https://github.com/ollama/ollama/pull/6818 | 2,527,264,043 | PR_kwDOJ0Z1Ps57jyq6 | 6,818 | Add vim-intelligence-bridge to Terminal section in README | {
"login": "pepo-ec",
"id": 1961172,
"node_id": "MDQ6VXNlcjE5NjExNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1961172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepo-ec",
"html_url": "https://github.com/pepo-ec",
"followers_url": "https://api.github.com/users/pepo-ec/... | [] | closed | false | null | [] | null | 0 | 2024-09-15T23:39:00 | 2024-09-16T01:20:36 | 2024-09-16T01:20:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6818",
"html_url": "https://github.com/ollama/ollama/pull/6818",
"diff_url": "https://github.com/ollama/ollama/pull/6818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6818.patch",
"merged_at": "2024-09-16T01:20:36"
} | I'm proposing to add vim-intelligence-bridge to the Terminal section. This plugin is unique as it's the only one that integrates Ollama directly with traditional Vim (not Neovim). Key benefits:
- Expands Ollama's reach to Vim users
- Promotes local, private AI processing within Vim
- Enhances developer productivit... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6818/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7127/comments | https://api.github.com/repos/ollama/ollama/issues/7127/events | https://github.com/ollama/ollama/issues/7127 | 2,572,074,646 | I_kwDOJ0Z1Ps6ZTr6W | 7,127 | Difference in Function Call Support between Ollama and Unsloth for Llama 3.2 | {
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-10-08T05:00:07 | 2024-10-09T13:11:58 | 2024-10-09T13:11:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I've fine-tuned a Llama 3.2 model using Unsloth, and when I try to enable function calling with Ollama, I receive a message indicating that the model does not support function calls. However, when I download the same Llama 3.2 model directly from Ollama, function calls work without any i... | {
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7127/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1356 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1356/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1356/comments | https://api.github.com/repos/ollama/ollama/issues/1356/events | https://github.com/ollama/ollama/issues/1356 | 2,022,290,587 | I_kwDOJ0Z1Ps54ibSb | 1,356 | Concurrency and multiple calls | {
"login": "enriquesouza",
"id": 1700699,
"node_id": "MDQ6VXNlcjE3MDA2OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1700699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enriquesouza",
"html_url": "https://github.com/enriquesouza",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 4 | 2023-12-03T02:01:34 | 2024-01-26T23:55:51 | 2024-01-26T23:55:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I would like to know if running Ollama and making multiple calls is possible. I would love to add a server and use it for my users.
However, when testing it through the liteLLM proxy, I saw that it waits until one process finishes before starting another.
Is it possible? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1356/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1356/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5356 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5356/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5356/comments | https://api.github.com/repos/ollama/ollama/issues/5356/events | https://github.com/ollama/ollama/issues/5356 | 2,380,063,057 | I_kwDOJ0Z1Ps6N3OFR | 5,356 | allow for num_ctx parameter in the openai API compatibility | {
"login": "PabloRMira",
"id": 36644554,
"node_id": "MDQ6VXNlcjM2NjQ0NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/36644554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PabloRMira",
"html_url": "https://github.com/PabloRMira",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 7 | 2024-06-28T10:04:14 | 2025-01-18T05:00:23 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The OpenAI compatibility module does not allow setting the context window size (num_ctx) dynamically via API call; instead, the Modelfile has to be adjusted each time we want to use another context window.
Therefore it would be great to have this in the OpenAI compatibility layer. I can also try a PR for this.
Thanks... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5356/reactions",
"total_count": 8,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/5356/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/4827 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4827/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4827/comments | https://api.github.com/repos/ollama/ollama/issues/4827/events | https://github.com/ollama/ollama/issues/4827 | 2,334,988,111 | I_kwDOJ0Z1Ps6LLRdP | 4,827 | When do you introduce the glm-4-9b model? | {
"login": "mywwq",
"id": 133221105,
"node_id": "U_kgDOB_DK8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133221105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mywwq",
"html_url": "https://github.com/mywwq",
"followers_url": "https://api.github.com/users/mywwq/followers",
... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-06-05T06:10:11 | 2024-06-06T17:33:16 | 2024-06-06T17:33:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When do you introduce the glm-4-9b model? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4827/reactions",
"total_count": 28,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4827/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2549 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2549/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2549/comments | https://api.github.com/repos/ollama/ollama/issues/2549/events | https://github.com/ollama/ollama/issues/2549 | 2,139,297,883 | I_kwDOJ0Z1Ps5_gxhb | 2,549 | Invalid characters in windows command prompt | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-16T19:35:36 | 2024-07-29T20:40:25 | 2024-07-29T20:40:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2549/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/39 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/39/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/39/comments | https://api.github.com/repos/ollama/ollama/issues/39/events | https://github.com/ollama/ollama/pull/39 | 1,790,602,456 | PR_kwDOJ0Z1Ps5UwrWd | 39 | enable metal gpu acceleration | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-06T01:12:35 | 2023-07-06T18:25:25 | 2023-07-06T01:45:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/39",
"html_url": "https://github.com/ollama/ollama/pull/39",
"diff_url": "https://github.com/ollama/ollama/pull/39.diff",
"patch_url": "https://github.com/ollama/ollama/pull/39.patch",
"merged_at": "2023-07-06T01:45:53"
} | ggml-metal.metal must be in the same directory as the ollama binary, otherwise llama.cpp will not be able to find and load it.
1. go generate llama/llama_metal.go
2. go build .
3. ./ollama serve | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/39/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/39/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/373 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/373/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/373/comments | https://api.github.com/repos/ollama/ollama/issues/373/events | https://github.com/ollama/ollama/issues/373 | 1,855,727,475 | I_kwDOJ0Z1Ps5unCdz | 373 | Can we optimize performance with the Apple M1 Max's 32-core GPU and Neural Engine? | {
"login": "pascalandy",
"id": 6694151,
"node_id": "MDQ6VXNlcjY2OTQxNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6694151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pascalandy",
"html_url": "https://github.com/pascalandy",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 14 | 2023-08-17T21:20:58 | 2024-06-28T20:35:45 | 2023-10-23T16:32:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello everyone,
I'm keen to explore ways to maximize the efficiency of my robust machines. It appears that Ollama currently utilizes only the CPU for processing.
I'm wondering if there's an option to configure it to leverage our GPU. Specifically, I'm interested in harnessing the power of the **32-core GPU** and ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/373/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/373/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5389 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5389/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5389/comments | https://api.github.com/repos/ollama/ollama/issues/5389/events | https://github.com/ollama/ollama/issues/5389 | 2,382,163,897 | I_kwDOJ0Z1Ps6N_O-5 | 5,389 | Add Basic Networking Tools to Docker Image for Health Checks | {
"login": "OlaoluwaM",
"id": 37044906,
"node_id": "MDQ6VXNlcjM3MDQ0OTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/37044906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OlaoluwaM",
"html_url": "https://github.com/OlaoluwaM",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2024-06-30T09:16:26 | 2024-10-22T20:56:08 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi! Would it be possible to include one or two network command packages like `ping`, `netcat`, or `curl` in the Docker image? This would allow folks who want to query the health check endpoint using `docker compose` or similar tools to do so without needing to create a custom `Dockerfile`.
I get that y'all are [hesi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5389/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5389/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8607 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8607/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8607/comments | https://api.github.com/repos/ollama/ollama/issues/8607/events | https://github.com/ollama/ollama/issues/8607 | 2,812,661,670 | I_kwDOJ0Z1Ps6npc-m | 8,607 | Add an ability to inject env variables to modelfile system message. | {
"login": "BotVasya",
"id": 10455417,
"node_id": "MDQ6VXNlcjEwNDU1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10455417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BotVasya",
"html_url": "https://github.com/BotVasya",
"followers_url": "https://api.github.com/users/Bot... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-01-27T10:43:33 | 2025-01-27T10:43:33 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi All.
I've realized that there is no way for Ollama models to know the current date and time when running on MS Windows. So I believe it would be useful if OS environment variables could be used in the Modelfile. Especially for date and time, it would be better if the model could obtain that data dynamically,...
"url": "https://api.github.com/repos/ollama/ollama/issues/8607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8607/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/707 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/707/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/707/comments | https://api.github.com/repos/ollama/ollama/issues/707/events | https://github.com/ollama/ollama/issues/707 | 1,927,459,911 | I_kwDOJ0Z1Ps5y4rRH | 707 | 127.0.0.1:11434: bind: address already in use | {
"login": "Nivek92",
"id": 5087400,
"node_id": "MDQ6VXNlcjUwODc0MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5087400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nivek92",
"html_url": "https://github.com/Nivek92",
"followers_url": "https://api.github.com/users/Nivek92/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 29 | 2023-10-05T06:15:02 | 2024-07-09T09:58:19 | 2023-12-04T19:39:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I run `ollama serve` I get
`Error: listen tcp 127.0.0.1:11434: bind: address already in use`
After checking what's running on the port with `sudo lsof -i :11434`
I see that ollama is already running
`ollama 2233 ollama 3u IPv4 37563 0t0 TCP localhost:11434 (LISTEN)`
I killed the process ... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/707/reactions",
"total_count": 40,
"+1": 40,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/707/timeline | null | completed | false |
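The `bind: address already in use` error in the issue above simply means another process (typically an already-running `ollama serve`, as the `lsof` output shows) is holding port 11434. The condition itself can be demonstrated with plain sockets, independent of Ollama. A minimal sketch:

```python
import errno
import socket


def port_in_use(host: str, port: int) -> bool:
    """Return True if binding host:port fails with EADDRINUSE."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind((host, port))
    except OSError as exc:
        return exc.errno == errno.EADDRINUSE
    finally:
        probe.close()
    return False


# Hold a listener open, then probe the same port from a second socket.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))           # port 0 -> the OS picks a free port
listener.listen(1)
taken_port = listener.getsockname()[1]
print(port_in_use("127.0.0.1", taken_port))  # True while listener is open
listener.close()
```

This is exactly what happens when a second `ollama serve` starts while the first (or the system service) is still listening; the fix is to stop the existing instance rather than to retry the bind.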
https://api.github.com/repos/ollama/ollama/issues/6314 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6314/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6314/comments | https://api.github.com/repos/ollama/ollama/issues/6314/events | https://github.com/ollama/ollama/issues/6314 | 2,459,722,892 | I_kwDOJ0Z1Ps6SnGSM | 6,314 | Better guidance for using `with_structured_output` with `ChatOllama` | {
"login": "GuyPaddock",
"id": 2631799,
"node_id": "MDQ6VXNlcjI2MzE3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2631799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuyPaddock",
"html_url": "https://github.com/GuyPaddock",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-08-11T19:09:17 | 2024-08-15T20:53:05 | 2024-08-15T20:53:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using `ChatOllama` from `langchain_ollama` rather than `langchain_community.chat_models`, it's posslble to use `with_structured_output`. However, there are several pitfalls that the docs hint at but don't explicitly mention, leading to issues like these:
* https://github.com/langchain-ai/langchain/discussions/2... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6314/timeline | null | not_planned | false |
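Conceptually, `with_structured_output` steers the model toward emitting JSON and then validates that JSON against a schema. The validation half can be sketched without a running model or the LangChain stack; the `Person` schema and the raw reply string below are illustrative stand-ins, not from the issue:

```python
import json
from dataclasses import dataclass, fields


@dataclass
class Person:
    name: str
    age: int


def parse_structured(raw: str, schema):
    """Parse a model's JSON reply into a dataclass, rejecting missing keys.

    A rough stand-in for what structured-output wrappers do after the
    model responds: load JSON, check required fields, drop extras.
    """
    data = json.loads(raw)
    required = {f.name for f in fields(schema)}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"model omitted required fields: {sorted(missing)}")
    return schema(**{k: v for k, v in data.items() if k in required})


reply = '{"name": "Ada", "age": 36, "note": "extra keys are dropped"}'
person = parse_structured(reply, Person)
```

Most of the pitfalls the issue alludes to live in the first half (getting the model to emit valid, complete JSON at all), which is why explicit docs guidance matters.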
https://api.github.com/repos/ollama/ollama/issues/3542 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3542/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3542/comments | https://api.github.com/repos/ollama/ollama/issues/3542/events | https://github.com/ollama/ollama/issues/3542 | 2,231,954,406 | I_kwDOJ0Z1Ps6FCOvm | 3,542 | Push of new model | {
"login": "emsi",
"id": 433383,
"node_id": "MDQ6VXNlcjQzMzM4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/433383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emsi",
"html_url": "https://github.com/emsi",
"followers_url": "https://api.github.com/users/emsi/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-08T19:40:56 | 2024-04-09T14:29:15 | 2024-04-08T20:44:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When trying to push a NEW model I get `Error: file does not exist`.
I have debugged it under mitm and it seems the Ollama server tries to HEAD the new, nonexistent model:
```
[19:36:39.502][172.17.0.1:52910] server connect registry.ollama.ai:443 (104.21.75.227:443)
172.17.0.1:52910: HEAD https:... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3542/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/812 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/812/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/812/comments | https://api.github.com/repos/ollama/ollama/issues/812/events | https://github.com/ollama/ollama/pull/812 | 1,946,248,418 | PR_kwDOJ0Z1Ps5c9JJ6 | 812 | fix: wrong format string type | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-16T23:14:51 | 2023-10-17T15:40:50 | 2023-10-17T15:40:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/812",
"html_url": "https://github.com/ollama/ollama/pull/812",
"diff_url": "https://github.com/ollama/ollama/pull/812.diff",
"patch_url": "https://github.com/ollama/ollama/pull/812.patch",
"merged_at": "2023-10-17T15:40:49"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/812/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8225 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8225/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8225/comments | https://api.github.com/repos/ollama/ollama/issues/8225/events | https://github.com/ollama/ollama/pull/8225 | 2,757,158,560 | PR_kwDOJ0Z1Ps6GIxT9 | 8,225 | README.md inclusion of a project alpaca | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [] | closed | false | null | [] | null | 0 | 2024-12-24T04:35:57 | 2024-12-24T07:25:47 | 2024-12-24T07:25:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8225",
"html_url": "https://github.com/ollama/ollama/pull/8225",
"diff_url": "https://github.com/ollama/ollama/pull/8225.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8225.patch",
"merged_at": null
Alpaca: an Ollama client application for Linux and macOS, made with GTK4 and Adwaita.
https://github.com/ollama/ollama/issues/8220#issuecomment-2560451003 | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8225/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5625 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5625/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5625/comments | https://api.github.com/repos/ollama/ollama/issues/5625/events | https://github.com/ollama/ollama/issues/5625 | 2,402,195,194 | I_kwDOJ0Z1Ps6PLpb6 | 5,625 | gpu discovery crashes on nvidia CC 2.1 GPU on windows 10 | {
"login": "snufflemarlstar-rg",
"id": 54530186,
"node_id": "MDQ6VXNlcjU0NTMwMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/54530186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snufflemarlstar-rg",
"html_url": "https://github.com/snufflemarlstar-rg",
"followers_url": "ht... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-07-11T04:00:30 | 2024-11-13T21:38:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have repeatedly installed and uninstalled ollama and searched for some advice regarding
"Warning: could not connect to a running Ollama instance" on Windows 10, but I have not found a solution.
2024/07/11 10:49:03 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVIC... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5625/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7048 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7048/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7048/comments | https://api.github.com/repos/ollama/ollama/issues/7048/events | https://github.com/ollama/ollama/issues/7048 | 2,557,089,341 | I_kwDOJ0Z1Ps6YahY9 | 7,048 | Molmo support | {
"login": "win4r",
"id": 42172631,
"node_id": "MDQ6VXNlcjQyMTcyNjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/42172631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win4r",
"html_url": "https://github.com/win4r",
"followers_url": "https://api.github.com/users/win4r/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-09-30T16:03:29 | 2024-10-03T19:47:34 | 2024-10-03T19:47:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://youtu.be/gtcOncFLMeo | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7048/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7668/comments | https://api.github.com/repos/ollama/ollama/issues/7668/events | https://github.com/ollama/ollama/issues/7668 | 2,659,793,854 | I_kwDOJ0Z1Ps6eiTv- | 7,668 | Ollama Error - {json_chunk} | {
"login": "papiche",
"id": 80590245,
"node_id": "MDQ6VXNlcjgwNTkwMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/80590245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papiche",
"html_url": "https://github.com/papiche",
"followers_url": "https://api.github.com/users/papich... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 9 | 2024-11-14T19:11:39 | 2024-12-30T03:25:19 | 2024-12-02T15:26:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Using aider.chat with ollama, got
```
litellm.APIConnectionError: Ollama Error - {'error': 'an unknown error was encountered while running the model '}
Traceback (most recent call last):
File "/home/fred/.astro/lib/python3.12/site-packages/litellm/utils.py", line 7023, in chunk_crea... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1710 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1710/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1710/comments | https://api.github.com/repos/ollama/ollama/issues/1710/events | https://github.com/ollama/ollama/issues/1710 | 2,055,779,339 | I_kwDOJ0Z1Ps56iLQL | 1,710 | How do we output ollama response to file? | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 5 | 2023-12-25T16:46:17 | 2025-01-14T21:54:24 | 2023-12-26T09:55:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If Ollama can read prompts from a file, there has to be a way to write the response to a file and save it in the working directory.
How do I achieve this?
Scenario:
ollama run dolphin-phi
>>> '/home/ai/repo/llama2.c/run.c' rewrite this code with arguments for blah... :smile:
Thanks. | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1710/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1710/timeline | null | completed | false |
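For the question above, the simplest command-line answer is shell redirection (e.g. `ollama run dolphin-phi "prompt" > out.txt`). Programmatically, Ollama's streaming `/api/generate` endpoint emits one JSON object per line, each carrying a `response` fragment and a `done` flag; saving the reply means concatenating those fragments into a file. A minimal sketch; the chunk list is hard-coded to keep it self-contained, where a real client would read lines from the HTTP stream:

```python
import json
from pathlib import Path


def save_response(chunks, path):
    """Concatenate the 'response' field of streamed JSON lines into a file."""
    with open(path, "w", encoding="utf-8") as out:
        for line in chunks:
            part = json.loads(line)
            out.write(part.get("response", ""))
            if part.get("done"):
                break


# Stand-in for the NDJSON lines a streaming /api/generate call returns.
fake_stream = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world!", "done": true}',
]
save_response(fake_stream, "answer.txt")
print(Path("answer.txt").read_text(encoding="utf-8"))  # Hello, world!
```

The same loop works unchanged whether the chunks come from a list, a socket, or an HTTP response iterator.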
https://api.github.com/repos/ollama/ollama/issues/8608 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8608/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8608/comments | https://api.github.com/repos/ollama/ollama/issues/8608/events | https://github.com/ollama/ollama/issues/8608 | 2,812,662,391 | I_kwDOJ0Z1Ps6npdJ3 | 8,608 | Panic while downloading the model | {
"login": "tchaton",
"id": 12861981,
"node_id": "MDQ6VXNlcjEyODYxOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/12861981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchaton",
"html_url": "https://github.com/tchaton",
"followers_url": "https://api.github.com/users/tchato... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 2 | 2025-01-27T10:43:53 | 2025-01-27T16:23:44 | 2025-01-27T16:23:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`/bin/ollama run llama3.1`
<img width="1243" alt="Image" src="https://github.com/user-attachments/assets/0c520af1-52d5-4371-bf89-fac7a9fe94d9" />
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8608/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/310 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/310/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/310/comments | https://api.github.com/repos/ollama/ollama/issues/310/events | https://github.com/ollama/ollama/issues/310 | 1,842,091,811 | I_kwDOJ0Z1Ps5tzBcj | 310 | generating embeddings when creating a model should use loaded llm logic | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-08-08T20:58:48 | 2023-08-15T19:12:03 | 2023-08-15T19:12:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Right now embeddings modelfile generation ignores the loaded model and loads its own, it should share the same logic | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/310/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6875 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6875/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6875/comments | https://api.github.com/repos/ollama/ollama/issues/6875/events | https://github.com/ollama/ollama/issues/6875 | 2,535,914,100 | I_kwDOJ0Z1Ps6XJvp0 | 6,875 | reader-lm - heavy hallucinations? | {
"login": "MeinDeutschkurs",
"id": 129950466,
"node_id": "U_kgDOB77jAg",
"avatar_url": "https://avatars.githubusercontent.com/u/129950466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MeinDeutschkurs",
"html_url": "https://github.com/MeinDeutschkurs",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2024-09-19T10:20:09 | 2025-01-07T23:57:39 | 2025-01-07T23:57:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Heavy hallucinations.
Is this normal for the model, or does it not know when to stop?
```
(base) jophi@10 flux-web % ollama run reader-lm
pulling manifest
pulling b730ecb20584... 100% ▕████████████████▏ 934 MB
pulling 94baea11f788... 100% ▕████████████████▏ 1... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6875/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6875/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5723 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5723/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5723/comments | https://api.github.com/repos/ollama/ollama/issues/5723/events | https://github.com/ollama/ollama/issues/5723 | 2,411,224,046 | I_kwDOJ0Z1Ps6PuFvu | 5,723 | Is ollama since 0.2.1 slower on CPU's | {
"login": "mklue",
"id": 108096783,
"node_id": "U_kgDOBnFtDw",
"avatar_url": "https://avatars.githubusercontent.com/u/108096783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mklue",
"html_url": "https://github.com/mklue",
"followers_url": "https://api.github.com/users/mklue/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-07-16T13:58:59 | 2024-08-09T23:34:17 | 2024-08-09T23:34:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Love the project. I recently updated from 0.2.1 to 0.2.2 and found the server's responses slower. I upgraded to 0.2.5 and I think it has not improved. Is it load times on the initial run?
I have a 36-core Xeon processor, 64 GB RAM, and an older Radeon GPU, so all the work is going on the CPU... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5723/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8066/comments | https://api.github.com/repos/ollama/ollama/issues/8066/events | https://github.com/ollama/ollama/issues/8066 | 2,735,122,133 | I_kwDOJ0Z1Ps6jBqbV | 8,066 | ollama 0.5.1 is detecting my NVIDIA Tesla M40, but they are not used. | {
"login": "bones0",
"id": 55978585,
"node_id": "MDQ6VXNlcjU1OTc4NTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55978585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bones0",
"html_url": "https://github.com/bones0",
"followers_url": "https://api.github.com/users/bones0/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-12-12T07:49:41 | 2024-12-20T10:24:35 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama 0.5.1 binary distribution is recognising the TESLA M40:
```
Dec 12 07:11:11 bigrig ollama[362206]: time=2024-12-12T07:11:11.731Z level=INFO source=types.go:123 msg="inference compute" id=GPU-c8f87326-45f6-945a-1a1a-63bd9a7fc262 library=cuda variant=v12 compute=8.6 driver=12.2 name="... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8066/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4734 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4734/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4734/comments | https://api.github.com/repos/ollama/ollama/issues/4734/events | https://github.com/ollama/ollama/pull/4734 | 2,326,710,810 | PR_kwDOJ0Z1Ps5xETOX | 4,734 | partial offloading: allow flash attention and disable mmap | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-30T23:39:10 | 2024-05-30T23:58:02 | 2024-05-30T23:58:02 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4734",
"html_url": "https://github.com/ollama/ollama/pull/4734",
"diff_url": "https://github.com/ollama/ollama/pull/4734.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4734.patch",
"merged_at": "2024-05-30T23:58:01"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4734/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5156 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5156/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5156/comments | https://api.github.com/repos/ollama/ollama/issues/5156/events | https://github.com/ollama/ollama/issues/5156 | 2,363,387,235 | I_kwDOJ0Z1Ps6M3m1j | 5,156 | Set the encoding for API responses | {
"login": "santclear",
"id": 1068127,
"node_id": "MDQ6VXNlcjEwNjgxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1068127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santclear",
"html_url": "https://github.com/santclear",
"followers_url": "https://api.github.com/users/sa... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 0 | 2024-06-20T02:38:14 | 2024-11-06T01:19:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is it possible to set the encoding for API responses? I have integrated Ollama into a platform where it is not possible to change the encoding from 'UTF-16 LE BOM'. Therefore, I would like Ollama to respond to API calls in this encoding. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5156/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5156/timeline | null | null | false |
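Regarding the encoding request above: Ollama's API replies are UTF-8 (the JSON default), so when a consuming platform insists on "UTF-16 LE BOM", the practical route is to re-encode at a thin proxy layer in front of the API. A minimal sketch of that re-encoding step, independent of any Ollama feature:

```python
import codecs


def to_utf16_le_bom(text: str) -> bytes:
    """Re-encode a decoded API payload as UTF-16 LE with a byte-order mark.

    Using an explicit BOM plus the 'utf-16-le' codec keeps the output
    deterministic regardless of the host machine's native endianness.
    """
    return codecs.BOM_UTF16_LE + text.encode("utf-16-le")


payload = to_utf16_le_bom('{"response": "olá"}')
print(payload[:2])  # b'\xff\xfe'  (the UTF-16 LE BOM)
```

A small reverse-proxy applying this function to response bodies would satisfy such a platform without any change to Ollama itself.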
https://api.github.com/repos/ollama/ollama/issues/7225 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7225/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7225/comments | https://api.github.com/repos/ollama/ollama/issues/7225/events | https://github.com/ollama/ollama/issues/7225 | 2,591,485,913 | I_kwDOJ0Z1Ps6adu_Z | 7,225 | ollama parallel | {
"login": "jamalibrahimsec",
"id": 185197390,
"node_id": "U_kgDOCwnjTg",
"avatar_url": "https://avatars.githubusercontent.com/u/185197390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamalibrahimsec",
"html_url": "https://github.com/jamalibrahimsec",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396220,
"node_id": ... | closed | false | null | [] | null | 19 | 2024-10-16T10:53:42 | 2024-11-12T16:35:15 | 2024-10-17T18:42:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello
I am trying to run Ollama on an instance that has 40 CPU cores.
What I understood is that the max-models environment variable permits this, but there was no clear explanation of how it would do this with the CPU (knowing that I have enough RAM).
If you can explain to me how Ollama manages that with the CPU, it would be... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7225/timeline | null | completed | false |
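For the concurrency question in issue 7225 above: Ollama exposes environment variables that control how many models stay resident and how many requests each loaded model serves in parallel (covered in Ollama's FAQ documentation). A minimal setup sketch; the values are illustrative, not recommendations:

```shell
# Illustrative values; tune to your core count and available RAM.
export OLLAMA_MAX_LOADED_MODELS=2   # models kept in memory simultaneously
export OLLAMA_NUM_PARALLEL=4        # concurrent requests per loaded model
# ollama serve                      # the server reads these at startup
```

These must be present in the environment of the `ollama serve` process itself (e.g. via a systemd override on Linux), not just in the client's shell.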
https://api.github.com/repos/ollama/ollama/issues/6210 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6210/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6210/comments | https://api.github.com/repos/ollama/ollama/issues/6210/events | https://github.com/ollama/ollama/issues/6210 | 2,451,754,458 | I_kwDOJ0Z1Ps6SIs3a | 6,210 | [question] Do you plan to upstream patches for llama.cpp? | {
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivic... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-08-06T21:57:06 | 2024-08-06T21:57:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
We would like to use the packaged version of llama.cpp (to simplify packaging on FreeBSD) but patches need to be upstreamed first.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.4 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6210/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6210/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3031 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3031/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3031/comments | https://api.github.com/repos/ollama/ollama/issues/3031/events | https://github.com/ollama/ollama/issues/3031 | 2,177,465,161 | I_kwDOJ0Z1Ps6ByXtJ | 3,031 | Unstopped empty lines when I say "hi" to "vicuna" model (temperature: 0.0) | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-03-10T00:45:49 | 2024-03-11T20:31:23 | 2024-03-11T20:31:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Unstopped empty lines when I say "hi" to "vicuna" model (temperature: 0.0) | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3031/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1908 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1908/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1908/comments | https://api.github.com/repos/ollama/ollama/issues/1908/events | https://github.com/ollama/ollama/pull/1908 | 2,075,180,560 | PR_kwDOJ0Z1Ps5jumE1 | 1,908 | Rebase | {
"login": "kris-hansen",
"id": 8484582,
"node_id": "MDQ6VXNlcjg0ODQ1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kris-hansen",
"html_url": "https://github.com/kris-hansen",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-01-10T20:48:34 | 2024-01-25T17:36:27 | 2024-01-10T20:48:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1908",
"html_url": "https://github.com/ollama/ollama/pull/1908",
"diff_url": "https://github.com/ollama/ollama/pull/1908.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1908.patch",
"merged_at": null
} | - Rebase from upstream
- That is all | {
"login": "kris-hansen",
"id": 8484582,
"node_id": "MDQ6VXNlcjg0ODQ1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kris-hansen",
"html_url": "https://github.com/kris-hansen",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1908/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8459 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8459/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8459/comments | https://api.github.com/repos/ollama/ollama/issues/8459/events | https://github.com/ollama/ollama/issues/8459 | 2,793,577,758 | I_kwDOJ0Z1Ps6mgp0e | 8,459 | MLX Community models for Macs | {
"login": "VistritPandey",
"id": 56611775,
"node_id": "MDQ6VXNlcjU2NjExNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/56611775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VistritPandey",
"html_url": "https://github.com/VistritPandey",
"followers_url": "https://api.githu... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2025-01-16T19:18:47 | 2025-01-19T07:37:58 | 2025-01-19T07:37:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | MLX Community has many models built specifically for Macs; they are optimized for Apple silicon and typically run faster than their standard counterparts.
Link: https://huggingface.co/mlx-community | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8459/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/640 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/640/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/640/comments | https://api.github.com/repos/ollama/ollama/issues/640/events | https://github.com/ollama/ollama/pull/640 | 1,918,318,031 | PR_kwDOJ0Z1Ps5bfK0O | 640 | remove list from interactive mode | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-28T21:48:54 | 2023-10-20T16:44:25 | 2023-09-28T21:49:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/640",
"html_url": "https://github.com/ollama/ollama/pull/640",
"diff_url": "https://github.com/ollama/ollama/pull/640.diff",
"patch_url": "https://github.com/ollama/ollama/pull/640.patch",
"merged_at": null
  } | List in interactive mode doesn't make sense since you can't switch models in the REPL | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/640/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7496 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7496/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7496/comments | https://api.github.com/repos/ollama/ollama/issues/7496/events | https://github.com/ollama/ollama/pull/7496 | 2,633,542,432 | PR_kwDOJ0Z1Ps6A1kvI | 7,496 | CI: fix matrix wiring | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-11-04T18:26:43 | 2024-11-04T18:48:38 | 2024-11-04T18:48:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7496",
"html_url": "https://github.com/ollama/ollama/pull/7496",
"diff_url": "https://github.com/ollama/ollama/pull/7496.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7496.patch",
"merged_at": "2024-11-04T18:48:35"
  } | Matrix strategies can't use env vars, so unwind the prior changes to DRY the definitions out a little.
Fixes release CI error:
```
[Invalid workflow file: .github/workflows/release.yaml#L166](https://github.com/ollama/ollama/actions/runs/11670233094/workflow)
The workflow is not valid. .github/workflows/release.... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7496/timeline | null | null | true |
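The constraint behind PR 7496 above is a GitHub Actions rule: the `env` context is not available inside `strategy`, so matrix values must be spelled out inline (or passed in from a prior job's outputs). A hedged sketch of the pattern; the `cuda` key and version strings are illustrative, not the actual release workflow:

```yaml
jobs:
  build:
    strategy:
      matrix:
        # Writing ${{ env.CUDA_VERSIONS }} here fails workflow validation,
        # because the env context is unavailable when the matrix expands.
        # Values therefore have to be inlined:
        cuda: ["11.3", "12.4"]
```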
https://api.github.com/repos/ollama/ollama/issues/4915 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4915/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4915/comments | https://api.github.com/repos/ollama/ollama/issues/4915/events | https://github.com/ollama/ollama/issues/4915 | 2,340,902,046 | I_kwDOJ0Z1Ps6Lh1Se | 4,915 | need cogvlm2-llama3-chinese-chat | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-06-07T17:41:11 | 2024-07-20T14:29:35 | 2024-07-20T14:29:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B
thanks | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4915/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6774 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6774/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6774/comments | https://api.github.com/repos/ollama/ollama/issues/6774/events | https://github.com/ollama/ollama/issues/6774 | 2,522,209,703 | I_kwDOJ0Z1Ps6WVd2n | 6,774 | Add Tokenizer functionality to API | {
"login": "Master-Pr0grammer",
"id": 147747206,
"node_id": "U_kgDOCM5xhg",
"avatar_url": "https://avatars.githubusercontent.com/u/147747206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Master-Pr0grammer",
"html_url": "https://github.com/Master-Pr0grammer",
"followers_url": "https://api... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 1 | 2024-09-12T12:04:44 | 2024-11-06T00:26:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | Having access to the model's tokenizer is extremely useful for counting tokens and managing the context window. In a lot of cases it's essential to get an LLM implementation to work properly. The model already has the tokenizer loaded, and Ollama's backend, llama.cpp, already has an interface for the tokenizer, so it sh... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6774/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6774/timeline | null | null | false |
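Until a tokenize endpoint like the one requested above exists, a common stopgap is to read the token counters that Ollama's final (non-streaming) `/api/generate` responses already include (`prompt_eval_count` and `eval_count`). A minimal sketch; the helper name and the sample numbers are mine:

```python
def token_usage(resp: dict) -> dict:
    """Summarize token counts from an Ollama /api/generate response dict.

    This measures tokens only after the fact, so it is not a substitute
    for a real tokenize endpoint, but it covers basic context bookkeeping.
    """
    prompt = resp.get("prompt_eval_count", 0)
    completion = resp.get("eval_count", 0)
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": prompt + completion,
    }

# Hypothetical response fragment containing the relevant fields.
sample = {"model": "llama3", "prompt_eval_count": 26, "eval_count": 298}
usage = token_usage(sample)
```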
https://api.github.com/repos/ollama/ollama/issues/2193 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2193/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2193/comments | https://api.github.com/repos/ollama/ollama/issues/2193/events | https://github.com/ollama/ollama/issues/2193 | 2,101,110,822 | I_kwDOJ0Z1Ps59PGgm | 2,193 | :duck: Publish `DuckDB-NSQL-7B` on ollama | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [] | closed | false | null | [] | null | 5 | 2024-01-25T20:22:56 | 2024-01-26T22:47:33 | 2024-01-25T22:46:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :grey_question: About
[`DuckDB-NSQL-7B`](https://motherduck.com/blog/duckdb-text2sql-llm/), A LLM for [duckdb](https://github.com/duckdb/duckdb) has been released.
It would be very useful to add it to `ollama` so anyone could build new experiences on top of it.
# :bookmark: Resources
- [AI That Quacks: In... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2193/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7193 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7193/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7193/comments | https://api.github.com/repos/ollama/ollama/issues/7193/events | https://github.com/ollama/ollama/pull/7193 | 2,584,144,445 | PR_kwDOJ0Z1Ps5-dETZ | 7,193 | Add missing BF16 tensor type. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-10-13T17:46:23 | 2024-10-15T00:06:35 | 2024-10-15T00:06:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7193",
"html_url": "https://github.com/ollama/ollama/pull/7193",
"diff_url": "https://github.com/ollama/ollama/pull/7193.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7193.patch",
"merged_at": "2024-10-15T00:06:35"
} | Models with BF16 tensors are not imported because the typeSize is 0.
Fixes: https://github.com/ollama/ollama/issues/7188 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7193/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7193/timeline | null | null | true |
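The bug fixed by PR 7193 above comes down to a bytes-per-element lookup: BF16, like F16, stores two bytes per element, and a missing table entry made the computed tensor size zero, so imports were rejected. A Python sketch of the idea (the actual fix lives in Ollama's Go code; the names here are illustrative):

```python
# Bytes per element for a few unquantized GGUF tensor types;
# BF16 was the missing entry that produced a typeSize of 0.
TYPE_SIZE = {"F32": 4, "F16": 2, "BF16": 2}

def tensor_bytes(dtype: str, n_elements: int) -> int:
    """Size in bytes of an unquantized tensor; 0 signals an unknown type."""
    return TYPE_SIZE.get(dtype, 0) * n_elements

size = tensor_bytes("BF16", 1024)
```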
https://api.github.com/repos/ollama/ollama/issues/2502 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2502/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2502/comments | https://api.github.com/repos/ollama/ollama/issues/2502/events | https://github.com/ollama/ollama/issues/2502 | 2,135,209,522 | I_kwDOJ0Z1Ps5_RLYy | 2,502 | Ollama fails to detect gpu on prerelease 0.1.25 | {
"login": "abysssol",
"id": 76763323,
"node_id": "MDQ6VXNlcjc2NzYzMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/76763323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abysssol",
"html_url": "https://github.com/abysssol",
"followers_url": "https://api.github.com/users/aby... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 22 | 2024-02-14T21:10:01 | 2024-05-11T10:43:11 | 2024-02-17T01:23:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm working to update the ollama package in [nixpkgs](https://github.com/NixOS/nixpkgs), and release 0.1.24 works as expected ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.24), [build here](https://github.com/abysssol/ollama-flake/tree/1.4.1)), but the new prerelease 0.1.25 fails to detect th... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2502/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7917 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7917/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7917/comments | https://api.github.com/repos/ollama/ollama/issues/7917/events | https://github.com/ollama/ollama/issues/7917 | 2,715,108,025 | I_kwDOJ0Z1Ps6h1UK5 | 7,917 | option to change the model loading device (CPU/GPU) | {
"login": "ansilmbabl",
"id": 86063895,
"node_id": "MDQ6VXNlcjg2MDYzODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/86063895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ansilmbabl",
"html_url": "https://github.com/ansilmbabl",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-12-03T13:58:18 | 2024-12-14T15:38:53 | 2024-12-14T15:38:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | It would be great if we could specify the device (CPU/GPU) on which the model is to be loaded. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7917/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7218 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7218/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7218/comments | https://api.github.com/repos/ollama/ollama/issues/7218/events | https://github.com/ollama/ollama/pull/7218 | 2,590,338,878 | PR_kwDOJ0Z1Ps5-wP0f | 7,218 | Update README.md | {
"login": "anan1213095357",
"id": 43770875,
"node_id": "MDQ6VXNlcjQzNzcwODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43770875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan1213095357",
"html_url": "https://github.com/anan1213095357",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 0 | 2024-10-16T01:27:27 | 2024-10-16T05:23:52 | 2024-10-16T05:23:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7218",
"html_url": "https://github.com/ollama/ollama/pull/7218",
"diff_url": "https://github.com/ollama/ollama/pull/7218.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7218.patch",
"merged_at": null
} | Modern and easy-to-use multi-platform client for Ollama | {
"login": "anan1213095357",
"id": 43770875,
"node_id": "MDQ6VXNlcjQzNzcwODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43770875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anan1213095357",
"html_url": "https://github.com/anan1213095357",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7218/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5266 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5266/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5266/comments | https://api.github.com/repos/ollama/ollama/issues/5266/events | https://github.com/ollama/ollama/issues/5266 | 2,371,568,361 | I_kwDOJ0Z1Ps6NW0Lp | 5,266 | On Windows 11, ollama_llama_server.exe gets run in "Efficiency Mode", which makes responses extremely slow | {
"login": "fengbangyao",
"id": 135579315,
"node_id": "U_kgDOCBTGsw",
"avatar_url": "https://avatars.githubusercontent.com/u/135579315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fengbangyao",
"html_url": "https://github.com/fengbangyao",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-25T03:40:24 | 2024-07-05T20:14:09 | 2024-07-05T20:14:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On Windows 11 with a 13th-gen Intel CPU (I currently have an i5-13490F and an i5-13600; it seems related to the P-core/E-core split, and it does not reproduce on CPUs without E-cores), after submitting a prompt to Ollama, CPU usage sits at around 40% but the question is never answered, or one character arrives every few minutes. After disabling Efficiency Mode with Process Lasso, responses are normal. Could an environment variable be added to control whether Efficiency Mode is used?
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.1.45.0 and earlier | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5266/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1354 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1354/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1354/comments | https://api.github.com/repos/ollama/ollama/issues/1354/events | https://github.com/ollama/ollama/issues/1354 | 2,022,254,665 | I_kwDOJ0Z1Ps54iShJ | 1,354 | Llama 2 is listed as open source | {
"login": "raphj",
"id": 3817365,
"node_id": "MDQ6VXNlcjM4MTczNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3817365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raphj",
"html_url": "https://github.com/raphj",
"followers_url": "https://api.github.com/users/raphj/follower... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 1 | 2023-12-02T23:37:36 | 2024-02-20T07:21:00 | 2024-02-20T01:15:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
The readme says "Here are some example open-source models that can be downloaded:" and lists Llama 2. But it's not, notably because it forbids usage for more than 700M users.
You might want to phrase this differently. Ideas:
- remove "open source" from this sentence and
- possibly add a column "Open sou... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1354/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1354/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4707 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4707/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4707/comments | https://api.github.com/repos/ollama/ollama/issues/4707/events | https://github.com/ollama/ollama/pull/4707 | 2,324,038,474 | PR_kwDOJ0Z1Ps5w7Jmc | 4,707 | Draft for Multi-Language Modelfile Creation | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-05-29T18:59:42 | 2024-07-11T20:07:17 | 2024-07-11T20:07:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4707",
"html_url": "https://github.com/ollama/ollama/pull/4707",
"diff_url": "https://github.com/ollama/ollama/pull/4707.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4707.patch",
"merged_at": null
} | Allow support for non-English Modelfile names | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4707/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/719 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/719/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/719/comments | https://api.github.com/repos/ollama/ollama/issues/719/events | https://github.com/ollama/ollama/issues/719 | 1,930,419,605 | I_kwDOJ0Z1Ps5zD92V | 719 | Question -> Request: Mac acceleration for https://hub.docker.com/r/ollama/ollama | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 14 | 2023-10-06T15:21:25 | 2024-06-28T20:47:30 | 2023-10-19T22:12:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama continues to be one of the most user-friendly local model serving libraries out there.
https://hub.docker.com/r/ollama/ollama has great instructions for attaining GPU optimizations.
I am wondering, is there a similar optimization attainable for Mac Metal?
From reading around, it _seems_ there isn't, but... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/719/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/719/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/846 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/846/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/846/comments | https://api.github.com/repos/ollama/ollama/issues/846/events | https://github.com/ollama/ollama/issues/846 | 1,952,948,233 | I_kwDOJ0Z1Ps50Z6AJ | 846 | Can't access model information in fresh (botched?) Linux (Ubuntu 22.04 LTS) install | {
"login": "TM-hub",
"id": 42901776,
"node_id": "MDQ6VXNlcjQyOTAxNzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/42901776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TM-hub",
"html_url": "https://github.com/TM-hub",
"followers_url": "https://api.github.com/users/TM-hub/fo... | [] | closed | false | null | [] | null | 2 | 2023-10-19T19:45:46 | 2023-10-19T22:25:50 | 2023-10-19T22:07:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Step1, run linux install script in terminal window
$curl https://ollama.ai/install.sh | sh
Step 2, install ollama model ***in the same terminal window***
$ollama run mistral
Model is installed to /usr/share/ollama/.ollama owned by ollama:ollama.
>>>/show template
Fails with message ~"can't access ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/846/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/361 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/361/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/361/comments | https://api.github.com/repos/ollama/ollama/issues/361/events | https://github.com/ollama/ollama/issues/361 | 1,853,794,045 | I_kwDOJ0Z1Ps5ufqb9 | 361 | `ollama pull` doesn't start mac app if it's not running | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg... | closed | false | null | [] | null | 2 | 2023-08-16T19:30:54 | 2023-08-28T15:07:16 | 2023-08-28T15:07:15 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Calling `ollama run` will start the Mac app if it's not running and if the `ollama` is contained in `Ollama.app`, but `ollama pull` doesn't seem to do this | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/361/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7360 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7360/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7360/comments | https://api.github.com/repos/ollama/ollama/issues/7360/events | https://github.com/ollama/ollama/pull/7360 | 2,614,587,294 | PR_kwDOJ0Z1Ps5_7bwD | 7,360 | Be quiet when redirecting output | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-10-25T16:34:14 | 2024-11-22T16:04:58 | 2024-11-22T16:04:54 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7360",
"html_url": "https://github.com/ollama/ollama/pull/7360",
"diff_url": "https://github.com/ollama/ollama/pull/7360.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7360.patch",
"merged_at": "2024-11-22T16:04:54"
} | This avoids emitting the progress indicators to stderr, and the interactive prompts to the output file or pipe. Running "ollama run model > out.txt" now exits immediately, and "echo hello | ollama run model > out.txt" produces zero stderr output and a typical response in out.txt
Example output from the echo pipe sc... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7360/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7360/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5084 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5084/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5084/comments | https://api.github.com/repos/ollama/ollama/issues/5084/events | https://github.com/ollama/ollama/pull/5084 | 2,355,843,032 | PR_kwDOJ0Z1Ps5ynTWz | 5,084 | Set the default timeout to 600 seconds | {
"login": "slavonnet",
"id": 9463626,
"node_id": "MDQ6VXNlcjk0NjM2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9463626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slavonnet",
"html_url": "https://github.com/slavonnet",
"followers_url": "https://api.github.com/users/sl... | [] | closed | false | null | [] | null | 3 | 2024-06-16T15:38:09 | 2024-11-22T17:55:19 | 2024-11-22T17:55:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5084",
"html_url": "https://github.com/ollama/ollama/pull/5084",
"diff_url": "https://github.com/ollama/ollama/pull/5084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5084.patch",
"merged_at": null
} | Since you have llama.cpp the default timeout is 600 seconds, then we also set 600 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5084/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1058 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1058/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1058/comments | https://api.github.com/repos/ollama/ollama/issues/1058/events | https://github.com/ollama/ollama/issues/1058 | 1,985,967,442 | I_kwDOJ0Z1Ps52X3VS | 1,058 | Examples deploy Sagemaker AWS | {
"login": "DimIsaev",
"id": 11172642,
"node_id": "MDQ6VXNlcjExMTcyNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11172642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DimIsaev",
"html_url": "https://github.com/DimIsaev",
"followers_url": "https://api.github.com/users/Dim... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396220,
... | closed | false | null | [] | null | 3 | 2023-11-09T16:13:49 | 2024-08-25T19:48:49 | 2024-08-25T19:48:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | is there an example of deploy a model using an ollama in a Endpoint SageMaker AWS ? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1058/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7056 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7056/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7056/comments | https://api.github.com/repos/ollama/ollama/issues/7056/events | https://github.com/ollama/ollama/issues/7056 | 2,558,303,172 | I_kwDOJ0Z1Ps6YfJvE | 7,056 | Undefined variable in this code file: convert/tokenizer_spm.go | {
"login": "vignesh1507",
"id": 143084478,
"node_id": "U_kgDOCIdLvg",
"avatar_url": "https://avatars.githubusercontent.com/u/143084478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vignesh1507",
"html_url": "https://github.com/vignesh1507",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-10-01T06:28:12 | 2024-10-03T15:52:35 | 2024-10-03T15:52:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The code doesn't currently define tokenTypeUserDefined, which will cause a compilation error.
How to fix?
Add a constant declaration to fix the issue, for example:
`const tokenTypeUserDefined = int32(1)
`
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
3.1 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7056/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4418 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4418/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4418/comments | https://api.github.com/repos/ollama/ollama/issues/4418/events | https://github.com/ollama/ollama/issues/4418 | 2,294,182,815 | I_kwDOJ0Z1Ps6IvnOf | 4,418 | [Contribution] ZSH Completion script | {
"login": "obeone",
"id": 2248719,
"node_id": "MDQ6VXNlcjIyNDg3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2248719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/obeone",
"html_url": "https://github.com/obeone",
"followers_url": "https://api.github.com/users/obeone/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-14T01:51:34 | 2024-05-14T06:23:03 | 2024-05-14T06:23:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
Here, you can find a `ollama` ZSH completion script. Feel free to add it in the project if you want !
https://gist.github.com/obeone/9313811fd61a7cbb843e0001a4434c58 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4418/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4418/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8423 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8423/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8423/comments | https://api.github.com/repos/ollama/ollama/issues/8423/events | https://github.com/ollama/ollama/issues/8423 | 2,787,549,208 | I_kwDOJ0Z1Ps6mJqAY | 8,423 | save with OLLAMA_MODELS set doesn't work anymore in 0.5.5 | {
"login": "sammyf",
"id": 42468608,
"node_id": "MDQ6VXNlcjQyNDY4NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42468608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammyf",
"html_url": "https://github.com/sammyf",
"followers_url": "https://api.github.com/users/sammyf/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2025-01-14T15:46:45 | 2025-01-30T10:08:57 | 2025-01-15T23:54:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On Archlinux (with the latest updates), Ollama 0.5.5
Worked with prior version just a few hours ago
```
$ ollama run llama3.2-abliterated:1b_Q8
> /set parameter num_ctx 8192
> Set parameter 'num_ctx' to '8192'
> >>> /save llama3.2-abliterated:1b_Q8_8k
> error: The model name 'llama3.2-ablitera... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8423/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2330 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2330/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2330/comments | https://api.github.com/repos/ollama/ollama/issues/2330/events | https://github.com/ollama/ollama/pull/2330 | 2,115,138,634 | PR_kwDOJ0Z1Ps5l18-1 | 2,330 | Add fast server stop | {
"login": "alpe",
"id": 28003,
"node_id": "MDQ6VXNlcjI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/28003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alpe",
"html_url": "https://github.com/alpe",
"followers_url": "https://api.github.com/users/alpe/followers",
"follo... | [] | closed | false | null | [] | null | 1 | 2024-02-02T15:04:37 | 2024-05-06T22:52:49 | 2024-05-06T22:52:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2330",
"html_url": "https://github.com/ollama/ollama/pull/2330",
"diff_url": "https://github.com/ollama/ollama/pull/2330.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2330.patch",
"merged_at": null
} | Resolves #2052
First sigterm for a graceful shutdown, second to kill the server.
There are no automated tests for this. Steps to reproduce:
in 1st terminal:
```sh
# go build .
./ollama serve
```
in 2nd terminal
```
./ollama run llama2
```
then start a request that takes some seconds: `long response ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2330/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6250 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6250/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6250/comments | https://api.github.com/repos/ollama/ollama/issues/6250/events | https://github.com/ollama/ollama/issues/6250 | 2,454,703,337 | I_kwDOJ0Z1Ps6ST8zp | 6,250 | Running the glm4-9b model, after long conversations it occasionally replies with GGGGGGG | {
"login": "MdcGIt",
"id": 26782023,
"node_id": "MDQ6VXNlcjI2NzgyMDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/26782023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MdcGIt",
"html_url": "https://github.com/MdcGIt",
"followers_url": "https://api.github.com/users/MdcGIt/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 6 | 2024-08-08T01:55:27 | 2024-09-30T23:00:00 | 2024-09-30T23:00:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running the glm4-9b model, after long conversations it occasionally replies with GGGGGGG
The GPU information is as follows:

### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.3.0 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6250/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3162 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3162/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3162/comments | https://api.github.com/repos/ollama/ollama/issues/3162/events | https://github.com/ollama/ollama/issues/3162 | 2,187,848,031 | I_kwDOJ0Z1Ps6CZ-lf | 3,162 | Possibility to remove max retries exceeded when downloading models from a slow connection | {
"login": "DaRetriever",
"id": 163505097,
"node_id": "U_kgDOCb7jyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/163505097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaRetriever",
"html_url": "https://github.com/DaRetriever",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 9 | 2024-03-15T06:56:44 | 2025-01-25T13:54:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I'm trying to download Mixtral (26Gb), but every 120 mb an error pops up stating:
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/e9/e9e56e8bb5f0fcd4860675e6837a8f6a94e659f5fa7dce6a1076279336320f2... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3162/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3162/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3838 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3838/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3838/comments | https://api.github.com/repos/ollama/ollama/issues/3838/events | https://github.com/ollama/ollama/issues/3838 | 2,257,943,149 | I_kwDOJ0Z1Ps6GlXpt | 3,838 | On Archlinux and AMD Radeon RX 6800S ollama falls back to CPU | {
"login": "arael",
"id": 587072,
"node_id": "MDQ6VXNlcjU4NzA3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/587072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arael",
"html_url": "https://github.com/arael",
"followers_url": "https://api.github.com/users/arael/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-23T04:55:50 | 2024-05-01T18:00:04 | 2024-04-24T16:06:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am not able to use my AMD Radeon RX 6800S with ollama. When I try, it falls back to CPU. I have installed tried both ollama and a fresh install with the scripts/install.sh from the git repo. The result is the same. Please help me.
### Command outputs and logs
Here is the output of rocm... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3838/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6462 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6462/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6462/comments | https://api.github.com/repos/ollama/ollama/issues/6462/events | https://github.com/ollama/ollama/issues/6462 | 2,480,563,237 | I_kwDOJ0Z1Ps6T2mQl | 6,462 | Make tool call response compatible with OpenAI format | {
"login": "eliasfroehner",
"id": 11318229,
"node_id": "MDQ6VXNlcjExMzE4MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11318229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasfroehner",
"html_url": "https://github.com/eliasfroehner",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-08-22T11:37:16 | 2024-08-22T14:03:35 | 2024-08-22T14:03:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Description
Currently, the response for a tool call looks like this:
```json
{
"model": "llama3.1",
"created_at": "2024-07-22T20:33:28.123648Z",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "get_current_weather",
... | {
"login": "eliasfroehner",
"id": 11318229,
"node_id": "MDQ6VXNlcjExMzE4MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11318229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasfroehner",
"html_url": "https://github.com/eliasfroehner",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6462/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6329 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6329/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6329/comments | https://api.github.com/repos/ollama/ollama/issues/6329/events | https://github.com/ollama/ollama/issues/6329 | 2,461,955,923 | I_kwDOJ0Z1Ps6SvndT | 6,329 | Change log for updated models on website? | {
"login": "coodoo",
"id": 325936,
"node_id": "MDQ6VXNlcjMyNTkzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/325936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coodoo",
"html_url": "https://github.com/coodoo",
"followers_url": "https://api.github.com/users/coodoo/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-08-12T21:52:06 | 2024-08-12T21:52:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Seemed for the past few days llama3.1 models on the website had been constantly updated every couple of hours, wondering is there an changelog to see what's changed (specifically which size of models were updated)?
Ps. attached image was captured just now, indicating the model was updated about an hour ago.
![Sc... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6329/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6329/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1572 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1572/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1572/comments | https://api.github.com/repos/ollama/ollama/issues/1572/events | https://github.com/ollama/ollama/issues/1572 | 2,045,307,774 | I_kwDOJ0Z1Ps556Ot- | 1,572 | Embeddings response too slow | {
"login": "perezjnv",
"id": 18506353,
"node_id": "MDQ6VXNlcjE4NTA2MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/18506353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perezjnv",
"html_url": "https://github.com/perezjnv",
"followers_url": "https://api.github.com/users/per... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWj... | closed | false | null | [] | null | 7 | 2023-12-17T17:44:57 | 2024-11-30T22:16:23 | 2024-05-06T23:43:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I did an ingest with a CSV for fine tuning in a model called2-7b in .bin format, that worked well for me but when using ollma with a Modelfile that implements it the responses are too slow, any suggestions? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1572/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1994 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1994/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1994/comments | https://api.github.com/repos/ollama/ollama/issues/1994/events | https://github.com/ollama/ollama/issues/1994 | 2,080,887,144 | I_kwDOJ0Z1Ps58B9Fo | 1,994 | Ollama requests hangs after about 20 requests and needs to be restarted | {
"login": "Shajan",
"id": 1411014,
"node_id": "MDQ6VXNlcjE0MTEwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1411014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shajan",
"html_url": "https://github.com/Shajan",
"followers_url": "https://api.github.com/users/Shajan/foll... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 5 | 2024-01-14T20:32:40 | 2024-01-16T21:13:28 | 2024-01-16T21:13:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Request hangs after about 20 requests.
Ollama version : 0.1.20, Linux with T4 GPU as well as Mac M2.
All subsequent `api/generate` request hangs for all models. The only way to resume is to restart ollama `sudo systemctl restart ollama`.
Repro
```python
import requests
def query(session):
url = "http:/... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1994/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1994/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1090 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1090/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1090/comments | https://api.github.com/repos/ollama/ollama/issues/1090/events | https://github.com/ollama/ollama/issues/1090 | 1,989,036,074 | I_kwDOJ0Z1Ps52jkgq | 1,090 | Suggestions for instruction clarifications for running in docker in Windows. | {
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavi... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2023-11-11T16:45:13 | 2024-03-12T15:41:40 | 2024-03-12T15:41:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I just got this installed in Windows using Docker.
The instructions were a bit unclear since the instructions for installing the Nvidia stuff is Linux based. I mistakenly thought I needed to run the container and install all the Nvidia stuff in the container. . It might help other people like me who aren't so clever... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1090/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4546 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4546/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4546/comments | https://api.github.com/repos/ollama/ollama/issues/4546/events | https://github.com/ollama/ollama/pull/4546 | 2,306,889,495 | PR_kwDOJ0Z1Ps5wAgqj | 4,546 | tidy intermediate blobs | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2024-05-20T22:15:34 | 2024-06-05T20:13:15 | 2024-05-20T22:22:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4546",
"html_url": "https://github.com/ollama/ollama/pull/4546",
"diff_url": "https://github.com/ollama/ollama/pull/4546.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4546.patch",
"merged_at": null
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4546/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1137 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1137/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1137/comments | https://api.github.com/repos/ollama/ollama/issues/1137/events | https://github.com/ollama/ollama/issues/1137 | 1,994,631,219 | I_kwDOJ0Z1Ps5246gz | 1,137 | The ollama parameters in the modelfile do not support num_beams | {
"login": "garth-waters",
"id": 85235369,
"node_id": "MDQ6VXNlcjg1MjM1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/85235369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garth-waters",
"html_url": "https://github.com/garth-waters",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-11-15T11:53:31 | 2024-12-23T01:09:42 | 2024-12-23T01:09:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thanks very much for a great product
I am using a modelfile to create a SqlDecoder2 custom model.
The num_beams parameter improves the accuracy of the model by a lot.
However, this parameter is not yet supported.
Is there any intention of including this parameter in the future?
Many Thanks
Garth | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1137/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6130 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6130/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6130/comments | https://api.github.com/repos/ollama/ollama/issues/6130/events | https://github.com/ollama/ollama/pull/6130 | 2,443,589,943 | PR_kwDOJ0Z1Ps53LrdK | 6,130 | feat(run): Add a --quiet flag to the run command to disable progress | {
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 3 | 2024-08-01T22:26:02 | 2024-11-22T17:06:44 | 2024-11-22T17:05:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6130",
"html_url": "https://github.com/ollama/ollama/pull/6130",
"diff_url": "https://github.com/ollama/ollama/pull/6130.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6130.patch",
"merged_at": null
} | ## Description
The --quiet flag will disable all progress control characters so that if the output of stderr and stdout are combined, the control characters will not be visible.
https://github.com/ollama/ollama/issues/6120
## Testing
Since there is not currently a unit test suite for the `cmd` package, I di... | {
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6130/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3971 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3971/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3971/comments | https://api.github.com/repos/ollama/ollama/issues/3971/events | https://github.com/ollama/ollama/issues/3971 | 2,266,848,281 | I_kwDOJ0Z1Ps6HHVwZ | 3,971 | support for openelm apple | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-04-27T05:24:39 | 2024-05-02T18:16:01 | 2024-05-02T18:16:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/apple/OpenELM
Apples openelm with small models do this can be run on a low power on device ai. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3971/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3971/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7718 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7718/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7718/comments | https://api.github.com/repos/ollama/ollama/issues/7718/events | https://github.com/ollama/ollama/pull/7718 | 2,666,994,816 | PR_kwDOJ0Z1Ps6CL5Jz | 7,718 | readme: improve Community Integrations section | {
"login": "vinhnx",
"id": 1097578,
"node_id": "MDQ6VXNlcjEwOTc1Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1097578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinhnx",
"html_url": "https://github.com/vinhnx",
"followers_url": "https://api.github.com/users/vinhnx/foll... | [] | closed | false | null | [] | null | 0 | 2024-11-18T03:28:55 | 2024-11-18T03:54:12 | 2024-11-18T03:30:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7718",
"html_url": "https://github.com/ollama/ollama/pull/7718",
"diff_url": "https://github.com/ollama/ollama/pull/7718.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7718.patch",
"merged_at": "2024-11-18T03:30:22"
} | * Fix README link opening/closed bracket for Reddit Rate link
* Fix and improve README link for VT project.
* Thank you, Ollama team! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7718/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8395 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8395/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8395/comments | https://api.github.com/repos/ollama/ollama/issues/8395/events | https://github.com/ollama/ollama/issues/8395 | 2,782,337,798 | I_kwDOJ0Z1Ps6l1xsG | 8,395 | Empty response via API | {
"login": "gl2007",
"id": 4097227,
"node_id": "MDQ6VXNlcjQwOTcyMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4097227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gl2007",
"html_url": "https://github.com/gl2007",
"followers_url": "https://api.github.com/users/gl2007/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2025-01-12T07:54:31 | 2025-01-14T21:08:23 | 2025-01-13T19:24:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hosted ollama via 0.0.0.0 in my server in my LAN and "curl <ip>:11434 " returns ollama is running. Also, when I run ollama run<model> in cmd in that machine, I am able to see proper responses.
However, when I run an API request via Postman, I get this empty response, irrespective of the mo... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8395/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1489 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1489/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1489/comments | https://api.github.com/repos/ollama/ollama/issues/1489/events | https://github.com/ollama/ollama/issues/1489 | 2,038,514,181 | I_kwDOJ0Z1Ps55gUIF | 1,489 | Request for Contributor.md | {
"login": "aravindputrevu",
"id": 599694,
"node_id": "MDQ6VXNlcjU5OTY5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aravindputrevu",
"html_url": "https://github.com/aravindputrevu",
"followers_url": "https://api.github... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-12-12T20:36:48 | 2024-09-04T03:33:48 | 2024-09-04T03:33:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It'd be great to have a sample `contributor.md` for aspiring contributors. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1489/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1489/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3163 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3163/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3163/comments | https://api.github.com/repos/ollama/ollama/issues/3163/events | https://github.com/ollama/ollama/issues/3163 | 2,187,872,589 | I_kwDOJ0Z1Ps6CaElN | 3,163 | Question ollama and lm-studio | {
"login": "kalle07",
"id": 118767589,
"node_id": "U_kgDOBxQ_5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/118767589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kalle07",
"html_url": "https://github.com/kalle07",
"followers_url": "https://api.github.com/users/kalle07/foll... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-03-15T07:16:31 | 2024-03-15T11:29:27 | 2024-03-15T11:29:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
please what is significantly better or different
VS
https://lmstudio.ai/
it has a great gui so every one can handle it ;)
btw where is downlaoded the model?
ollama run llama2
### How should we solve this?
_No response_
### What is the impact of not solving this?
_No resp... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3163/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6617 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6617/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6617/comments | https://api.github.com/repos/ollama/ollama/issues/6617/events | https://github.com/ollama/ollama/pull/6617 | 2,503,844,337 | PR_kwDOJ0Z1Ps56UMyn | 6,617 | Log system memory at info | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-03T21:43:13 | 2024-09-03T21:55:24 | 2024-09-03T21:55:21 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6617",
"html_url": "https://github.com/ollama/ollama/pull/6617",
"diff_url": "https://github.com/ollama/ollama/pull/6617.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6617.patch",
"merged_at": "2024-09-03T21:55:21"
} | On systems with low system memory, we can hit allocation failures that are difficult to diagnose without debug logs. This will make it easier to spot.
Resolves #6558 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6617/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1393 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1393/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1393/comments | https://api.github.com/repos/ollama/ollama/issues/1393/events | https://github.com/ollama/ollama/pull/1393 | 2,027,020,151 | PR_kwDOJ0Z1Ps5hOQdR | 1,393 | fix: trim space in modelfile fields | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-05T19:58:30 | 2023-12-05T20:18:02 | 2023-12-05T20:18:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1393",
"html_url": "https://github.com/ollama/ollama/pull/1393",
"diff_url": "https://github.com/ollama/ollama/pull/1393.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1393.patch",
"merged_at": "2023-12-05T20:18:01"
} | only trim whitespace for FROM, ADAPTER, and PARAMETER since whitespace in LICENSE, TEMPLATE, SYSTEM might be significant
resolves #1390
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1393/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5085 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5085/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5085/comments | https://api.github.com/repos/ollama/ollama/issues/5085/events | https://github.com/ollama/ollama/issues/5085 | 2,355,854,394 | I_kwDOJ0Z1Ps6Ma3w6 | 5,085 | OllaMail - An email client powered by Ollama | {
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2024-06-16T16:07:11 | 2024-06-18T18:24:16 | 2024-06-18T11:38:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi Ollama Team,
Thank you for your time and effort in making sure that the Ollama API is always exceptional when a new version is released.
With that said, I'd like to inform you of an email client that reads and sends email using Ollama. I'll be making a version native to Windows available and wanted to know if ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5085/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5085/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7293 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7293/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7293/comments | https://api.github.com/repos/ollama/ollama/issues/7293/events | https://github.com/ollama/ollama/issues/7293 | 2,602,070,790 | I_kwDOJ0Z1Ps6bGHMG | 7,293 | 0.4.0rc0 arm64 andro termux compile error | {
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-10-21T10:21:45 | 2024-12-31T15:35:38 | 2024-11-12T18:31:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
the earlier worked easy , `go generate ./...` and `go build .`
whats the new style ? gcc .c instead go ?
readme doesnt seem to contain about compilement
i run `go generate ./...` it returns
ill retry w/o go
```~/ollama-0.4.0-rc0 $ go generate ./...
<rm cmd removed for discord>
make ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7293/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7440 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7440/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7440/comments | https://api.github.com/repos/ollama/ollama/issues/7440/events | https://github.com/ollama/ollama/issues/7440 | 2,626,036,645 | I_kwDOJ0Z1Ps6chiOl | 7,440 | [v0.4.0-rc6] CUDA OOM using x/llama3.2-vision:11b-instruct | {
"login": "thatjpk",
"id": 1297471,
"node_id": "MDQ6VXNlcjEyOTc0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1297471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thatjpk",
"html_url": "https://github.com/thatjpk",
"followers_url": "https://api.github.com/users/thatjpk/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 10 | 2024-10-31T05:59:40 | 2024-11-09T01:20:19 | 2024-11-05T03:45:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Attached log: [llama3.2-cuda-oom.log](https://github.com/user-attachments/files/17582524/llama3.2-cuda-oom.log)
I'm testing the `x/llama3.2-vision:11b-instruct-q4_K_M` and `x/llama3.2-vision:11b-instruct-q8_0` models from ollama.com, using ollama 0.4.0-rc6 via Open WebUI v0.3.35 (in docker)... | {
"login": "thatjpk",
"id": 1297471,
"node_id": "MDQ6VXNlcjEyOTc0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1297471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thatjpk",
"html_url": "https://github.com/thatjpk",
"followers_url": "https://api.github.com/users/thatjpk/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7440/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6085 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6085/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6085/comments | https://api.github.com/repos/ollama/ollama/issues/6085/events | https://github.com/ollama/ollama/pull/6085 | 2,438,972,450 | PR_kwDOJ0Z1Ps527_at | 6,085 | commit | {
"login": "rpreslar4765",
"id": 89657947,
"node_id": "MDQ6VXNlcjg5NjU3OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/89657947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpreslar4765",
"html_url": "https://github.com/rpreslar4765",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2024-07-31T02:19:04 | 2024-07-31T20:12:15 | 2024-07-31T20:12:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6085",
"html_url": "https://github.com/ollama/ollama/pull/6085",
"diff_url": "https://github.com/ollama/ollama/pull/6085.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6085.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6085/reactions",
"total_count": 4,
"+1": 0,
"-1": 2,
"laugh": 1,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6085/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6522 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6522/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6522/comments | https://api.github.com/repos/ollama/ollama/issues/6522/events | https://github.com/ollama/ollama/pull/6522 | 2,488,023,796 | PR_kwDOJ0Z1Ps55gKa5 | 6,522 | detect chat template from configs that contain lists | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2024-08-27T00:32:21 | 2024-08-28T18:04:20 | 2024-08-28T18:04:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6522",
"html_url": "https://github.com/ollama/ollama/pull/6522",
"diff_url": "https://github.com/ollama/ollama/pull/6522.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6522.patch",
"merged_at": "2024-08-28T18:04:18"
} | models like [hermes3](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B/blob/main/tokenizer_config.json#L2053) have a list of chat templates
```json
"chat_template": [
{
"name": "default",
"template": "{{bos_token}}{% for message in messages %}{% if loop.first and messages[0]['role'] !=... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6522/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/273 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/273/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/273/comments | https://api.github.com/repos/ollama/ollama/issues/273/events | https://github.com/ollama/ollama/pull/273 | 1,835,857,757 | PR_kwDOJ0Z1Ps5XJsAl | 273 | Create a sentiments example | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-08-03T23:39:06 | 2023-08-31T23:32:00 | 2023-08-31T23:31:59 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/273",
"html_url": "https://github.com/ollama/ollama/pull/273",
"diff_url": "https://github.com/ollama/ollama/pull/273.diff",
"patch_url": "https://github.com/ollama/ollama/pull/273.patch",
"merged_at": "2023-08-31T23:31:59"
} | A simple example for sentiments analysis and a writer of lists of 10 tweets | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/273/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8290 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8290/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8290/comments | https://api.github.com/repos/ollama/ollama/issues/8290/events | https://github.com/ollama/ollama/issues/8290 | 2,766,879,372 | I_kwDOJ0Z1Ps6k6zqM | 8,290 | pull model manifest: open /usr/local/bin/ollama/.ollama/xxx: not a directory | {
"login": "18279811184",
"id": 35674790,
"node_id": "MDQ6VXNlcjM1Njc0Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/35674790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/18279811184",
"html_url": "https://github.com/18279811184",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2025-01-03T02:13:20 | 2025-01-24T09:51:31 | 2025-01-24T09:51:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I modified the configuration file/etc/systemd/system/ollama.service of ollama, I encountered an error when using ollama to pull the model after restarting:
pulling manifest
Error: pull model manifest: open /usr/local/bin/ollama/.ollama/id_ed25519: not a directory
### OS
Linux
### GPU
... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8290/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1231 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1231/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1231/comments | https://api.github.com/repos/ollama/ollama/issues/1231/events | https://github.com/ollama/ollama/issues/1231 | 2,005,302,301 | I_kwDOJ0Z1Ps53hnwd | 1,231 | `ollama run llama2` on m1 macbook fails after fresh install | {
"login": "johnlarkin1",
"id": 18692931,
"node_id": "MDQ6VXNlcjE4NjkyOTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18692931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnlarkin1",
"html_url": "https://github.com/johnlarkin1",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 8 | 2023-11-21T23:19:18 | 2024-02-20T01:11:41 | 2024-02-20T01:11:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello! I am getting the following issue after I've downloaded the desktop application and tried to do the following:
```
╰─➤ ollama run llama2
Error: llama runner process has terminated
```
It also seemingly borks my computer for a second, and I'm not even able to use my trackpad (probably due to personal me... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1231/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7353 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7353/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7353/comments | https://api.github.com/repos/ollama/ollama/issues/7353/events | https://github.com/ollama/ollama/issues/7353 | 2,613,301,190 | I_kwDOJ0Z1Ps6bw8_G | 7,353 | Does ollama have other model support plans?Such as TTS, graphics, video, etc | {
"login": "E218PQ",
"id": 110892042,
"node_id": "U_kgDOBpwUCg",
"avatar_url": "https://avatars.githubusercontent.com/u/110892042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E218PQ",
"html_url": "https://github.com/E218PQ",
"followers_url": "https://api.github.com/users/E218PQ/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 2 | 2024-10-25T07:18:58 | 2024-11-05T00:52:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | We deeply appreciate the convenience, speed, and power of Olama. In order to meet more application scenarios, we hope that Olama can increase support for other model categories, such as text generated speech, text generated images, text generated videos, etc. With the rapid development of AI, the demand for AI will als... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7353/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7353/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1293 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1293/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1293/comments | https://api.github.com/repos/ollama/ollama/issues/1293/events | https://github.com/ollama/ollama/issues/1293 | 2,013,296,184 | I_kwDOJ0Z1Ps54AHY4 | 1,293 | Ollama list modified column shows when the model was last pulled, rather than when last modified | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | open | false | null | [] | null | 0 | 2023-11-27T23:18:06 | 2023-11-27T23:18:06 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If you pull a model, but there are no changes, then the modified column will show that the model was modified seconds ago, even if it hadn't actually been modified in weeks. It should show the last time the actual model was modified | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1293/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1293/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6556 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6556/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6556/comments | https://api.github.com/repos/ollama/ollama/issues/6556/events | https://github.com/ollama/ollama/issues/6556 | 2,494,893,262 | I_kwDOJ0Z1Ps6UtQzO | 6,556 | cuda_v12 returns poor results or crashes for Driver Version: 525.147.05 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-08-29T15:34:49 | 2024-09-04T00:15:32 | 2024-09-04T00:15:32 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Between 0.3.7-rc5 and 0.3.7-rc6 the default CUDA driver was switched from v11 to v12 and results from a variety of models degraded. I first noticed this with 0.3.7-rc6 but the problem also exists in -rc4 if OLLAMA_LLM_LIBRARY is set to cuda_v12. The problem persists into 0.3.8.
```
$ fo... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6556/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6556/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2127/comments | https://api.github.com/repos/ollama/ollama/issues/2127/events | https://github.com/ollama/ollama/pull/2127 | 2,092,735,971 | PR_kwDOJ0Z1Ps5kqMy1 | 2,127 | Combine the 2 Dockerfiles and add ROCm | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-21T19:39:53 | 2024-01-21T19:49:04 | 2024-01-21T19:49:01 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2127",
"html_url": "https://github.com/ollama/ollama/pull/2127",
"diff_url": "https://github.com/ollama/ollama/pull/2127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2127.patch",
"merged_at": "2024-01-21T19:49:01"
} | This renames Dockerfile.build to replace the old Dockerfile, and adds some new stages to support 2 modes of building - the build_linux.sh script uses intermediate stages to extract the artifacts for ./dist, and the default build generates a container image usable by both cuda and rocm cards. This required transitioning... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2127/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4184 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4184/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4184/comments | https://api.github.com/repos/ollama/ollama/issues/4184/events | https://github.com/ollama/ollama/issues/4184 | 2,279,710,783 | I_kwDOJ0Z1Ps6H4aA_ | 4,184 | Warning: could not connect to a running Ollama instance | {
"login": "rkuo2000",
"id": 3485732,
"node_id": "MDQ6VXNlcjM0ODU3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3485732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkuo2000",
"html_url": "https://github.com/rkuo2000",
"followers_url": "https://api.github.com/users/rkuo2... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-05-05T20:16:46 | 2024-05-07T20:10:21 | 2024-05-07T19:53:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
curl -fsSL https://ollama.com/install.sh | sh
ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.33
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33 | {
"login": "rkuo2000",
"id": 3485732,
"node_id": "MDQ6VXNlcjM0ODU3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3485732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkuo2000",
"html_url": "https://github.com/rkuo2000",
"followers_url": "https://api.github.com/users/rkuo2... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4184/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1673 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1673/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1673/comments | https://api.github.com/repos/ollama/ollama/issues/1673/events | https://github.com/ollama/ollama/pull/1673 | 2,053,991,502 | PR_kwDOJ0Z1Ps5ip8DP | 1,673 | docs: add Helm Chart link to Package managers list | {
"login": "jdetroyes",
"id": 24377095,
"node_id": "MDQ6VXNlcjI0Mzc3MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/24377095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdetroyes",
"html_url": "https://github.com/jdetroyes",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-12-22T14:19:04 | 2024-02-20T03:05:14 | 2024-02-20T03:05:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1673",
"html_url": "https://github.com/ollama/ollama/pull/1673",
"diff_url": "https://github.com/ollama/ollama/pull/1673.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1673.patch",
"merged_at": "2024-02-20T03:05:14"
} | Add a link to ArtifactHub in the Package managers section for Helm Chart. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1673/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8025 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8025/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8025/comments | https://api.github.com/repos/ollama/ollama/issues/8025/events | https://github.com/ollama/ollama/issues/8025 | 2,729,448,527 | I_kwDOJ0Z1Ps6isBRP | 8,025 | Ollama run very very slow in ARM cpu (KunPeng 920 CPU) | {
"login": "feikiss",
"id": 2208663,
"node_id": "MDQ6VXNlcjIyMDg2NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2208663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/feikiss",
"html_url": "https://github.com/feikiss",
"followers_url": "https://api.github.com/users/feikiss/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 5 | 2024-12-10T08:57:15 | 2025-01-13T01:37:27 | 2025-01-13T01:37:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The ollama is extremely slow on my ARM server (KunPeng-920 series) even I use 8 cores. I use model "qwen-2.5-0.5b_q4" model
server details:
```text
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorc... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8025/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2718 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2718/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2718/comments | https://api.github.com/repos/ollama/ollama/issues/2718/events | https://github.com/ollama/ollama/issues/2718 | 2,151,989,584 | I_kwDOJ0Z1Ps6ARMFQ | 2,718 | Doc permission requirements for Rocm Docker Image to access /dev/dri and /dev/kfd | {
"login": "3lpsy",
"id": 8757851,
"node_id": "MDQ6VXNlcjg3NTc4NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8757851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3lpsy",
"html_url": "https://github.com/3lpsy",
"followers_url": "https://api.github.com/users/3lpsy/follower... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-02-24T00:30:35 | 2024-03-24T18:15:05 | 2024-03-24T18:15:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | TLDR: The 0.1.27-rocm cannot find the correct version of rocm libraries.
I start the docker image using the following command:
```
sudo -H -u ollama /usr/bin/podman --runtime /usr/bin/crun run --gpus all --rm -v /usr/share/ollama/.ollama:/root/.ollama -p 11434:11434 --name ollama 'ollama/ollama:0.1.27-rocm'
```... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2718/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4996 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4996/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4996/comments | https://api.github.com/repos/ollama/ollama/issues/4996/events | https://github.com/ollama/ollama/issues/4996 | 2,348,008,604 | I_kwDOJ0Z1Ps6L88Sc | 4,996 | Apple Silicon macs with 8GB or 16GB slow down when loading larger models | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-06-12T07:17:14 | 2024-06-12T07:17:14 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Less of the model should be loaded to Metal to avoid causing lag
### OS
macOS
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4996/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3516 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3516/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3516/comments | https://api.github.com/repos/ollama/ollama/issues/3516/events | https://github.com/ollama/ollama/issues/3516 | 2,229,370,133 | I_kwDOJ0Z1Ps6E4X0V | 3,516 | [Linux] Switch systemd service unit to EnvironmentFile and start providing it in the repository instead | {
"login": "C0rn3j",
"id": 1641362,
"node_id": "MDQ6VXNlcjE2NDEzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1641362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/C0rn3j",
"html_url": "https://github.com/C0rn3j",
"followers_url": "https://api.github.com/users/C0rn3j/foll... | [] | open | false | null | [] | null | 0 | 2024-04-06T19:12:15 | 2024-04-19T15:41:14 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Run Llama service on Linux via systemd and being able to configure it without unit overrides through a configuration file.
### How should we solve this?
Stop using `Environment`, use `EnvironmentFile` instead in https://github.com/ollama/ollama/blob/cb03fc9571814edd5af1109bf1a562... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3516/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3516/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8074 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8074/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8074/comments | https://api.github.com/repos/ollama/ollama/issues/8074/events | https://github.com/ollama/ollama/issues/8074 | 2,736,411,159 | I_kwDOJ0Z1Ps6jGlIX | 8,074 | Windows NUMA 4 socket, 144 core system, default thread count causes very poor performance | {
"login": "Panican-Whyasker",
"id": 191496755,
"node_id": "U_kgDOC2oCMw",
"avatar_url": "https://avatars.githubusercontent.com/u/191496755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Panican-Whyasker",
"html_url": "https://github.com/Panican-Whyasker",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | null | [] | null | 8 | 2024-12-12T16:48:02 | 2024-12-13T20:02:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
A 135M-parameter model only yielded 4 words after running for 3.5 hours on one 36-core CPU @ 100% load.
A 3.8B model yielded only 10 words after 10.5 hours on the same machine.
Prompt in both cases: "Introduce yourself."
Windows Server 2016 OS (direct install, no Docker).
Ollama 0.5.... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8074/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8074/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4040 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4040/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4040/comments | https://api.github.com/repos/ollama/ollama/issues/4040/events | https://github.com/ollama/ollama/pull/4040 | 2,270,734,561 | PR_kwDOJ0Z1Ps5uGtpd | 4,040 | docs: add Guix package manager in README. | {
"login": "tusharhero",
"id": 54012021,
"node_id": "MDQ6VXNlcjU0MDEyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharhero",
"html_url": "https://github.com/tusharhero",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-04-30T07:40:33 | 2024-05-09T18:10:24 | 2024-05-09T18:10:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4040",
"html_url": "https://github.com/ollama/ollama/pull/4040",
"diff_url": "https://github.com/ollama/ollama/pull/4040.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4040.patch",
"merged_at": "2024-05-09T18:10:24"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4040/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7884 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7884/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7884/comments | https://api.github.com/repos/ollama/ollama/issues/7884/events | https://github.com/ollama/ollama/pull/7884 | 2,706,378,944 | PR_kwDOJ0Z1Ps6Dm84b | 7,884 | server: move /api/version to use http.Handler | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | open | false | null | [] | null | 0 | 2024-11-29T23:31:14 | 2025-01-14T06:24:37 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7884",
"html_url": "https://github.com/ollama/ollama/pull/7884",
"diff_url": "https://github.com/ollama/ollama/pull/7884.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7884.patch",
"merged_at": null
} | also adds tests for the /api/version endpoint | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7884/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7313 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7313/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7313/comments | https://api.github.com/repos/ollama/ollama/issues/7313/events | https://github.com/ollama/ollama/pull/7313 | 2,605,028,865 | PR_kwDOJ0Z1Ps5_cSjp | 7,313 | Add support for RWKV | {
"login": "MollySophia",
"id": 20746884,
"node_id": "MDQ6VXNlcjIwNzQ2ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MollySophia",
"html_url": "https://github.com/MollySophia",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2024-10-22T10:33:15 | 2025-01-11T00:37:02 | 2024-12-21T06:04:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7313",
"html_url": "https://github.com/ollama/ollama/pull/7313",
"diff_url": "https://github.com/ollama/ollama/pull/7313.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7313.patch",
"merged_at": null
} | Changes in this PR:
- Added a patch on llama.cpp with commits upstream: [llama.cpp:
10433e8
](https://github.com/ggerganov/llama.cpp/commit/10433e8b457c4cfd759cbb41fc55fc398db4a5da) and [4ff7fe1](https://github.com/ggerganov/llama.cpp/commit/4ff7fe1fb36b04ddd158b2de881c348c5f0ff5e4), [11d4705](https://github.com/gge... | {
"login": "MollySophia",
"id": 20746884,
"node_id": "MDQ6VXNlcjIwNzQ2ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MollySophia",
"html_url": "https://github.com/MollySophia",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7313/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7313/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2604 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2604/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2604/comments | https://api.github.com/repos/ollama/ollama/issues/2604/events | https://github.com/ollama/ollama/pull/2604 | 2,143,533,698 | PR_kwDOJ0Z1Ps5nWaPn | 2,604 | Support for `bert` and `nomic-bert` embedding models | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-20T04:46:07 | 2024-02-21T02:37:30 | 2024-02-21T02:37:29 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2604",
"html_url": "https://github.com/ollama/ollama/pull/2604",
"diff_url": "https://github.com/ollama/ollama/pull/2604.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2604.patch",
"merged_at": "2024-02-21T02:37:29"
} | Fixes #327
This adds initial support for embedding models using the `/api/embeddings` endpoint. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2604/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2604/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5906 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5906/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5906/comments | https://api.github.com/repos/ollama/ollama/issues/5906/events | https://github.com/ollama/ollama/issues/5906 | 2,427,053,469 | I_kwDOJ0Z1Ps6QqeWd | 5,906 | Sth wrong with using Ollama +qdrant:Vector dimension error: expected dim: 1536, got 768 | {
"login": "AI-Beans",
"id": 58964439,
"node_id": "MDQ6VXNlcjU4OTY0NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/58964439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Beans",
"html_url": "https://github.com/AI-Beans",
"followers_url": "https://api.github.com/users/AI-... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-07-24T09:28:49 | 2024-07-24T09:30:36 | 2024-07-24T09:30:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

I use ollama embedingmodel and chatmodel,get right response 。
But response form Qdrant:Vector dimension error: expected dim: 1536, got 768
where can I config the para ?
![image](https://github.com/us... | {
"login": "AI-Beans",
"id": 58964439,
"node_id": "MDQ6VXNlcjU4OTY0NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/58964439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Beans",
"html_url": "https://github.com/AI-Beans",
"followers_url": "https://api.github.com/users/AI-... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5906/timeline | null | completed | false |