Dataset column schema (33 fields; ranges are min–max across rows):

url                       stringlengths   51–54
repository_url            stringclasses   1 value
labels_url                stringlengths   65–68
comments_url              stringlengths   60–63
events_url                stringlengths   58–61
html_url                  stringlengths   39–44
id                        int64           1.78B–2.82B
node_id                   stringlengths   18–19
number                    int64           1–8.69k
title                     stringlengths   1–382
user                      dict
labels                    listlengths     0–5
state                     stringclasses   2 values
locked                    bool            1 class
assignee                  dict
assignees                 listlengths     0–2
milestone                 null
comments                  int64           0–323
created_at                timestamp[s]
updated_at                timestamp[s]
closed_at                 timestamp[s]
author_association        stringclasses   4 values
sub_issues_summary        dict
active_lock_reason        null
draft                     bool            2 classes
pull_request              dict
body                      stringlengths   2–118k
closed_by                 dict
reactions                 dict
timeline_url              stringlengths   60–63
performed_via_github_app  null
state_reason              stringclasses   4 values
is_pull_request           bool            2 classes
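The schema above describes one issue record per dataset row. As a minimal sketch of how such records can be represented and filtered in code (using only a subset of the fields, with values taken from the rows below; the `IssueRow` class name is an assumption, not part of the dataset):

```python
# Sketch: a record type mirroring part of the issues schema, plus a simple filter.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IssueRow:
    url: str
    number: int
    title: str
    state: str                          # "open" or "closed" (2 classes per schema)
    comments: int
    is_pull_request: bool
    state_reason: Optional[str] = None  # e.g. "completed", "not_planned", or None

# Two example rows copied from the data below.
rows = [
    IssueRow("https://api.github.com/repos/ollama/ollama/issues/6118",
             6118,
             "panic: runtime error: integer divide by zero in memory.go on bad model create",
             "open", 9, False),
    IssueRow("https://api.github.com/repos/ollama/ollama/issues/8650",
             8650,
             "Request Support for Running Inference Through LM Studio",
             "closed", 2, False, "completed"),
]

# Keep only open issues that are not pull requests.
open_issues = [r.number for r in rows if r.state == "open" and not r.is_pull_request]
print(open_issues)  # [6118]
```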
url: https://api.github.com/repos/ollama/ollama/issues/6063
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6063/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6063/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6063/events
html_url: https://github.com/ollama/ollama/pull/6063
id: 2,436,478,903
node_id: PR_kwDOJ0Z1Ps52zc2p
number: 6,063
title: convert: import support for command-r models from safetensors
user: { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-29T22:17:37
updated_at: 2025-01-16T00:31:24
closed_at: 2025-01-16T00:31:23
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6063", "html_url": "https://github.com/ollama/ollama/pull/6063", "diff_url": "https://github.com/ollama/ollama/pull/6063.diff", "patch_url": "https://github.com/ollama/ollama/pull/6063.patch", "merged_at": "2025-01-16T00:31:22" }
body: working for https://huggingface.co/CohereForAI/aya-23-8B https://huggingface.co/CohereForAI/c4ai-command-r-v01
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6063/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/6118
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6118/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6118/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6118/events
html_url: https://github.com/ollama/ollama/issues/6118
id: 2,442,433,063
node_id: I_kwDOJ0Z1Ps6RlJIn
number: 6,118
title: panic: runtime error: integer divide by zero in memory.go on bad model create
user: { "login": "SongXiaoMao", "id": 55074934, "node_id": "MDQ6VXNlcjU1MDc0OTM0", "avatar_url": "https://avatars.githubusercontent.com/u/55074934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SongXiaoMao", "html_url": "https://github.com/SongXiaoMao", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 9
created_at: 2024-08-01T13:15:23
updated_at: 2024-08-09T21:21:50
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I installed ollama today, the system is Ubuntu2204,I downloaded llama3.1-405b-Q2.gguf,There are 9 split files in total. Ollama create llama -f Modelfile.txt is completed successfully. The ollama list is displayed normally, but an error occurs when running.Error: Post "http://127.0.0.1:11434/api...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6118/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6118/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/7412
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7412/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7412/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7412/events
html_url: https://github.com/ollama/ollama/pull/7412
id: 2,622,649,442
node_id: PR_kwDOJ0Z1Ps6AUBcS
number: 7,412
title: Implement tokenize and de-tokenize endpoints
user: { "login": "jrmo14", "id": 16376030, "node_id": "MDQ6VXNlcjE2Mzc2MDMw", "avatar_url": "https://avatars.githubusercontent.com/u/16376030?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jrmo14", "html_url": "https://github.com/jrmo14", "followers_url": "https://api.github.com/users/jrmo14/fo...
labels: []
state: open
locked: false
assignee: { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
assignees: [ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
milestone: null
comments: 1
created_at: 2024-10-30T01:38:06
updated_at: 2024-12-10T01:01:03
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7412", "html_url": "https://github.com/ollama/ollama/pull/7412", "diff_url": "https://github.com/ollama/ollama/pull/7412.diff", "patch_url": "https://github.com/ollama/ollama/pull/7412.patch", "merged_at": null }
body: Implement endpoints to tokenize (`/api/tokenize`) and detokenize (`/api/detokenize`) text Closes #3582
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7412/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7412/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/301
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/301/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/301/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/301/events
html_url: https://github.com/ollama/ollama/pull/301
id: 1,838,617,020
node_id: PR_kwDOJ0Z1Ps5XSoAJ
number: 301
title: pass flags to `serve` to allow setting allowed-origins + host and port
user: { "login": "cmiller01", "id": 3050939, "node_id": "MDQ6VXNlcjMwNTA5Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/3050939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cmiller01", "html_url": "https://github.com/cmiller01", "followers_url": "https://api.github.com/users/cm...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2023-08-07T03:41:01
updated_at: 2023-08-08T14:55:57
closed_at: 2023-08-08T14:41:43
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/301", "html_url": "https://github.com/ollama/ollama/pull/301", "diff_url": "https://github.com/ollama/ollama/pull/301.diff", "patch_url": "https://github.com/ollama/ollama/pull/301.patch", "merged_at": "2023-08-08T14:41:43" }
body: resolves: https://github.com/jmorganca/ollama/issues/300 and https://github.com/jmorganca/ollama/issues/282 example usage: ``` ollama serve --port 9999 --allowed-origins "http://foo.example.com,http://192.0.0.1" ```
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/301/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/8650
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8650/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8650/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8650/events
html_url: https://github.com/ollama/ollama/issues/8650
id: 2,817,224,735
node_id: I_kwDOJ0Z1Ps6n63Af
number: 8,650
title: Request Support for Running Inference Through LM Studio
user: { "login": "joseph777111", "id": 80947356, "node_id": "MDQ6VXNlcjgwOTQ3MzU2", "avatar_url": "https://avatars.githubusercontent.com/u/80947356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joseph777111", "html_url": "https://github.com/joseph777111", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2025-01-29T04:41:45
updated_at: 2025-01-29T23:32:52
closed_at: 2025-01-29T23:32:51
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: https://lmstudio.ai https://github.com/lmstudio-ai/lms LM Studio is one of the most popular locally run inference platforms, which has its own inference server. Much like Ollama, LM Studio uses llama.cpp for inferences - but it also supports MLX. Please kindly add support to use Goose with LM Studio as the inference backend. Thanks in advance! 🙏
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8650/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/7859
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7859/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7859/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7859/events
html_url: https://github.com/ollama/ollama/issues/7859
id: 2,698,287,004
node_id: I_kwDOJ0Z1Ps6g1Jec
number: 7,859
title: Hymba-1.5B-family of models
user: { "login": "jruokola", "id": 90187138, "node_id": "MDQ6VXNlcjkwMTg3MTM4", "avatar_url": "https://avatars.githubusercontent.com/u/90187138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jruokola", "html_url": "https://github.com/jruokola", "followers_url": "https://api.github.com/users/jru...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-11-27T11:45:33
updated_at: 2024-12-13T11:37:38
closed_at: 2024-12-13T11:37:38
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: https://huggingface.co/nvidia/Hymba-1.5B-Instruct https://huggingface.co/nvidia/Hymba-1.5B-Base
closed_by: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7859/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7859/timeline
performed_via_github_app: null
state_reason: not_planned
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/3084
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3084/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3084/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3084/events
html_url: https://github.com/ollama/ollama/pull/3084
id: 2,182,552,422
node_id: PR_kwDOJ0Z1Ps5pba5V
number: 3,084
title: update convert
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-03-12T19:48:35
updated_at: 2024-06-05T20:12:05
closed_at: 2024-03-27T21:02:34
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3084", "html_url": "https://github.com/ollama/ollama/pull/3084", "diff_url": "https://github.com/ollama/ollama/pull/3084.diff", "patch_url": "https://github.com/ollama/ollama/pull/3084.patch", "merged_at": null }
body: the output of convert remains exactly the same
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3084/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/6016
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6016/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6016/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6016/events
html_url: https://github.com/ollama/ollama/issues/6016
id: 2,433,436,116
node_id: I_kwDOJ0Z1Ps6RC0nU
number: 6,016
title: Gemma2 and Mistral-nemo not running on ollama
user: { "login": "gus147", "id": 176750230, "node_id": "U_kgDOCoj-lg", "avatar_url": "https://avatars.githubusercontent.com/u/176750230?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gus147", "html_url": "https://github.com/gus147", "followers_url": "https://api.github.com/users/gus147/follower...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-07-27T11:25:40
updated_at: 2024-07-28T00:14:12
closed_at: 2024-07-28T00:14:12
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? [147@Clevo ~]$ ollama run mistral-nemo:12b-instruct-2407-fp16 Error: exception error loading model hyperparameters: invalid n_rot: 128, expected 160 [147@Clevo ~]$ ollama run gemma2:27b-instruct-q8_0 Error: exception error loading model architecture: unknown model architecture: 'gemma2' C...
closed_by: { "login": "gus147", "id": 176750230, "node_id": "U_kgDOCoj-lg", "avatar_url": "https://avatars.githubusercontent.com/u/176750230?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gus147", "html_url": "https://github.com/gus147", "followers_url": "https://api.github.com/users/gus147/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6016/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/6707
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6707/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6707/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6707/events
html_url: https://github.com/ollama/ollama/issues/6707
id: 2,513,271,479
node_id: I_kwDOJ0Z1Ps6VzXq3
number: 6,707
title: Generate endpoint intermittently misses final token before done
user: { "login": "tarbard", "id": 2259265, "node_id": "MDQ6VXNlcjIyNTkyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/2259265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tarbard", "html_url": "https://github.com/tarbard", "followers_url": "https://api.github.com/users/tarbard/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
state: closed
locked: false
assignee: { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
assignees: [ { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://...
milestone: null
comments: 6
created_at: 2024-09-09T08:25:48
updated_at: 2024-09-14T05:05:42
closed_at: 2024-09-12T00:20:23
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? When using the generate endpoint it intermittently misses the last token right before the "done" message ```JSON {"model":"adrienbrault/nous-hermes2theta-llama3-8b:q8_0","created_at":"2024-09-09T08:04:47.463348938Z","response":" Bear","done":false} {"model":"adrienbrault/nous-hermes2theta...
closed_by: { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6707/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/3998
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3998/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3998/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3998/events
html_url: https://github.com/ollama/ollama/issues/3998
id: 2,267,413,766
node_id: I_kwDOJ0Z1Ps6HJf0G
number: 3,998
title: Phi-3-mini-128k no load
user: { "login": "bambooqj", "id": 20792621, "node_id": "MDQ6VXNlcjIwNzkyNjIx", "avatar_url": "https://avatars.githubusercontent.com/u/20792621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bambooqj", "html_url": "https://github.com/bambooqj", "followers_url": "https://api.github.com/users/bam...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 7
created_at: 2024-04-28T07:54:40
updated_at: 2024-07-05T04:05:50
closed_at: 2024-07-05T04:05:50
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: model download: `https://huggingface.co/PrunaAI/Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed` modfile: ``` FROM ./Phi-3-mini-128k-instruct.Q4_K_M.gguf PARAMETER num_ctx 65536 PARAMETER num_keep 4 PARAMETER stop <|user|> PARAMETER stop <|assistant|> PARAMETER stop <|system|> PARAMETER stop <|end|> PARAM...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3998/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/7964
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7964/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7964/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7964/events
html_url: https://github.com/ollama/ollama/pull/7964
id: 2,722,202,817
node_id: PR_kwDOJ0Z1Ps6ER9F8
number: 7,964
title: Fix message truncation logic and ensure at least one system message i…
user: { "login": "youyou301", "id": 162660372, "node_id": "U_kgDOCbIAFA", "avatar_url": "https://avatars.githubusercontent.com/u/162660372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/youyou301", "html_url": "https://github.com/youyou301", "followers_url": "https://api.github.com/users/youyou...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-12-06T06:48:21
updated_at: 2024-12-10T01:06:54
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7964", "html_url": "https://github.com/ollama/ollama/pull/7964", "diff_url": "https://github.com/ollama/ollama/pull/7964.diff", "patch_url": "https://github.com/ollama/ollama/pull/7964.patch", "merged_at": null }
body: ### Changes: - Fixed the message truncation logic to ensure that at least one `system` message is included. - Adjusted the message handling to always include the last message, even if the context window is exceeded. ### Motivation: - This change ensures that the context window truncation logic respects the system...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7964/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/ollama/ollama/issues/3597
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3597/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3597/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3597/events
html_url: https://github.com/ollama/ollama/issues/3597
id: 2,237,774,788
node_id: I_kwDOJ0Z1Ps6FYbvE
number: 3,597
title: there is Massive Text Embedding Benchmark (MTEB) Leaderboard,could u support those mod?
user: { "login": "doriszhang2020", "id": 104901283, "node_id": "U_kgDOBkCqow", "avatar_url": "https://avatars.githubusercontent.com/u/104901283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/doriszhang2020", "html_url": "https://github.com/doriszhang2020", "followers_url": "https://api.github.c...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-04-11T13:49:28
updated_at: 2024-04-15T19:14:10
closed_at: 2024-04-15T19:14:09
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What model would you like? ![image](https://github.com/ollama/ollama/assets/104901283/1f59958f-cf01-4fda-a4c6-37d53de011f0) ![image](https://github.com/ollama/ollama/assets/104901283/08202971-6f82-4b15-ba37-1482ddea5f05) https://huggingface.co/spaces/mteb/leaderboard
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3597/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/1207
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1207/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1207/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1207/events
html_url: https://github.com/ollama/ollama/issues/1207
id: 2,002,340,291
node_id: I_kwDOJ0Z1Ps53WUnD
number: 1,207
title: it is possible to have multiple ssh on linux (due to ollama running as a service)
user: { "login": "eramax", "id": 542413, "node_id": "MDQ6VXNlcjU0MjQxMw==", "avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eramax", "html_url": "https://github.com/eramax", "followers_url": "https://api.github.com/users/eramax/follow...
labels: []
state: open
locked: false
assignee: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
assignees: [ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
milestone: null
comments: 3
created_at: 2023-11-20T14:33:47
updated_at: 2023-12-05T23:24:52
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I guess still there is an issue in the push function this is my repo https://ollama.ai/eramax/nous-capybara-7b-1.9 the ssh pub key shown at `cat ~/.ollama/id_ed25519.pub` is already set and added to my profile *md is the directory ```bash ➜ md llm -v ollama version 0.1.10 ➜ md l .0644 root root 4.8 G...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1207/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/5349
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5349/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5349/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5349/events
html_url: https://github.com/ollama/ollama/issues/5349
id: 2,379,353,269
node_id: I_kwDOJ0Z1Ps6N0gy1
number: 5,349
title: Ollama stderr returns info logs
user: { "login": "metaspartan", "id": 10162347, "node_id": "MDQ6VXNlcjEwMTYyMzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10162347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/metaspartan", "html_url": "https://github.com/metaspartan", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-06-28T01:17:58
updated_at: 2024-06-28T01:17:58
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Ollama is outputting regular logs that should be in `stdout` but they are outputting in `stderr` when running it through a subprocess, these logs should be outputting via stdout and only errors through stderr This is for all supported OS. ### OS Linux, macOS, Windows, Docker, WSL2 ### GPU ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5349/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5349/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/4052
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4052/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4052/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4052/events
html_url: https://github.com/ollama/ollama/issues/4052
id: 2,271,490,246
node_id: I_kwDOJ0Z1Ps6HZDDG
number: 4,052
title: Unable to create gguf file for my finetuned mixtral8x7b model
user: { "login": "Nimmalapudi-Pratyusha", "id": 129523872, "node_id": "U_kgDOB7hgoA", "avatar_url": "https://avatars.githubusercontent.com/u/129523872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nimmalapudi-Pratyusha", "html_url": "https://github.com/Nimmalapudi-Pratyusha", "followers_url": ...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 6
created_at: 2024-04-30T13:31:42
updated_at: 2024-05-07T17:24:27
closed_at: 2024-05-07T17:24:07
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I am trying to create a gguf file for my finetuned mixtral model but it keeps throwing following error: Command :` python llm/llama.cpp/convert.py /home/raft_mixtral_2epochs_v1 --outtype q8_0 --outfile converted.bin` Error: ``` raise FileNotFoundError(f"Can't find model in directory {path...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4052/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/ollama/ollama/issues/2770
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2770/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2770/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2770/events
html_url: https://github.com/ollama/ollama/pull/2770
id: 2,155,110,949
node_id: PR_kwDOJ0Z1Ps5n9zZc
number: 2,770
title: expand user home dir in OLLAMA_MODELS
user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-02-26T20:59:23
updated_at: 2024-11-21T18:23:47
closed_at: 2024-11-21T18:23:47
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/2770", "html_url": "https://github.com/ollama/ollama/pull/2770", "diff_url": "https://github.com/ollama/ollama/pull/2770.diff", "patch_url": "https://github.com/ollama/ollama/pull/2770.patch", "merged_at": null }
body: This allows the `OLLAMA_MODELS` env var to contain a tilde, the same way other paths can be specified in ollama models. Ex: `OLLAMA_MODELS="~/models" ollama serve` now puts models in the proper location
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2770/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

https://api.github.com/repos/ollama/ollama/issues/391
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/391/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/391/comments
https://api.github.com/repos/ollama/ollama/issues/391/events
https://github.com/ollama/ollama/issues/391
1,860,369,757
I_kwDOJ0Z1Ps5u4v1d
391
Min device that llama 70b require?
{ "login": "SaraiQX", "id": 73533505, "node_id": "MDQ6VXNlcjczNTMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/73533505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaraiQX", "html_url": "https://github.com/SaraiQX", "followers_url": "https://api.github.com/users/SaraiQ...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
3
2023-08-22T00:48:36
2023-08-22T00:58:48
2023-08-22T00:58:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Love Ollama, which let my Intel Mac run llama 7b 😄. Just wondering what kind of Mac is required to run llama 2 70B? Will an M2 Ultra with 64G vRAM be satisfying? Thx.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/391/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7811
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7811/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7811/comments
https://api.github.com/repos/ollama/ollama/issues/7811/events
https://github.com/ollama/ollama/pull/7811
2,686,879,982
PR_kwDOJ0Z1Ps6C6-xg
7,811
Add Observability section and OpenLIT in README
{ "login": "patcher9", "id": 165258753, "node_id": "U_kgDOCdmmAQ", "avatar_url": "https://avatars.githubusercontent.com/u/165258753?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patcher9", "html_url": "https://github.com/patcher9", "followers_url": "https://api.github.com/users/patcher9/...
[]
closed
false
null
[]
null
1
2024-11-24T02:01:49
2024-11-24T02:09:10
2024-11-24T02:03:12
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7811", "html_url": "https://github.com/ollama/ollama/pull/7811", "diff_url": "https://github.com/ollama/ollama/pull/7811.diff", "patch_url": "https://github.com/ollama/ollama/pull/7811.patch", "merged_at": "2024-11-24T02:03:12" }
Adding OpenLIT to the README as an integration. Did not find a proper category so added `Observability`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7811/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2067
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2067/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2067/comments
https://api.github.com/repos/ollama/ollama/issues/2067/events
https://github.com/ollama/ollama/pull/2067
2,089,582,823
PR_kwDOJ0Z1Ps5kfo6D
2,067
Use `gzip` for embedded files
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-19T04:55:36
2024-01-19T18:23:05
2024-01-19T18:23:04
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2067", "html_url": "https://github.com/ollama/ollama/pull/2067", "diff_url": "https://github.com/ollama/ollama/pull/2067.diff", "patch_url": "https://github.com/ollama/ollama/pull/2067.patch", "merged_at": "2024-01-19T18:23:04" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2067/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5998
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5998/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5998/comments
https://api.github.com/repos/ollama/ollama/issues/5998/events
https://github.com/ollama/ollama/issues/5998
2,432,975,911
I_kwDOJ0Z1Ps6RBEQn
5,998
"Error loading llama server" when using a T5ForConditionalGeneration architucture model, converted to GGUF format
{ "login": "iG8R", "id": 11407417, "node_id": "MDQ6VXNlcjExNDA3NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/11407417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iG8R", "html_url": "https://github.com/iG8R", "followers_url": "https://api.github.com/users/iG8R/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-07-26T21:04:33
2024-07-26T21:04:33
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? With the help of https://huggingface.co/spaces/ggml-org/gguf-my-repo I made the https://huggingface.co/iG8R/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF model which was successfully imported into `ollama`. But when I try to use it, I always get the following error, while all other models work ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5998/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/258
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/258/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/258/comments
https://api.github.com/repos/ollama/ollama/issues/258/events
https://github.com/ollama/ollama/issues/258
1,833,465,017
I_kwDOJ0Z1Ps5tSHS5
258
Ollama running in Dockerfile
{ "login": "osamanatouf2", "id": 70172406, "node_id": "MDQ6VXNlcjcwMTcyNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/70172406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osamanatouf2", "html_url": "https://github.com/osamanatouf2", "followers_url": "https://api.github.c...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
14
2023-08-02T16:00:14
2023-12-12T21:49:23
2023-09-07T13:31:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
@jmorganca @mxyng I got ./ollama serve to work in docker. The only issue is that I am not able to pull down the files for other models like llama2 via the command ./ollama pull llama2. I have tested the same configuration on ubuntu and it works fine. Just inside the docker I get the following issue: ```Error: Post "http://...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/258/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4753
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4753/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4753/comments
https://api.github.com/repos/ollama/ollama/issues/4753/events
https://github.com/ollama/ollama/issues/4753
2,328,191,658
I_kwDOJ0Z1Ps6KxWKq
4,753
FROM is not recognized
{ "login": "EugeoSynthesisThirtyTwo", "id": 24735555, "node_id": "MDQ6VXNlcjI0NzM1NTU1", "avatar_url": "https://avatars.githubusercontent.com/u/24735555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EugeoSynthesisThirtyTwo", "html_url": "https://github.com/EugeoSynthesisThirtyTwo", "foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-31T16:24:21
2024-06-24T16:43:36
2024-06-24T16:43:36
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I followed the instructions to make a gguf model work but FROM doesn't work ``` C:\Users\Armaguedin\Documents\dev\python\text-generation-webui\models>ollama Usage: ollama [flags] ollama [command] Available Commands: serve Start ollama create Create a model from a M...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4753/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6746
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6746/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6746/comments
https://api.github.com/repos/ollama/ollama/issues/6746/events
https://github.com/ollama/ollama/issues/6746
2,518,937,874
I_kwDOJ0Z1Ps6WI_ES
6,746
add support for Reflection-Llama-3.1
{ "login": "clipsheep6", "id": 113185666, "node_id": "U_kgDOBr8Tgg", "avatar_url": "https://avatars.githubusercontent.com/u/113185666?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clipsheep6", "html_url": "https://github.com/clipsheep6", "followers_url": "https://api.github.com/users/cli...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-09-11T08:11:48
2024-09-11T23:57:44
2024-09-11T23:57:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Any idea when we could add the Reflection-Llama-3.1 model?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6746/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/480
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/480/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/480/comments
https://api.github.com/repos/ollama/ollama/issues/480/events
https://github.com/ollama/ollama/issues/480
1,884,842,197
I_kwDOJ0Z1Ps5wWGjV
480
Build failure with v0.0.18
{ "login": "p-linnane", "id": 105994585, "node_id": "U_kgDOBlFZWQ", "avatar_url": "https://avatars.githubusercontent.com/u/105994585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p-linnane", "html_url": "https://github.com/p-linnane", "followers_url": "https://api.github.com/users/p-linn...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2023-09-06T22:31:32
2023-09-07T03:34:28
2023-09-07T03:08:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello 👋 . I'm a maintainer for the [Homebrew](https://brew.sh) project. While packaging v0.0.18 of ollama, we're encountering a build failure. Here is the error: ```shell go build -trimpath -o=/home/linuxbrew/.linuxbrew/Cellar/ollama/0.0.18/bin/ollama -ldflags=-s -w go: downloading github.com/spf13/cobra v1.7.0...
{ "login": "p-linnane", "id": 105994585, "node_id": "U_kgDOBlFZWQ", "avatar_url": "https://avatars.githubusercontent.com/u/105994585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p-linnane", "html_url": "https://github.com/p-linnane", "followers_url": "https://api.github.com/users/p-linn...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/480/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5690
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5690/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5690/comments
https://api.github.com/repos/ollama/ollama/issues/5690/events
https://github.com/ollama/ollama/issues/5690
2,407,533,519
I_kwDOJ0Z1Ps6PgAvP
5,690
Ollama
{ "login": "Amir231123", "id": 173946415, "node_id": "U_kgDOCl42Lw", "avatar_url": "https://avatars.githubusercontent.com/u/173946415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Amir231123", "html_url": "https://github.com/Amir231123", "followers_url": "https://api.github.com/users/Ami...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-07-14T17:53:02
2024-07-15T02:24:20
2024-07-14T23:07:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5690/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7513
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7513/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7513/comments
https://api.github.com/repos/ollama/ollama/issues/7513/events
https://github.com/ollama/ollama/pull/7513
2,635,988,914
PR_kwDOJ0Z1Ps6A9M97
7,513
grammar: surgically wrenching gbnf from system messages
{ "login": "tucnak", "id": 934682, "node_id": "MDQ6VXNlcjkzNDY4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/934682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tucnak", "html_url": "https://github.com/tucnak", "followers_url": "https://api.github.com/users/tucnak/follow...
[]
closed
false
null
[]
null
1
2024-11-05T16:53:53
2024-12-05T00:33:51
2024-12-05T00:33:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7513", "html_url": "https://github.com/ollama/ollama/pull/7513", "diff_url": "https://github.com/ollama/ollama/pull/7513.diff", "patch_url": "https://github.com/ollama/ollama/pull/7513.patch", "merged_at": null }
Some people have reached out to me re: my comment from earlier https://github.com/ollama/ollama/issues/6237#issuecomment-2428338129 so I decided it might be worth a shot. To recap: this pull request implements wrenching GBNF's (only one at a time!) from the system prompt. I know a bunch of pull requests to similar e...
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7513/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/7513/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1566
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1566/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1566/comments
https://api.github.com/repos/ollama/ollama/issues/1566/events
https://github.com/ollama/ollama/issues/1566
2,044,925,127
I_kwDOJ0Z1Ps554xTH
1,566
Error: llama runner exited, you may not have enough available memory to run this model
{ "login": "baardove", "id": 3517788, "node_id": "MDQ6VXNlcjM1MTc3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3517788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baardove", "html_url": "https://github.com/baardove", "followers_url": "https://api.github.com/users/baard...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2023-12-16T19:41:46
2024-01-08T21:42:04
2024-01-08T21:42:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, When I have run a model and try to communicate with it, I always get the same response, no matter which model (small or big)... ' Error: llama runner exited, you may not have enough available memory to run this model ' Any clues on this one? My host is running ubuntu 20.04 on proxmox with approx 56 ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1566/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/807
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/807/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/807/comments
https://api.github.com/repos/ollama/ollama/issues/807/events
https://github.com/ollama/ollama/issues/807
1,945,579,628
I_kwDOJ0Z1Ps5z9zBs
807
Feature request: Add CLI option to specify a system prompt
{ "login": "louisabraham", "id": 13174805, "node_id": "MDQ6VXNlcjEzMTc0ODA1", "avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louisabraham", "html_url": "https://github.com/louisabraham", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6100196012, "node_id": ...
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
2
2023-10-16T15:54:16
2023-12-04T20:26:44
2023-12-04T20:26:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/807/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/807/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1189
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1189/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1189/comments
https://api.github.com/repos/ollama/ollama/issues/1189/events
https://github.com/ollama/ollama/pull/1189
2,000,297,803
PR_kwDOJ0Z1Ps5f0CFv
1,189
upload: retry complete upload
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-11-18T07:52:44
2023-11-18T07:54:32
2023-11-18T07:54:27
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1189", "html_url": "https://github.com/ollama/ollama/pull/1189", "diff_url": "https://github.com/ollama/ollama/pull/1189.diff", "patch_url": "https://github.com/ollama/ollama/pull/1189.patch", "merged_at": null }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1189/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6031
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6031/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6031/comments
https://api.github.com/repos/ollama/ollama/issues/6031/events
https://github.com/ollama/ollama/issues/6031
2,434,078,834
I_kwDOJ0Z1Ps6RFRhy
6,031
Timeout to start model too little - progress stalls at 100% for 5 minutes when loading with swap
{ "login": "forReason", "id": 12736950, "node_id": "MDQ6VXNlcjEyNzM2OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/12736950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forReason", "html_url": "https://github.com/forReason", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-07-28T19:29:54
2024-09-05T21:00:09
2024-09-05T21:00:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am trying to run llama:405b on hardware with only a little power and through a swap file. I'm not concerned about its speed. Though, the model can't load because: ``` ollama run llama3.1:405b --keepalive 5h Error: timed out waiting for llama runner to start - progress 1.00 - ``` is it ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6031/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6031/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4137
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4137/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4137/comments
https://api.github.com/repos/ollama/ollama/issues/4137/events
https://github.com/ollama/ollama/issues/4137
2,278,343,315
I_kwDOJ0Z1Ps6HzMKT
4,137
Support for HyperGAI/HPT1_5-Air-Llama-3-8B-Instruct-multimodal
{ "login": "Extremys", "id": 7710663, "node_id": "MDQ6VXNlcjc3MTA2NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/7710663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Extremys", "html_url": "https://github.com/Extremys", "followers_url": "https://api.github.com/users/Extre...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2024-05-03T20:02:58
2024-05-11T08:26:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello team, It would be so great to have this new multimodal llama3-based model supported with ollama! Thanks! https://huggingface.co/HyperGAI/HPT1_5-Air-Llama-3-8B-Instruct-multimodal https://github.com/HyperGAI/HPT?tab=readme-ov-file#installation
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4137/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4137/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7261
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7261/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7261/comments
https://api.github.com/repos/ollama/ollama/issues/7261/events
https://github.com/ollama/ollama/issues/7261
2,598,233,203
I_kwDOJ0Z1Ps6a3eRz
7,261
Install on any drive
{ "login": "DavidHF", "id": 5684280, "node_id": "MDQ6VXNlcjU2ODQyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/5684280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidHF", "html_url": "https://github.com/DavidHF", "followers_url": "https://api.github.com/users/DavidHF/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-10-18T19:23:22
2024-10-18T22:29:04
2024-10-18T22:29:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
What about installing on drives other than C:?
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7261/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4909
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4909/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4909/comments
https://api.github.com/repos/ollama/ollama/issues/4909/events
https://github.com/ollama/ollama/pull/4909
2,340,722,069
PR_kwDOJ0Z1Ps5x0Cnw
4,909
Add ability to skip oneapi generate
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-06-07T15:33:28
2024-06-07T21:07:18
2024-06-07T21:07:15
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4909", "html_url": "https://github.com/ollama/ollama/pull/4909", "diff_url": "https://github.com/ollama/ollama/pull/4909.diff", "patch_url": "https://github.com/ollama/ollama/pull/4909.patch", "merged_at": "2024-06-07T21:07:15" }
This follows the same pattern for cuda and rocm to allow disabling the build even when we detect the dependent libraries. Related to #4511
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4909/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/504
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/504/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/504/comments
https://api.github.com/repos/ollama/ollama/issues/504/events
https://github.com/ollama/ollama/issues/504
1,889,153,739
I_kwDOJ0Z1Ps5wmjLL
504
Python package
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
4
2023-09-10T13:50:17
2024-03-11T19:33:40
2024-03-11T19:33:40
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Quite a few folks have been running: ``` pip install ollama ``` However there isn't yet a python package (there was previously an old Ollama prototype from July). This issue tracks having a first-class python package for using Ollama.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/504/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4060
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4060/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4060/comments
https://api.github.com/repos/ollama/ollama/issues/4060/events
https://github.com/ollama/ollama/pull/4060
2,272,340,961
PR_kwDOJ0Z1Ps5uMF0o
4,060
Update llama.cpp submodule to `f364eb6`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-04-30T19:50:05
2024-04-30T21:25:40
2024-04-30T21:25:40
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4060", "html_url": "https://github.com/ollama/ollama/pull/4060", "diff_url": "https://github.com/ollama/ollama/pull/4060.diff", "patch_url": "https://github.com/ollama/ollama/pull/4060.patch", "merged_at": "2024-04-30T21:25:40" }
Also filters out stop words for now from being returned in the API as they will print on older clients
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4060/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7429
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7429/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7429/comments
https://api.github.com/repos/ollama/ollama/issues/7429/events
https://github.com/ollama/ollama/issues/7429
2,625,299,820
I_kwDOJ0Z1Ps6ceuVs
7,429
cuda device ordering inconsistent between runtime and management library
{ "login": "Nepherpitou", "id": 6158945, "node_id": "MDQ6VXNlcjYxNTg5NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6158945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nepherpitou", "html_url": "https://github.com/Nepherpitou", "followers_url": "https://api.github.com/us...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-10-30T20:44:12
2024-11-02T23:35:42
2024-11-02T23:35:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ### My GPU setup is: 1. RTX 3090 - first PCIE 5.0 x16, but secondary GPU 2. RTX 4090 - second PCIE 4.0 x4, but primary GPU So, I have a weird bug with memory estimations. There are two calls for device memory usage info: 1. `C.cudart_bootstrap(*cHandles.cudart, C.int(i), &memInfo)` her...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7429/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2380
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2380/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2380/comments
https://api.github.com/repos/ollama/ollama/issues/2380/events
https://github.com/ollama/ollama/issues/2380
2,122,171,715
I_kwDOJ0Z1Ps5-fcVD
2,380
Ollama is unstable recently
{ "login": "lestan", "id": 1471736, "node_id": "MDQ6VXNlcjE0NzE3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lestan", "html_url": "https://github.com/lestan", "followers_url": "https://api.github.com/users/lestan/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-02-07T04:38:04
2024-02-08T00:13:19
2024-02-08T00:13:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
As of at least the last two recent versions, I have been experiencing a lot of issues with Ollama. Primarily, it seems to report that it can't connect to the server when using the Ollama CLI commands, even though the server is running and I can curl it. Also when using the Ollama Python SDK, I often get a Connection ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2380/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5848
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5848/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5848/comments
https://api.github.com/repos/ollama/ollama/issues/5848/events
https://github.com/ollama/ollama/issues/5848
2,422,550,533
I_kwDOJ0Z1Ps6QZTAF
5,848
The logs do not contain the request content sent by the client.
{ "login": "H9990HH969", "id": 133352113, "node_id": "U_kgDOB_LKsQ", "avatar_url": "https://avatars.githubusercontent.com/u/133352113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/H9990HH969", "html_url": "https://github.com/H9990HH969", "followers_url": "https://api.github.com/users/H99...
[]
closed
false
null
[]
null
3
2024-07-22T10:44:31
2024-08-01T22:48:07
2024-08-01T22:48:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
To facilitate debugging of the program, I need to see the requests sent to the large model from the frontend. However, I've noticed that the request URLs and contents are not visible in the logs. Where can I find them? I have deployed DBGPT using Docker.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5848/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5868
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5868/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5868/comments
https://api.github.com/repos/ollama/ollama/issues/5868/events
https://github.com/ollama/ollama/issues/5868
2,424,504,024
I_kwDOJ0Z1Ps6Qgv7Y
5,868
webUI
{ "login": "812781385", "id": 33051062, "node_id": "MDQ6VXNlcjMzMDUxMDYy", "avatar_url": "https://avatars.githubusercontent.com/u/33051062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/812781385", "html_url": "https://github.com/812781385", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-23T07:45:28
2024-07-26T08:42:26
2024-07-26T08:42:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I developed an open-source web UI and services based on Ollama, including function calling. If you're interested, take a look; if you find it useful, I'd appreciate a star: https://github.com/812781385/ollama-webUI
{ "login": "812781385", "id": 33051062, "node_id": "MDQ6VXNlcjMzMDUxMDYy", "avatar_url": "https://avatars.githubusercontent.com/u/33051062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/812781385", "html_url": "https://github.com/812781385", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5868/reactions", "total_count": 4, "+1": 0, "-1": 4, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5868/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8645
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8645/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8645/comments
https://api.github.com/repos/ollama/ollama/issues/8645/events
https://github.com/ollama/ollama/issues/8645
2,816,966,847
I_kwDOJ0Z1Ps6n54C_
8,645
Unsloth's dynamic quantizations of Deepseek R1
{ "login": "jjparady", "id": 83677301, "node_id": "MDQ6VXNlcjgzNjc3MzAx", "avatar_url": "https://avatars.githubusercontent.com/u/83677301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jjparady", "html_url": "https://github.com/jjparady", "followers_url": "https://api.github.com/users/jjp...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2025-01-29T00:20:43
2025-01-29T23:26:04
2025-01-29T23:26:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would love to have these dynamic quantizations readily available in ollama: https://huggingface.co/unsloth/DeepSeek-R1-GGUF
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8645/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6215
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6215/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6215/comments
https://api.github.com/repos/ollama/ollama/issues/6215/events
https://github.com/ollama/ollama/issues/6215
2,451,983,310
I_kwDOJ0Z1Ps6SJkvO
6,215
Ollama update (0.3.3) prevents running llama3.1:70b or llama3.1:8b with tools
{ "login": "imsaumil", "id": 66752084, "node_id": "MDQ6VXNlcjY2NzUyMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/66752084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imsaumil", "html_url": "https://github.com/imsaumil", "followers_url": "https://api.github.com/users/ims...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-08-07T01:36:11
2024-11-11T07:46:49
2024-11-06T00:53:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I had an old version of Ollama (do not remember the previous version) and had llama3.1:70b installed which was running fine. But I wanted to install llama3.1:8b and it did not let me pull without updating my Ollama. After the update with fresh pull of llama3.1:70b does not work as expected and g...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6215/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7397
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7397/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7397/comments
https://api.github.com/repos/ollama/ollama/issues/7397/events
https://github.com/ollama/ollama/issues/7397
2,618,381,007
I_kwDOJ0Z1Ps6cEVLP
7,397
Please update NuExtract to v1.5
{ "login": "KIC", "id": 10957396, "node_id": "MDQ6VXNlcjEwOTU3Mzk2", "avatar_url": "https://avatars.githubusercontent.com/u/10957396?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KIC", "html_url": "https://github.com/KIC", "followers_url": "https://api.github.com/users/KIC/followers", ...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-10-28T13:08:54
2024-11-18T09:13:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please update [NuExtract](https://ollama.com/library/nuextract) to the newest version on [huggingface](https://huggingface.co/numind/NuExtract-v1.5/tree/main)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7397/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/7397/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3095
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3095/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3095/comments
https://api.github.com/repos/ollama/ollama/issues/3095/events
https://github.com/ollama/ollama/issues/3095
2,183,235,371
I_kwDOJ0Z1Ps6CIYcr
3,095
Limit ollama usage of GPUs using CUDA_VISIBLE_DEVICES
{ "login": "fengbolan", "id": 65692219, "node_id": "MDQ6VXNlcjY1NjkyMjE5", "avatar_url": "https://avatars.githubusercontent.com/u/65692219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fengbolan", "html_url": "https://github.com/fengbolan", "followers_url": "https://api.github.com/users/...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
12
2024-03-13T06:42:44
2024-04-12T22:26:09
2024-04-12T22:26:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've read the updated docs. The previous issue regarding the inability to limit Ollama's usage of GPUs using CUDA_VISIBLE_DEVICES has not been resolved. Despite setting the environment variable CUDA_VISIBLE_DEVICES to a specific range or list of GPU IDs, Ollama continues to use all available GPUs during training instead ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3095/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3095/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6487
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6487/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6487/comments
https://api.github.com/repos/ollama/ollama/issues/6487/events
https://github.com/ollama/ollama/issues/6487
2,484,288,763
I_kwDOJ0Z1Ps6UEzz7
6,487
When invoked from the command line in an active conversation session, missing model for `/load` shouldn't be fatal error
{ "login": "erkinalp", "id": 5833034, "node_id": "MDQ6VXNlcjU4MzMwMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erkinalp", "html_url": "https://github.com/erkinalp", "followers_url": "https://api.github.com/users/erkin...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-08-24T06:32:06
2024-08-24T06:32:06
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? if you try to load a nonexistent model ``` Loading model 'nonexistent.' Error: model "nonexistent." not found, try pulling it first ``` then quits the existing session ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.3.6
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6487/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1631
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1631/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1631/comments
https://api.github.com/repos/ollama/ollama/issues/1631/events
https://github.com/ollama/ollama/issues/1631
2,050,545,894
I_kwDOJ0Z1Ps56ONjm
1,631
WSL2: GPU not working anymore
{ "login": "mircomir", "id": 19854897, "node_id": "MDQ6VXNlcjE5ODU0ODk3", "avatar_url": "https://avatars.githubusercontent.com/u/19854897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mircomir", "html_url": "https://github.com/mircomir", "followers_url": "https://api.github.com/users/mir...
[]
closed
false
null
[]
null
6
2023-12-20T13:24:32
2024-01-13T19:50:00
2024-01-10T15:07:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I updated Ollama to the latest version (0.1.17) on Ubuntu WSL2 and GPU support is not recognized anymore. At the end of installation I have the following message: "WARNING: No NVIDIA GPU detected. Ollama will run in CPU-only mode." Running nvidia-smi: Wed Dec 20 14:23:15 2023 +----------------------------------...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1631/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1631/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7486
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7486/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7486/comments
https://api.github.com/repos/ollama/ollama/issues/7486/events
https://github.com/ollama/ollama/pull/7486
2,631,757,246
PR_kwDOJ0Z1Ps6Avz1V
7,486
I added my ollama web ui
{ "login": "samirgaire10", "id": 118608337, "node_id": "U_kgDOBxHR0Q", "avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samirgaire10", "html_url": "https://github.com/samirgaire10", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
1
2024-11-04T03:47:24
2024-11-05T01:45:13
2024-11-05T01:45:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7486", "html_url": "https://github.com/ollama/ollama/pull/7486", "diff_url": "https://github.com/ollama/ollama/pull/7486.diff", "patch_url": "https://github.com/ollama/ollama/pull/7486.patch", "merged_at": null }
null
{ "login": "samirgaire10", "id": 118608337, "node_id": "U_kgDOBxHR0Q", "avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samirgaire10", "html_url": "https://github.com/samirgaire10", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7486/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6054
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6054/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6054/comments
https://api.github.com/repos/ollama/ollama/issues/6054/events
https://github.com/ollama/ollama/pull/6054
2,435,666,891
PR_kwDOJ0Z1Ps52woSh
6,054
Added reference to Llama.cpp docs for passed through API options
{ "login": "noggynoggy", "id": 50501527, "node_id": "MDQ6VXNlcjUwNTAxNTI3", "avatar_url": "https://avatars.githubusercontent.com/u/50501527?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noggynoggy", "html_url": "https://github.com/noggynoggy", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
2
2024-07-29T15:01:04
2024-11-21T11:15:22
2024-11-21T11:15:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6054", "html_url": "https://github.com/ollama/ollama/pull/6054", "diff_url": "https://github.com/ollama/ollama/pull/6054.diff", "patch_url": "https://github.com/ollama/ollama/pull/6054.patch", "merged_at": null }
The API docs do not explain what all options listed [here](https://github.com/ollama/ollama/blob/0e4d653687f81db40622e287a846245b319f1fbe/docs/api.md?plain=1#L334-L362) do, some are explained in [the modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values) but all "passed thr...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6054/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6227
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6227/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6227/comments
https://api.github.com/repos/ollama/ollama/issues/6227/events
https://github.com/ollama/ollama/issues/6227
2,452,825,229
I_kwDOJ0Z1Ps6SMySN
6,227
ollama cannot start on ubuntu 22.04
{ "login": "garyyang85", "id": 20335728, "node_id": "MDQ6VXNlcjIwMzM1NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/20335728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garyyang85", "html_url": "https://github.com/garyyang85", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-08-07T07:55:54
2024-08-11T12:42:35
2024-08-11T12:00:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? First time to run ollama, follow the guide: https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install service cannot start, logs: ``` journalctl -u ollama --no-pager Aug 07 15:35:48 i-2y1kobn5 systemd[1]: Started Ollama Service. Aug 07 15:35:48 i-2y1kobn5 systemd[1]: olla...
{ "login": "garyyang85", "id": 20335728, "node_id": "MDQ6VXNlcjIwMzM1NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/20335728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garyyang85", "html_url": "https://github.com/garyyang85", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6227/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3761
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3761/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3761/comments
https://api.github.com/repos/ollama/ollama/issues/3761/events
https://github.com/ollama/ollama/issues/3761
2,253,640,461
I_kwDOJ0Z1Ps6GU9MN
3,761
GPU not detected in Kubernetes.
{ "login": "dylanbstorey", "id": 6005970, "node_id": "MDQ6VXNlcjYwMDU5NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dylanbstorey", "html_url": "https://github.com/dylanbstorey", "followers_url": "https://api.github.com...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
17
2024-04-19T18:28:29
2024-10-07T11:21:24
2024-05-08T12:31:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When deploying into kubernetes the container is complaining about being unable to load the cudart library. (Or maybe its out of date) Based on the documentation and provided examples I expect it to detect and utilize the GPU in container. Every test I can think of (which is limited) seems...
{ "login": "dylanbstorey", "id": 6005970, "node_id": "MDQ6VXNlcjYwMDU5NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dylanbstorey", "html_url": "https://github.com/dylanbstorey", "followers_url": "https://api.github.com...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3761/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1203
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1203/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1203/comments
https://api.github.com/repos/ollama/ollama/issues/1203/events
https://github.com/ollama/ollama/issues/1203
2,001,581,043
I_kwDOJ0Z1Ps53TbPz
1,203
Generating context from aborted request
{ "login": "FairyTail2000", "id": 22645621, "node_id": "MDQ6VXNlcjIyNjQ1NjIx", "avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FairyTail2000", "html_url": "https://github.com/FairyTail2000", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
6
2023-11-20T07:58:30
2024-11-22T07:07:10
2023-12-04T23:01:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
For my own frontend I noticed that it might be useful to have an endpoint where I can generate context from optionally previous context, the typed prompt from the user and the answer of the model before it was interrupted. This could create a similar experience to OpenAI's ChatGPT
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1203/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6807
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6807/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6807/comments
https://api.github.com/repos/ollama/ollama/issues/6807/events
https://github.com/ollama/ollama/issues/6807
2,526,592,624
I_kwDOJ0Z1Ps6WmL5w
6,807
Slow model load and cache ram does not free.
{ "login": "pisoiu", "id": 51887464, "node_id": "MDQ6VXNlcjUxODg3NDY0", "avatar_url": "https://avatars.githubusercontent.com/u/51887464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pisoiu", "html_url": "https://github.com/pisoiu", "followers_url": "https://api.github.com/users/pisoiu/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
15
2024-09-14T20:14:31
2024-11-05T23:24:10
2024-11-05T23:24:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi all. My system: AMD TR PRO 3975WX CPU, 512G RAM DDR4 ECC, 3xRTX A4000 (48G VRAM) GPU, 4TB Nvme corsair mp600 core xt, Ubuntu 22.04.1 LTS I'm not a specialist in Linux, so don't throw stones. Problem 1: According to various tests, transfer speed of DDR4 can go up to 25GB/s. According to the ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6807/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1699
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1699/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1699/comments
https://api.github.com/repos/ollama/ollama/issues/1699/events
https://github.com/ollama/ollama/issues/1699
2,055,176,692
I_kwDOJ0Z1Ps56f4H0
1,699
Modelfile parameters not set during creation
{ "login": "tylertitsworth", "id": 43555799, "node_id": "MDQ6VXNlcjQzNTU1Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/43555799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tylertitsworth", "html_url": "https://github.com/tylertitsworth", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2023-12-24T17:59:31
2024-03-12T00:27:08
2024-03-12T00:27:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a model like so: (also providing system details) ```Dockerfile $ cat /etc/os-release | head -n 4 PRETTY_NAME="Ubuntu 22.04.2 LTS" NAME="Ubuntu" VERSION_ID="22.04" VERSION="22.04.2 LTS (Jammy Jellyfish)" $ ollama -v ollama version is 0.1.17 $ ollama show test --modelfile # Modelfile generated by "ollama...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1699/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1699/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2317
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2317/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2317/comments
https://api.github.com/repos/ollama/ollama/issues/2317/events
https://github.com/ollama/ollama/pull/2317
2,113,911,318
PR_kwDOJ0Z1Ps5lxtxi
2,317
Add multimodel support to `ollama run` in noninteractive mode
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-02-02T02:39:22
2024-02-02T05:33:07
2024-02-02T05:33:06
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2317", "html_url": "https://github.com/ollama/ollama/pull/2317", "diff_url": "https://github.com/ollama/ollama/pull/2317.diff", "patch_url": "https://github.com/ollama/ollama/pull/2317.patch", "merged_at": "2024-02-02T05:33:06" }
Fixes https://github.com/ollama/ollama/issues/2295 ``` % ollama run llava Describe this image: /Users/jmorgan/Desktop/old-tower.jpg Added image '/Users/jmorgan/Desktop/old-tower.jpg' The image depicts a vibrant cityscape. In the foreground, there's an iconic skyscraper, which is the CN Tower, a landmark of Toro...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2317/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/485
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/485/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/485/comments
https://api.github.com/repos/ollama/ollama/issues/485/events
https://github.com/ollama/ollama/issues/485
1,886,350,976
I_kwDOJ0Z1Ps5wb26A
485
check subprocess id to see if server is running rather than timing out
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
0
2023-09-07T17:57:03
2023-09-18T19:16:34
2023-09-18T19:16:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The timeout for a server to start running will need to be a long time for larger models; better to just check the process ID and then wait for the server to respond (with a really long timeout) rather than relying on the timeout by itself.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/485/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7071
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7071/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7071/comments
https://api.github.com/repos/ollama/ollama/issues/7071/events
https://github.com/ollama/ollama/pull/7071
2,560,381,790
PR_kwDOJ0Z1Ps59UNdW
7,071
llm: Don't add BOS/EOS for tokenize requests
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-10-01T23:29:53
2024-10-01T23:46:25
2024-10-01T23:46:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7071", "html_url": "https://github.com/ollama/ollama/pull/7071", "diff_url": "https://github.com/ollama/ollama/pull/7071.diff", "patch_url": "https://github.com/ollama/ollama/pull/7071.patch", "merged_at": "2024-10-01T23:46:23" }
This is consistent with what server.cpp currently does. It affects things like token processing counts for embedding requests.
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7071/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1005
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1005/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1005/comments
https://api.github.com/repos/ollama/ollama/issues/1005/events
https://github.com/ollama/ollama/issues/1005
1,977,548,240
I_kwDOJ0Z1Ps513v3Q
1,005
Improved context window size management
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5808482718, "node_id": ...
open
false
null
[]
null
9
2023-11-04T23:13:47
2024-11-27T10:08:51
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Context window size is largely manual right now – it can be specified via `{"options": {"num_ctx": 32768}}` in the API or via `PARAMETER num_ctx 32768` in the Modelfile. Otherwise the default value is set to `2048` unless specified (some models in the [library](https://ollama.ai/) will use a larger context window size b...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1005/reactions", "total_count": 60, "+1": 57, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1005/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/611
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/611/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/611/comments
https://api.github.com/repos/ollama/ollama/issues/611/events
https://github.com/ollama/ollama/pull/611
1,914,479,379
PR_kwDOJ0Z1Ps5bSIEV
611
fix error messages for unknown commands in the repl
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-09-27T00:33:10
2023-09-28T21:19:46
2023-09-28T21:19:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/611", "html_url": "https://github.com/ollama/ollama/pull/611", "diff_url": "https://github.com/ollama/ollama/pull/611.diff", "patch_url": "https://github.com/ollama/ollama/pull/611.patch", "merged_at": "2023-09-28T21:19:46" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/611/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5669
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5669/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5669/comments
https://api.github.com/repos/ollama/ollama/issues/5669/events
https://github.com/ollama/ollama/issues/5669
2,406,831,342
I_kwDOJ0Z1Ps6PdVTu
5,669
"error loading llama server" error="llama runner process has terminated: exit status 0xc0000135 "
{ "login": "lorenzodimauro97", "id": 50343905, "node_id": "MDQ6VXNlcjUwMzQzOTA1", "avatar_url": "https://avatars.githubusercontent.com/u/50343905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorenzodimauro97", "html_url": "https://github.com/lorenzodimauro97", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-07-13T10:08:56
2024-07-15T09:56:18
2024-07-15T09:56:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Cannot load any model with ollama 0.2.3, this is some of the logs: time=2024-07-13T12:06:59.113+02:00 level=INFO source=sched.go:179 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency" time=2024-07-13T12:06:59.126+02:00 level=INFO ...
{ "login": "lorenzodimauro97", "id": 50343905, "node_id": "MDQ6VXNlcjUwMzQzOTA1", "avatar_url": "https://avatars.githubusercontent.com/u/50343905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorenzodimauro97", "html_url": "https://github.com/lorenzodimauro97", "followers_url": "https://...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5669/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5904
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5904/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5904/comments
https://api.github.com/repos/ollama/ollama/issues/5904/events
https://github.com/ollama/ollama/issues/5904
2,426,848,795
I_kwDOJ0Z1Ps6QpsYb
5,904
llama runner process has terminated: signal: aborted (core dumped)
{ "login": "Dudu0831", "id": 88758930, "node_id": "MDQ6VXNlcjg4NzU4OTMw", "avatar_url": "https://avatars.githubusercontent.com/u/88758930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dudu0831", "html_url": "https://github.com/Dudu0831", "followers_url": "https://api.github.com/users/Dud...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6947643302, "node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg...
open
false
null
[]
null
6
2024-07-24T07:52:33
2024-11-06T01:01:00
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I successfully converted jina-embeddings v2 base zh to gguf through llama.cpp and imported it into ollama. Here is my Modelfile > root@buaa-KVM:~/1T/ollama/Jina-AI-embedding# cat Modelfile > FROM /root/ggml-vocab-jina-v2-zh.gguf > PARAMETER num_ctx 8192 When I access it using /app/embed...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5904/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5670
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5670/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5670/comments
https://api.github.com/repos/ollama/ollama/issues/5670/events
https://github.com/ollama/ollama/issues/5670
2,406,851,962
I_kwDOJ0Z1Ps6PdaV6
5,670
The usage of VRAM has significantly increased
{ "login": "lingyezhixing", "id": 144504450, "node_id": "U_kgDOCJz2gg", "avatar_url": "https://avatars.githubusercontent.com/u/144504450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lingyezhixing", "html_url": "https://github.com/lingyezhixing", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw...
open
false
null
[]
null
5
2024-07-13T11:15:40
2024-10-24T02:45:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? In previous versions, I set the context length of each of my models to the maximum value that could be fully loaded onto the GPU memory. However, after the update, I found that parts of them were being partially loaded onto the CPU instead. I wonder what could be causing this. The following tabl...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5670/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5670/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3334
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3334/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3334/comments
https://api.github.com/repos/ollama/ollama/issues/3334/events
https://github.com/ollama/ollama/issues/3334
2,205,083,185
I_kwDOJ0Z1Ps6DbuYx
3,334
Certificate expired
{ "login": "cxzx150133", "id": 13826967, "node_id": "MDQ6VXNlcjEzODI2OTY3", "avatar_url": "https://avatars.githubusercontent.com/u/13826967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cxzx150133", "html_url": "https://github.com/cxzx150133", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-03-25T07:30:01
2024-03-25T08:45:42
2024-03-25T08:45:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? $ docker exec -it ollama ollama pull qwen:7b pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/manifests/7b": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2024-03-25T07:29:23Z is after 2024-03-25T0...
{ "login": "cxzx150133", "id": 13826967, "node_id": "MDQ6VXNlcjEzODI2OTY3", "avatar_url": "https://avatars.githubusercontent.com/u/13826967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cxzx150133", "html_url": "https://github.com/cxzx150133", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3334/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3334/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1154
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1154/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1154/comments
https://api.github.com/repos/ollama/ollama/issues/1154/events
https://github.com/ollama/ollama/issues/1154
1,997,530,054
I_kwDOJ0Z1Ps53D-PG
1,154
Cannot push models `FROM` library models
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
0
2023-11-16T18:51:53
2023-11-16T21:33:31
2023-11-16T21:33:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Attempting to push models with `FROM <library-model>` fails with scope errors. **Steps to reproduce:** 1. Create a Modelfile from a library model. ``` FROM llama2 SYSTEM """ You are Mario from super mario bros, acting as an assistant. """ ``` `ollama create <namespace>/mario -f path/to/modelfile` 2. Push ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1154/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6464
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6464/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6464/comments
https://api.github.com/repos/ollama/ollama/issues/6464/events
https://github.com/ollama/ollama/issues/6464
2,480,991,617
I_kwDOJ0Z1Ps6T4O2B
6,464
Error: unsupported content type: unknown
{ "login": "CorrectPath", "id": 179119218, "node_id": "U_kgDOCq0kcg", "avatar_url": "https://avatars.githubusercontent.com/u/179119218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CorrectPath", "html_url": "https://github.com/CorrectPath", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-08-22T14:50:31
2024-08-28T20:38:33
2024-08-28T20:38:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This is the first time I tried to create a model with a gguf file, but it failed ![Screenshot 2024-08-22 224737](https://github.com/user-attachments/assets/a2b95b02-11b1-48d4-a318-3dd52c276da7) model.modelfile ![Screenshot 2024-08-22 224322](https://github.com/user-attachments/assets/c4d85c12-3542-4f4f-b...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6464/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/649
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/649/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/649/comments
https://api.github.com/repos/ollama/ollama/issues/649/events
https://github.com/ollama/ollama/issues/649
1,919,731,310
I_kwDOJ0Z1Ps5ybMZu
649
Request: ensemble Llamas 🦙 (`llama2:13b-ensemble`)
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
6
2023-09-29T18:26:00
2023-12-04T20:04:02
2023-12-04T20:04:01
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
From Hugging Face's Open LLM leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard A 13b model ranked somewhat highly is [`yeontaek/llama-2-13B-ensemble-v5`](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v5). ![image](https://github.com/jmorgan...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/649/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2033
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2033/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2033/comments
https://api.github.com/repos/ollama/ollama/issues/2033/events
https://github.com/ollama/ollama/issues/2033
2,086,722,446
I_kwDOJ0Z1Ps58YNuO
2,033
Add Vulkan runner
{ "login": "maxwell-kalin", "id": 62115669, "node_id": "MDQ6VXNlcjYyMTE1NjY5", "avatar_url": "https://avatars.githubusercontent.com/u/62115669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxwell-kalin", "html_url": "https://github.com/maxwell-kalin", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6433346500, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
30
2024-01-17T18:15:00
2025-01-21T19:49:38
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://github.com/nomic-ai/llama.cpp GPT4All runs Mistral and Mixtral q4 models over 10x faster on my 6600M GPU
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2033/reactions", "total_count": 40, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 36, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2033/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2102
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2102/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2102/comments
https://api.github.com/repos/ollama/ollama/issues/2102/events
https://github.com/ollama/ollama/pull/2102
2,091,606,470
PR_kwDOJ0Z1Ps5kmhq_
2,102
fix: remove overwritten model layers
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-01-19T23:00:15
2024-01-22T17:37:50
2024-01-22T17:37:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2102", "html_url": "https://github.com/ollama/ollama/pull/2102", "diff_url": "https://github.com/ollama/ollama/pull/2102.diff", "patch_url": "https://github.com/ollama/ollama/pull/2102.patch", "merged_at": "2024-01-22T17:37:49" }
if create overrides a manifest, first add the older manifest's layers to the delete map so they can be cleaned up resolves #2097
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2102/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5815
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5815/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5815/comments
https://api.github.com/repos/ollama/ollama/issues/5815/events
https://github.com/ollama/ollama/pull/5815
2,420,963,614
PR_kwDOJ0Z1Ps51_b-4
5,815
Adjust windows ROCm discovery
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-20T16:23:10
2024-07-20T23:02:58
2024-07-20T23:02:55
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5815", "html_url": "https://github.com/ollama/ollama/pull/5815", "diff_url": "https://github.com/ollama/ollama/pull/5815.diff", "patch_url": "https://github.com/ollama/ollama/pull/5815.patch", "merged_at": "2024-07-20T23:02:55" }
The v5 hip library returns unsupported GPUs which wont enumerate at inference time in the runner so this makes sure we align discovery. The gfx906 cards are no longer supported so we shouldn't compile with that GPU type as it wont enumerate at runtime.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5815/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4232
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4232/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4232/comments
https://api.github.com/repos/ollama/ollama/issues/4232/events
https://github.com/ollama/ollama/pull/4232
2,283,891,969
PR_kwDOJ0Z1Ps5uyn6j
4,232
Revert "fix golangci workflow not enable gofmt and goimports"
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2024-05-07T17:36:15
2024-05-09T08:45:07
2024-05-07T17:39:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4232", "html_url": "https://github.com/ollama/ollama/pull/4232", "diff_url": "https://github.com/ollama/ollama/pull/4232.diff", "patch_url": "https://github.com/ollama/ollama/pull/4232.patch", "merged_at": "2024-05-07T17:39:37" }
Reverts ollama/ollama#4190 gofmt is still a problem on windows see https://github.com/ollama/ollama/actions/runs/8989369091/job/24692319408?pr=4153
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4232/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4414
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4414/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4414/comments
https://api.github.com/repos/ollama/ollama/issues/4414/events
https://github.com/ollama/ollama/pull/4414
2,294,008,756
PR_kwDOJ0Z1Ps5vUtZD
4,414
update llama.cpp submodule to `614d3b9`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2024-05-13T23:23:19
2024-05-16T20:53:10
2024-05-16T20:53:10
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4414", "html_url": "https://github.com/ollama/ollama/pull/4414", "diff_url": "https://github.com/ollama/ollama/pull/4414.diff", "patch_url": "https://github.com/ollama/ollama/pull/4414.patch", "merged_at": "2024-05-16T20:53:09" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4414/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4414/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2904
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2904/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2904/comments
https://api.github.com/repos/ollama/ollama/issues/2904/events
https://github.com/ollama/ollama/issues/2904
2,165,614,507
I_kwDOJ0Z1Ps6BFKer
2,904
cuMemCreate with gpu nvidia m2000
{ "login": "aymengazzah", "id": 152094579, "node_id": "U_kgDOCRDHcw", "avatar_url": "https://avatars.githubusercontent.com/u/152094579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aymengazzah", "html_url": "https://github.com/aymengazzah", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-03-03T23:10:38
2024-03-05T20:25:02
2024-03-05T20:25:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
"Hi, is anyone else experiencing this error with the GPU? The GPU successfully passes through for video transcoding in another container app (Emby/Plex), but it shows an error for all ollama models." ### Error library ` level=WARN source=llm.go:162 msg="Failed to load dynamic library /tmp/ollama42357201...
{ "login": "aymengazzah", "id": 152094579, "node_id": "U_kgDOCRDHcw", "avatar_url": "https://avatars.githubusercontent.com/u/152094579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aymengazzah", "html_url": "https://github.com/aymengazzah", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2904/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5802
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5802/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5802/comments
https://api.github.com/repos/ollama/ollama/issues/5802/events
https://github.com/ollama/ollama/pull/5802
2,420,416,791
PR_kwDOJ0Z1Ps519kW8
5,802
preserve last assistant message
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-20T00:50:59
2024-07-20T03:19:28
2024-07-20T03:19:26
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5802", "html_url": "https://github.com/ollama/ollama/pull/5802", "diff_url": "https://github.com/ollama/ollama/pull/5802.diff", "patch_url": "https://github.com/ollama/ollama/pull/5802.patch", "merged_at": "2024-07-20T03:19:26" }
Fixes https://github.com/ollama/ollama/issues/5775
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5802/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5802/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5976
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5976/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5976/comments
https://api.github.com/repos/ollama/ollama/issues/5976/events
https://github.com/ollama/ollama/issues/5976
2,431,751,366
I_kwDOJ0Z1Ps6Q8ZTG
5,976
Unnecessary quotes when calling a tool
{ "login": "napa3um", "id": 665538, "node_id": "MDQ6VXNlcjY2NTUzOA==", "avatar_url": "https://avatars.githubusercontent.com/u/665538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/napa3um", "html_url": "https://github.com/napa3um", "followers_url": "https://api.github.com/users/napa3um/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-07-26T08:52:09
2024-07-26T08:52:09
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm using **mistral-nemo:12b-instruct-2407-q4_1** I'm trying to reproduce this example - https://github.com/ollama/ollama-js/blob/main/examples/tools/tools.ts ```javascript tools: [ { type: 'function', function: { name: '...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5976/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5976/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5392
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5392/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5392/comments
https://api.github.com/repos/ollama/ollama/issues/5392/events
https://github.com/ollama/ollama/pull/5392
2,382,257,842
PR_kwDOJ0Z1Ps5z_MFu
5,392
add ppc64le to code issues 796
{ "login": "ALutz273", "id": 72616997, "node_id": "MDQ6VXNlcjcyNjE2OTk3", "avatar_url": "https://avatars.githubusercontent.com/u/72616997?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ALutz273", "html_url": "https://github.com/ALutz273", "followers_url": "https://api.github.com/users/ALu...
[]
closed
false
null
[]
null
0
2024-06-30T13:34:26
2024-11-08T15:51:36
2024-11-08T15:51:36
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5392", "html_url": "https://github.com/ollama/ollama/pull/5392", "diff_url": "https://github.com/ollama/ollama/pull/5392.diff", "patch_url": "https://github.com/ollama/ollama/pull/5392.patch", "merged_at": null }
I tested it on a Power9 machine and the change worked. Unfortunately I don't have a GPU in there yet (cuda)
{ "login": "ALutz273", "id": 72616997, "node_id": "MDQ6VXNlcjcyNjE2OTk3", "avatar_url": "https://avatars.githubusercontent.com/u/72616997?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ALutz273", "html_url": "https://github.com/ALutz273", "followers_url": "https://api.github.com/users/ALu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5392/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7993
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7993/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7993/comments
https://api.github.com/repos/ollama/ollama/issues/7993/events
https://github.com/ollama/ollama/issues/7993
2,724,935,659
I_kwDOJ0Z1Ps6iazfr
7,993
Structured generation cannot handle self referencing (recursion)
{ "login": "CakeCrusher", "id": 37946988, "node_id": "MDQ6VXNlcjM3OTQ2OTg4", "avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CakeCrusher", "html_url": "https://github.com/CakeCrusher", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
2
2024-12-08T03:51:25
2025-01-29T17:57:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama structured genereation cannot handle self referencing recursion ```py import json from pydantic import BaseModel, Field from typing import Optional class Dossier(BaseModel): """Build a profile for the user""" name: str = Field(..., description="The name of the user") ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7993/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7993/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1092
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1092/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1092/comments
https://api.github.com/repos/ollama/ollama/issues/1092/events
https://github.com/ollama/ollama/issues/1092
1,989,063,350
I_kwDOJ0Z1Ps52jrK2
1,092
build failure: `APPLE_IDENTITY: unbound variable`
{ "login": "jpmcb", "id": 23109390, "node_id": "MDQ6VXNlcjIzMTA5Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/23109390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jpmcb", "html_url": "https://github.com/jpmcb", "followers_url": "https://api.github.com/users/jpmcb/follow...
[]
closed
false
null
[]
null
1
2023-11-11T18:04:33
2023-11-12T22:25:08
2023-11-12T22:25:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Attempting to build on the darwin platform using the `build/build_darwing.sh` script results in the following error: ``` ./scripts/build_darwin.sh: line 17: APPLE_IDENTITY: unbound variable ``` This is after go generate (with `cmake` for th llama.cpp targets) and the `ollama` binary have completed building: ...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1092/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/8326
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8326/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8326/comments
https://api.github.com/repos/ollama/ollama/issues/8326/events
https://github.com/ollama/ollama/issues/8326
2,771,707,302
I_kwDOJ0Z1Ps6lNOWm
8,326
Error: pull model manifest: 400: The specified repository contains sharded GGUF. Ollama does not support this yet.
{ "login": "OnceCrazyer", "id": 16172911, "node_id": "MDQ6VXNlcjE2MTcyOTEx", "avatar_url": "https://avatars.githubusercontent.com/u/16172911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OnceCrazyer", "html_url": "https://github.com/OnceCrazyer", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2025-01-07T01:17:03
2025-01-24T09:44:14
2025-01-24T09:44:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Error: pull model manifest: 400: The specified repository contains sharded GGUF. Ollama does not support this yet. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.5.4
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8326/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/532
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/532/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/532/comments
https://api.github.com/repos/ollama/ollama/issues/532/events
https://github.com/ollama/ollama/pull/532
1,897,719,368
PR_kwDOJ0Z1Ps5aZ0p7
532
remove `.First`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2023-09-15T05:12:25
2024-01-09T18:58:37
2024-01-09T18:58:37
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/532", "html_url": "https://github.com/ollama/ollama/pull/532", "diff_url": "https://github.com/ollama/ollama/pull/532.diff", "patch_url": "https://github.com/ollama/ollama/pull/532.patch", "merged_at": null }
This change removes the need for `.First`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/532/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8453
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8453/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8453/comments
https://api.github.com/repos/ollama/ollama/issues/8453/events
https://github.com/ollama/ollama/issues/8453
2,792,105,255
I_kwDOJ0Z1Ps6mbCUn
8,453
support ReaderLM-v2
{ "login": "sunburst-yz", "id": 37734140, "node_id": "MDQ6VXNlcjM3NzM0MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/37734140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sunburst-yz", "html_url": "https://github.com/sunburst-yz", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2025-01-16T09:02:14
2025-01-19T18:33:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/jinaai/ReaderLM-v2 ReaderLM-v2 is specialized for tasks involving HTML parsing, transformation, and text extraction.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8453/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8453/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2281
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2281/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2281/comments
https://api.github.com/repos/ollama/ollama/issues/2281/events
https://github.com/ollama/ollama/issues/2281
2,108,424,779
I_kwDOJ0Z1Ps59rAJL
2,281
Support GPU runners with AVX2
{ "login": "hyjwei", "id": 76876891, "node_id": "MDQ6VXNlcjc2ODc2ODkx", "avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyjwei", "html_url": "https://github.com/hyjwei", "followers_url": "https://api.github.com/users/hyjwei/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6677745918, "node_id": ...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2024-01-30T17:47:16
2024-12-10T17:47:22
2024-12-10T17:47:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am running ollama on i7-14700K, which supports AVX2 and AVX_VNNI, and a GeForce RTX 1060. After reading #2205, I enable `OLLAMA_DEBUG=1` to check if ollama utilize AVX2 of this CPU. But unlike that one, I couldn't get ollama to use AVX2. The debug message has: ``` time=2024-01-30T12:27:26.016-05:00 level=INFO so...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2281/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1508
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1508/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1508/comments
https://api.github.com/repos/ollama/ollama/issues/1508/events
https://github.com/ollama/ollama/issues/1508
2,040,308,485
I_kwDOJ0Z1Ps55nKMF
1,508
Error: llama runner process has terminated on M2
{ "login": "milioe", "id": 80537193, "node_id": "MDQ6VXNlcjgwNTM3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/80537193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/milioe", "html_url": "https://github.com/milioe", "followers_url": "https://api.github.com/users/milioe/fo...
[]
closed
false
null
[]
null
5
2023-12-13T19:07:19
2023-12-17T16:02:36
2023-12-14T04:29:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm currently running Ollama on a MacBook Air m2 (8GB) I firstly installed Ollama through `brew install ollama` and got `Error: llama runner process has terminated` after pulling and running `mistral:instruct` and `mistral:latest` . After that, I uninstalled using `brew uninstall ollama` then installing it thro...
{ "login": "milioe", "id": 80537193, "node_id": "MDQ6VXNlcjgwNTM3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/80537193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/milioe", "html_url": "https://github.com/milioe", "followers_url": "https://api.github.com/users/milioe/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1508/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2385
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2385/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2385/comments
https://api.github.com/repos/ollama/ollama/issues/2385/events
https://github.com/ollama/ollama/issues/2385
2,122,637,940
I_kwDOJ0Z1Ps5-hOJ0
2,385
ollama breaks running qwen on ubuntu 20
{ "login": "cognitivetech", "id": 55156785, "node_id": "MDQ6VXNlcjU1MTU2Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cognitivetech", "html_url": "https://github.com/cognitivetech", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
2
2024-02-07T09:59:57
2024-02-09T20:46:26
2024-02-09T20:46:26
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Either using the version included with `ollama pull qwen` or using my own custom modelfile with q8 and chatml template qwen causes ollama to get "stuck" it doesn't use GPU for qwen, or any other working model after trying qwen until reboot. see also: https://github.com/ollama/ollama/issues/1691
{ "login": "cognitivetech", "id": 55156785, "node_id": "MDQ6VXNlcjU1MTU2Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cognitivetech", "html_url": "https://github.com/cognitivetech", "followers_url": "https://api.githu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2385/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2385/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/558
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/558/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/558/comments
https://api.github.com/repos/ollama/ollama/issues/558/events
https://github.com/ollama/ollama/pull/558
1,905,531,929
PR_kwDOJ0Z1Ps5a0Bxc
558
add dockerfile for building linux binaries
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2023-09-20T18:40:01
2023-09-22T19:20:13
2023-09-22T19:20:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/558", "html_url": "https://github.com/ollama/ollama/pull/558", "diff_url": "https://github.com/ollama/ollama/pull/558.diff", "patch_url": "https://github.com/ollama/ollama/pull/558.patch", "merged_at": "2023-09-22T19:20:13" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/558/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8040
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8040/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8040/comments
https://api.github.com/repos/ollama/ollama/issues/8040/events
https://github.com/ollama/ollama/issues/8040
2,731,895,942
I_kwDOJ0Z1Ps6i1WyG
8,040
Add API endpoint for Ollama server version and feature information
{ "login": "anxkhn", "id": 83116240, "node_id": "MDQ6VXNlcjgzMTE2MjQw", "avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anxkhn", "html_url": "https://github.com/anxkhn", "followers_url": "https://api.github.com/users/anxkhn/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-12-11T05:53:14
2024-12-29T19:33:45
2024-12-29T19:33:45
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Description:** Ollama is rapidly evolving, with new features and capabilities being added regularly. The recent introduction of structured outputs in version 0.5.0 is a prime example of this progress. As Ollama continues to grow, it's becoming increasingly important for clients to have a reliable way to determine ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8040/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4195
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4195/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4195/comments
https://api.github.com/repos/ollama/ollama/issues/4195/events
https://github.com/ollama/ollama/issues/4195
2,280,143,854
I_kwDOJ0Z1Ps6H6Dvu
4,195
how to download and run ollama and llma 3 in docker can u give me the docker file code for that
{ "login": "sushantsk1", "id": 83342285, "node_id": "MDQ6VXNlcjgzMzQyMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/83342285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sushantsk1", "html_url": "https://github.com/sushantsk1", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-06T06:31:36
2024-05-06T23:42:30
2024-05-06T23:42:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
i want to download and run llmaa 3 using ollama on docker help me and give the code for docker file
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4195/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4195/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/345
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/345/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/345/comments
https://api.github.com/repos/ollama/ollama/issues/345/events
https://github.com/ollama/ollama/pull/345
1,850,306,315
PR_kwDOJ0Z1Ps5X6QTT
345
set non-zero error code on error
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-14T18:17:38
2023-08-16T16:20:29
2023-08-16T16:20:28
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/345", "html_url": "https://github.com/ollama/ollama/pull/345", "diff_url": "https://github.com/ollama/ollama/pull/345.diff", "patch_url": "https://github.com/ollama/ollama/pull/345.patch", "merged_at": "2023-08-16T16:20:28" }
ollama should exit non-zero when operations fail
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/345/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4924
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4924/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4924/comments
https://api.github.com/repos/ollama/ollama/issues/4924/events
https://github.com/ollama/ollama/issues/4924
2,341,385,774
I_kwDOJ0Z1Ps6LjrYu
4,924
Dictionary learning and concept extraction for model tuning
{ "login": "IgorAlexey", "id": 18470725, "node_id": "MDQ6VXNlcjE4NDcwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/18470725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IgorAlexey", "html_url": "https://github.com/IgorAlexey", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-06-08T02:27:17
2024-06-08T02:27:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Described in Anthropic's [Mapping the Mind of a Large Language Model](https://www.anthropic.com/news/mapping-mind-language-model) and OpenAI's [Extracting Concepts from GPT-4](https://openai.com/index/extracting-concepts-from-gpt-4/). Once we can identify the neurons associated with certain concepts for the publicly...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4924/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1550
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1550/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1550/comments
https://api.github.com/repos/ollama/ollama/issues/1550/events
https://github.com/ollama/ollama/issues/1550
2,044,206,570
I_kwDOJ0Z1Ps552B3q
1,550
Error: failed to start a llama runner
{ "login": "webmastermario", "id": 121729061, "node_id": "U_kgDOB0FwJQ", "avatar_url": "https://avatars.githubusercontent.com/u/121729061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/webmastermario", "html_url": "https://github.com/webmastermario", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2023-12-15T18:46:26
2024-02-01T23:17:34
2024-02-01T23:17:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, i tried to install ollama on my centos dedicated server but everything is working but when i try [root@213-227-129-200 ~]# ollama run llava Error: failed to start a llama runner i get this. what can i do? -- Logs begin at Fri 2023-08-04 06:00:01 UTC, end at Fri 2023-12-15 18:45:42 UTC. -- Dec 15 ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1550/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3729
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3729/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3729/comments
https://api.github.com/repos/ollama/ollama/issues/3729/events
https://github.com/ollama/ollama/issues/3729
2,250,055,473
I_kwDOJ0Z1Ps6GHR8x
3,729
failed at cuda 12.2 with GTX1080 Ti
{ "login": "MissingTwins", "id": 146804746, "node_id": "U_kgDOCMAQCg", "avatar_url": "https://avatars.githubusercontent.com/u/146804746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MissingTwins", "html_url": "https://github.com/MissingTwins", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-04-18T08:20:14
2024-04-18T18:24:09
2024-04-18T18:24:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This is a fresh installed ollama, but failed at first launch. cuda 12.2 ``` ben@amd:~/work/ollama$ curl -fsSL https://ollama.com/install.sh | sh >>> Downloading ollama... #######################################################################################################################...
{ "login": "MissingTwins", "id": 146804746, "node_id": "U_kgDOCMAQCg", "avatar_url": "https://avatars.githubusercontent.com/u/146804746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MissingTwins", "html_url": "https://github.com/MissingTwins", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3729/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6324
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6324/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6324/comments
https://api.github.com/repos/ollama/ollama/issues/6324/events
https://github.com/ollama/ollama/pull/6324
2,461,538,236
PR_kwDOJ0Z1Ps54Iu7h
6,324
cmd: speed up gguf creates
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[]
closed
false
null
[]
null
0
2024-08-12T17:25:42
2024-08-12T18:46:11
2024-08-12T18:46:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6324", "html_url": "https://github.com/ollama/ollama/pull/6324", "diff_url": "https://github.com/ollama/ollama/pull/6324.diff", "patch_url": "https://github.com/ollama/ollama/pull/6324.patch", "merged_at": "2024-08-12T18:46:09" }
null
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6324/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/683/comments
https://api.github.com/repos/ollama/ollama/issues/683/events
https://github.com/ollama/ollama/issues/683
1,922,909,871
I_kwDOJ0Z1Ps5ynUav
683
Uninstall
{ "login": "fakerybakery", "id": 76186054, "node_id": "MDQ6VXNlcjc2MTg2MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fakerybakery", "html_url": "https://github.com/fakerybakery", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
2
2023-10-02T22:47:10
2023-10-02T22:56:12
2023-10-02T22:56:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can I uninstall this program?
{ "login": "fakerybakery", "id": 76186054, "node_id": "MDQ6VXNlcjc2MTg2MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fakerybakery", "html_url": "https://github.com/fakerybakery", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/683/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3083
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3083/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3083/comments
https://api.github.com/repos/ollama/ollama/issues/3083/events
https://github.com/ollama/ollama/pull/3083
2,182,546,528
PR_kwDOJ0Z1Ps5pbZni
3,083
refactor readseeker
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-03-12T19:44:29
2024-03-16T19:08:57
2024-03-16T19:08:56
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3083", "html_url": "https://github.com/ollama/ollama/pull/3083", "diff_url": "https://github.com/ollama/ollama/pull/3083.diff", "patch_url": "https://github.com/ollama/ollama/pull/3083.patch", "merged_at": "2024-03-16T19:08:56" }
no functional change
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3083/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3073
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3073/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3073/comments
https://api.github.com/repos/ollama/ollama/issues/3073/events
https://github.com/ollama/ollama/pull/3073
2,181,001,953
PR_kwDOJ0Z1Ps5pWCJ9
3,073
chore: fix typo
{ "login": "racerole", "id": 148756161, "node_id": "U_kgDOCN3WwQ", "avatar_url": "https://avatars.githubusercontent.com/u/148756161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/racerole", "html_url": "https://github.com/racerole", "followers_url": "https://api.github.com/users/racerole/...
[]
closed
false
null
[]
null
0
2024-03-12T08:22:16
2024-03-12T18:09:23
2024-03-12T18:09:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3073", "html_url": "https://github.com/ollama/ollama/pull/3073", "diff_url": "https://github.com/ollama/ollama/pull/3073.diff", "patch_url": "https://github.com/ollama/ollama/pull/3073.patch", "merged_at": "2024-03-12T18:09:23" }
null
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3073/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2955
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2955/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2955/comments
https://api.github.com/repos/ollama/ollama/issues/2955/events
https://github.com/ollama/ollama/issues/2955
2,171,833,240
I_kwDOJ0Z1Ps6Bc4uY
2,955
Is there guidance to run Ollama as a background "Daemon" on MacOS pre-login?
{ "login": "dukekautington3rd", "id": 33333503, "node_id": "MDQ6VXNlcjMzMzMzNTAz", "avatar_url": "https://avatars.githubusercontent.com/u/33333503?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dukekautington3rd", "html_url": "https://github.com/dukekautington3rd", "followers_url": "https...
[]
closed
false
null
[]
null
10
2024-03-06T15:49:17
2024-12-04T05:02:40
2024-03-06T23:08:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I would really like Ollama to run as a service on my Mac or at least set the appropriate listening variable before it starts. Today I have to `launchctl setenv OLLAMA_HOST 0.0.0.0:8080` and restart Ollama any time there is a reboot. And I must be logged in in-order for Ollama to be serving up the LLM. I've t...
{ "login": "dukekautington3rd", "id": 33333503, "node_id": "MDQ6VXNlcjMzMzMzNTAz", "avatar_url": "https://avatars.githubusercontent.com/u/33333503?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dukekautington3rd", "html_url": "https://github.com/dukekautington3rd", "followers_url": "https...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2955/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8294
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8294/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8294/comments
https://api.github.com/repos/ollama/ollama/issues/8294/events
https://github.com/ollama/ollama/issues/8294
2,767,643,050
I_kwDOJ0Z1Ps6k9uGq
8,294
Ollama should avoid calling hallucinated tools
{ "login": "ehsavoie", "id": 73053, "node_id": "MDQ6VXNlcjczMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/73053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ehsavoie", "html_url": "https://github.com/ehsavoie", "followers_url": "https://api.github.com/users/ehsavoie/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
9
2025-01-03T14:13:36
2025-01-08T17:51:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Sometimes the model seems to hallucinate and call a tool on the client that doesn't exist. In my opinion since Ollama has the list of tools being callable it should check that the tool being called is in this list before calling it. This is described also there: https://github.com/langchain4j/...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8294/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8294/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3490
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3490/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3490/comments
https://api.github.com/repos/ollama/ollama/issues/3490/events
https://github.com/ollama/ollama/pull/3490
2,225,662,964
PR_kwDOJ0Z1Ps5rt1q7
3,490
CI missing archive
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-04-04T14:23:46
2024-04-04T14:24:27
2024-04-04T14:24:24
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3490", "html_url": "https://github.com/ollama/ollama/pull/3490", "diff_url": "https://github.com/ollama/ollama/pull/3490.diff", "patch_url": "https://github.com/ollama/ollama/pull/3490.patch", "merged_at": "2024-04-04T14:24:24" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3490/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5924
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5924/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5924/comments
https://api.github.com/repos/ollama/ollama/issues/5924/events
https://github.com/ollama/ollama/pull/5924
2,428,343,864
PR_kwDOJ0Z1Ps52YdiK
5,924
llm(llama): pass rope factors
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-07-24T19:42:31
2024-07-24T20:06:00
2024-07-24T20:05:59
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5924", "html_url": "https://github.com/ollama/ollama/pull/5924", "diff_url": "https://github.com/ollama/ollama/pull/5924.diff", "patch_url": "https://github.com/ollama/ollama/pull/5924.patch", "merged_at": "2024-07-24T20:05:59" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5924/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7635
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7635/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7635/comments
https://api.github.com/repos/ollama/ollama/issues/7635/events
https://github.com/ollama/ollama/pull/7635
2,653,082,010
PR_kwDOJ0Z1Ps6BrJO5
7,635
CI: give windows lint more time
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-11-12T19:12:05
2024-11-12T19:22:42
2024-11-12T19:22:39
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7635", "html_url": "https://github.com/ollama/ollama/pull/7635", "diff_url": "https://github.com/ollama/ollama/pull/7635.diff", "patch_url": "https://github.com/ollama/ollama/pull/7635.patch", "merged_at": "2024-11-12T19:22:39" }
It looks like 8 minutes isn't quite enough and we're seeing sporadic timeouts
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7635/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3419
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3419/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3419/comments
https://api.github.com/repos/ollama/ollama/issues/3419/events
https://github.com/ollama/ollama/issues/3419
2,216,639,921
I_kwDOJ0Z1Ps6EHz2x
3,419
Ollama local discovery
{ "login": "rakyll", "id": 108380, "node_id": "MDQ6VXNlcjEwODM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/108380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rakyll", "html_url": "https://github.com/rakyll", "followers_url": "https://api.github.com/users/rakyll/follow...
[]
closed
false
null
[]
null
1
2024-03-30T19:38:48
2024-05-15T00:43:41
2024-05-15T00:43:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? It's a common use case for LLM tool builders to wish they can rely on a local model rather than relying on a hosted one to save costs. Currently, there is no official way to discover whether a local ollama server is running. ### How should we solve this? Provide a mechanism so it becom...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3419/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3230
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3230/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3230/comments
https://api.github.com/repos/ollama/ollama/issues/3230/events
https://github.com/ollama/ollama/issues/3230
2,193,623,988
I_kwDOJ0Z1Ps6CwAu0
3,230
GPU does not run with Ollama
{ "login": "DerLehrer", "id": 90964131, "node_id": "MDQ6VXNlcjkwOTY0MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/90964131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DerLehrer", "html_url": "https://github.com/DerLehrer", "followers_url": "https://api.github.com/users/...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": ...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-03-18T23:03:34
2024-04-15T22:47:17
2024-04-15T22:47:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi everyone, I am running a Windows 10 computer with GTX950 and Intel(R) Core(TM) i5-3475S, 32 GB RAM, I downloaded the new Windows-version of Ollama and the llama2-uncensored and also the tinyllama LLM. Good: Everything works. Bad: Ollama only makes use of the CPU and ignores the GPU. As far as I can tell...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3230/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4895
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4895/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4895/comments
https://api.github.com/repos/ollama/ollama/issues/4895/events
https://github.com/ollama/ollama/issues/4895
2,339,544,174
I_kwDOJ0Z1Ps6Lcpxu
4,895
Add "use_mmap" to environment variable
{ "login": "sisi399", "id": 50093165, "node_id": "MDQ6VXNlcjUwMDkzMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/50093165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sisi399", "html_url": "https://github.com/sisi399", "followers_url": "https://api.github.com/users/sisi39...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
2
2024-06-07T04:05:44
2024-10-26T06:30:54
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I recently discovered the potential benefits of the --no-mmap option, particularly for specific system configurations, such as PCs or laptops equipped with only 8GB of system RAM and a GPU with VRAM of 6GB or more, capable of loading entire models onto it. Loading models with mmap can render the use of 8B models nea...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4895/reactions", "total_count": 12, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 12, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4895/timeline
null
null
false