Schema (33 columns, with types and observed value ranges):

url: string (lengths 51 to 54)
repository_url: string (1 distinct value)
labels_url: string (lengths 65 to 68)
comments_url: string (lengths 60 to 63)
events_url: string (lengths 58 to 61)
html_url: string (lengths 39 to 44)
id: int64 (1.78B to 2.82B)
node_id: string (lengths 18 to 19)
number: int64 (1 to 8.69k)
title: string (lengths 1 to 382)
user: dict
labels: list (lengths 0 to 5)
state: string (2 distinct values)
locked: bool (1 class)
assignee: dict
assignees: list (lengths 0 to 2)
milestone: null
comments: int64 (0 to 323)
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string (4 distinct values)
sub_issues_summary: dict
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: string (lengths 2 to 118k)
closed_by: dict
reactions: dict
timeline_url: string (lengths 60 to 63)
performed_via_github_app: null
state_reason: string (4 distinct values)
is_pull_request: bool (2 classes)
url: https://api.github.com/repos/ollama/ollama/issues/8250
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8250/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8250/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8250/events
html_url: https://github.com/ollama/ollama/issues/8250
id: 2,759,878,106
node_id: I_kwDOJ0Z1Ps6kgGXa
number: 8,250
title: qwen qvq model
user: { "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/foll...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-12-26T15:15:15
updated_at: 2025-01-16T08:57:20
closed_at: 2024-12-29T19:16:56
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Extremely good with reasoning and maths mainly better than openai o1 https://huggingface.co/Qwen/QVQ-72B-Preview
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8250/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8250/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/5493
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5493/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5493/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5493/events
html_url: https://github.com/ollama/ollama/issues/5493
id: 2,391,664,217
node_id: I_kwDOJ0Z1Ps6OjeZZ
number: 5,493
title: unable to load nvcuda
user: { "login": "yake-cyber", "id": 174697336, "node_id": "U_kgDOCmmreA", "avatar_url": "https://avatars.githubusercontent.com/u/174697336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yake-cyber", "html_url": "https://github.com/yake-cyber", "followers_url": "https://api.github.com/users/yak...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 7
created_at: 2024-07-05T02:38:20
updated_at: 2024-08-13T03:39:30
closed_at: 2024-07-11T08:34:16
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? my ollama does not run on the NVIDIA gpu and i use the debug mode and find this message "time=2024-07-04T17:13:20.134+08:00 level=DEBUG source=gpu.go:385 msg="Unable to load nvcuda" library=/usr/lib/libcuda.so.418.74 error="Unable to load /usr/lib/libcuda.so.418.74 library to query for Nvidia ...
closed_by: { "login": "yake-cyber", "id": 174697336, "node_id": "U_kgDOCmmreA", "avatar_url": "https://avatars.githubusercontent.com/u/174697336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yake-cyber", "html_url": "https://github.com/yake-cyber", "followers_url": "https://api.github.com/users/yak...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5493/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6252
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6252/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6252/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6252/events
html_url: https://github.com/ollama/ollama/issues/6252
id: 2,454,796,873
node_id: I_kwDOJ0Z1Ps6SUTpJ
number: 6,252
title: cross compiling issue
user: { "login": "andyyumiao", "id": 11346379, "node_id": "MDQ6VXNlcjExMzQ2Mzc5", "avatar_url": "https://avatars.githubusercontent.com/u/11346379?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyyumiao", "html_url": "https://github.com/andyyumiao", "followers_url": "https://api.github.com/use...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-08-08T03:31:23
updated_at: 2024-08-10T00:04:39
closed_at: 2024-08-10T00:04:39
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? **When cross platform cross compiling, for example, when I compile a Linux version of a program on a Mac, the following error is reported:** `gpu/amd_linux.go:200:19: undefined: RocmComputeMin gpu/amd_linux.go:273:20: undefined: IGPUMemLimit gpu/amd_linux.go:295:20: undefined: rocmMinim...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6252/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6014
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6014/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6014/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6014/events
html_url: https://github.com/ollama/ollama/pull/6014
id: 2,433,401,119
node_id: PR_kwDOJ0Z1Ps52pC1G
number: 6,014
title: server: add OLLAMA_RUNNERS_DIR to help description
user: { "login": "jing-rui", "id": 51155955, "node_id": "MDQ6VXNlcjUxMTU1OTU1", "avatar_url": "https://avatars.githubusercontent.com/u/51155955?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jing-rui", "html_url": "https://github.com/jing-rui", "followers_url": "https://api.github.com/users/jin...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-27T09:55:05
updated_at: 2024-11-23T19:31:15
closed_at: 2024-11-23T19:31:15
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6014", "html_url": "https://github.com/ollama/ollama/pull/6014", "diff_url": "https://github.com/ollama/ollama/pull/6014.diff", "patch_url": "https://github.com/ollama/ollama/pull/6014.patch", "merged_at": null }
body: The env OLLAMA_RUNNERS_DIR is useful to avoid extract embedded files every time at `ollama serve` start.
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6014/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/7474
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7474/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7474/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7474/events
html_url: https://github.com/ollama/ollama/pull/7474
id: 2,630,850,182
node_id: PR_kwDOJ0Z1Ps6AtjcF
number: 7,474
title: Fix: return direct URL when OCI registry is not redirecting
user: { "login": "peterwilli", "id": 1212814, "node_id": "MDQ6VXNlcjEyMTI4MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1212814?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peterwilli", "html_url": "https://github.com/peterwilli", "followers_url": "https://api.github.com/users...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-11-02T23:00:57
updated_at: 2025-01-09T07:46:52
closed_at: 2024-11-21T11:04:10
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7474", "html_url": "https://github.com/ollama/ollama/pull/7474", "diff_url": "https://github.com/ollama/ollama/pull/7474.diff", "patch_url": "https://github.com/ollama/ollama/pull/7474.patch", "merged_at": null }
body: When debugging why I couldn't use a regular OCI registry (See screenshot below), I found out the ollama server was doing a redirect to AWS. This change returns the direct URL when such redirect does not happen, while assuring regular behavior when it does, so that regular pulls still work! <img width="1710" alt="SCR...
closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7474/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/3712
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3712/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3712/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3712/events
html_url: https://github.com/ollama/ollama/pull/3712
id: 2,249,256,817
node_id: PR_kwDOJ0Z1Ps5s-ZFw
number: 3,712
title: add stablelm graph calculation
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-04-17T20:57:31
updated_at: 2024-04-17T22:57:51
closed_at: 2024-04-17T22:57:51
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3712", "html_url": "https://github.com/ollama/ollama/pull/3712", "diff_url": "https://github.com/ollama/ollama/pull/3712.diff", "patch_url": "https://github.com/ollama/ollama/pull/3712.patch", "merged_at": "2024-04-17T22:57:51" }
body: null
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3712/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/3102
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3102/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3102/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3102/events
html_url: https://github.com/ollama/ollama/issues/3102
id: 2,183,879,753
node_id: I_kwDOJ0Z1Ps6CK1xJ
number: 3,102
title: Response_format not supported
user: { "login": "halcwb", "id": 683631, "node_id": "MDQ6VXNlcjY4MzYzMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/683631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/halcwb", "html_url": "https://github.com/halcwb", "followers_url": "https://api.github.com/users/halcwb/follow...
labels: [ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-03-13T12:25:32
updated_at: 2024-03-13T15:01:10
closed_at: 2024-03-13T15:01:10
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: When sending this request to the open AI endpoint I don't get the requested JSON. The payload is the actual content send to the api. Exactly the same content works with for example the fireworks api. ℹ INFO: EndPoint: http://localhost:11434/api/chat Payload: {"format":"json","messages":[{"content":"What is the m...
closed_by: { "login": "halcwb", "id": 683631, "node_id": "MDQ6VXNlcjY4MzYzMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/683631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/halcwb", "html_url": "https://github.com/halcwb", "followers_url": "https://api.github.com/users/halcwb/follow...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3102/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/1253
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1253/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1253/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1253/events
html_url: https://github.com/ollama/ollama/issues/1253
id: 2,007,582,639
node_id: I_kwDOJ0Z1Ps53qUev
number: 1,253
title: Error when downloading and running any dataset of any size.
user: { "login": "ll3N1GmAll", "id": 10640635, "node_id": "MDQ6VXNlcjEwNjQwNjM1", "avatar_url": "https://avatars.githubusercontent.com/u/10640635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ll3N1GmAll", "html_url": "https://github.com/ll3N1GmAll", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2023-11-23T06:51:02
updated_at: 2023-11-24T01:11:28
closed_at: 2023-11-23T17:27:07
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: This is the error I get after d/l a dataset and when trying to run a dataset - "Error: llama runner process has terminated" It pulls them down, verifies the hash, then says "success", the very next line is the error above. I am running Xubuntu 22.04, 16GB RAM, Intel Pentium CPU G4560 @ 3.50GHz, 8x Nvidia 1080Ti GP...
closed_by: { "login": "ll3N1GmAll", "id": 10640635, "node_id": "MDQ6VXNlcjEwNjQwNjM1", "avatar_url": "https://avatars.githubusercontent.com/u/10640635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ll3N1GmAll", "html_url": "https://github.com/ll3N1GmAll", "followers_url": "https://api.github.com/use...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1253/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8688
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8688/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8688/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8688/events
html_url: https://github.com/ollama/ollama/pull/8688
id: 2,820,160,395
node_id: PR_kwDOJ0Z1Ps6JduvO
number: 8,688
title: Add library in Zig.
user: { "login": "dravenk", "id": 14295318, "node_id": "MDQ6VXNlcjE0Mjk1MzE4", "avatar_url": "https://avatars.githubusercontent.com/u/14295318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dravenk", "html_url": "https://github.com/dravenk", "followers_url": "https://api.github.com/users/draven...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2025-01-30T08:05:43
updated_at: 2025-01-30T08:05:43
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8688", "html_url": "https://github.com/ollama/ollama/pull/8688", "diff_url": "https://github.com/ollama/ollama/pull/8688.diff", "patch_url": "https://github.com/ollama/ollama/pull/8688.patch", "merged_at": null }
body: null
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8688/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/6440
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6440/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6440/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6440/events
html_url: https://github.com/ollama/ollama/issues/6440
id: 2,475,276,834
node_id: I_kwDOJ0Z1Ps6Tiboi
number: 6,440
title: Model architecture Gemma2ForCausalLm
user: { "login": "luisgg98", "id": 45603226, "node_id": "MDQ6VXNlcjQ1NjAzMjI2", "avatar_url": "https://avatars.githubusercontent.com/u/45603226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luisgg98", "html_url": "https://github.com/luisgg98", "followers_url": "https://api.github.com/users/lui...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-08-20T10:22:23
updated_at: 2024-08-21T20:15:34
closed_at: 2024-08-21T20:15:34
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Good afternoon, I would like to start marking I am not 100% sure whether this is an issue or maybe I am misunderstanding the concept of architecture. I tried to create a model on ollama by using a Modelfile at version 0.3.0. ![imagen](https://github.com/user-attachments/assets/dfdd4b08-5eb8-...
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6440/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8125
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8125/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8125/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8125/events
html_url: https://github.com/ollama/ollama/pull/8125
id: 2,743,674,309
node_id: PR_kwDOJ0Z1Ps6FbSk2
number: 8,125
title: darwin: restore multiple runners for x86
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-12-17T00:30:34
updated_at: 2024-12-17T02:45:02
closed_at: 2024-12-17T02:45:02
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8125", "html_url": "https://github.com/ollama/ollama/pull/8125", "diff_url": "https://github.com/ollama/ollama/pull/8125.diff", "patch_url": "https://github.com/ollama/ollama/pull/8125.patch", "merged_at": "2024-12-17T02:45:02" }
body: In 0.5.2 we simplified packaging to have avx only for macos x86. It looks like there may still be some non-AVX systems out there, so this puts back the prior logic of building no-AVX for the primary binary, and now 2 runners for avx and avx2. These will be packaged in the App bundle only, so the stand-alone binary wil...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8125/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/1214
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1214/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1214/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1214/events
html_url: https://github.com/ollama/ollama/issues/1214
id: 2,003,109,082
node_id: I_kwDOJ0Z1Ps53ZQTa
number: 1,214
title: Cache models for system restarts to not download again in docker
user: { "login": "peteh", "id": 918728, "node_id": "MDQ6VXNlcjkxODcyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/918728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peteh", "html_url": "https://github.com/peteh", "followers_url": "https://api.github.com/users/peteh/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2023-11-20T22:17:03
updated_at: 2023-11-21T00:07:35
closed_at: 2023-11-21T00:07:35
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I wrote a docker compose file and thought I mapped the right cache folder. However after a system restart, the model is downloaded again. My goal is to map the model cache dir to my local disk so when using the same model after a restart, it is not redownloaded again. The .ollama folder contains a lot of sha2...
closed_by: { "login": "peteh", "id": 918728, "node_id": "MDQ6VXNlcjkxODcyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/918728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peteh", "html_url": "https://github.com/peteh", "followers_url": "https://api.github.com/users/peteh/followers"...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1214/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/4666
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4666/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4666/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4666/events
html_url: https://github.com/ollama/ollama/issues/4666
id: 2,319,302,049
node_id: I_kwDOJ0Z1Ps6KPb2h
number: 4,666
title: ollama doesn't create a model from modelfile and gives an error
user: { "login": "tMrMorgan", "id": 170948386, "node_id": "U_kgDOCjB3Ig", "avatar_url": "https://avatars.githubusercontent.com/u/170948386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tMrMorgan", "html_url": "https://github.com/tMrMorgan", "followers_url": "https://api.github.com/users/tMrMor...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 5
created_at: 2024-05-27T14:32:53
updated_at: 2024-10-26T07:05:55
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Sorry in advance for any mistakes in text when I trying to create a model in terminal, no matter what it based on, and even if the "modelfile" is a stock template of downloaded llm, after command "ollama create test" i got same output everytime " Error: command must be one of "from", "license"...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4666/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8677
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8677/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8677/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8677/events
html_url: https://github.com/ollama/ollama/issues/8677
id: 2,819,603,374
node_id: I_kwDOJ0Z1Ps6oD7uu
number: 8,677
title: Wrote scripts to import gguf files/folder
user: { "login": "gl2007", "id": 4097227, "node_id": "MDQ6VXNlcjQwOTcyMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/4097227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gl2007", "html_url": "https://github.com/gl2007", "followers_url": "https://api.github.com/users/gl2007/foll...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2025-01-30T00:09:02
updated_at: 2025-01-30T00:09:02
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Don't see a "discussion" tab like I see for other repos, so just creating an issue. Had a bunch of gguf's in a folder, so wrote 2 scripts (windows and shell) to import a single gguf and all ggufs in a given folder. Don't know how to get a PR in but I can attach them here is any of you think they are useful.
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8677/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8386
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8386/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8386/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8386/events
html_url: https://github.com/ollama/ollama/issues/8386
id: 2,782,119,430
node_id: I_kwDOJ0Z1Ps6l08YG
number: 8,386
title: Return in a response a flag if the input request was truncated
user: { "login": "MarkWard0110", "id": 90335263, "node_id": "MDQ6VXNlcjkwMzM1MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkWard0110", "html_url": "https://github.com/MarkWard0110", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2025-01-11T19:53:52
updated_at: 2025-01-15T23:59:07
closed_at: 2025-01-15T23:59:07
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Add to the response a flag that indicates true if Ollama truncated the input request. As a developer, I would like the Ollama response to have a flag indicating that it truncated the input prompt so that I can initiate client-side behavior based on this information. In some situations, the accuracy of the chat re...
closed_by: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8386/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8386/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/2698
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2698/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2698/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2698/events
html_url: https://github.com/ollama/ollama/issues/2698
id: 2,150,287,605
node_id: I_kwDOJ0Z1Ps6AKsj1
number: 2,698
title: Piping to `stdin` does not work in windows
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 1
created_at: 2024-02-23T03:04:37
updated_at: 2024-03-14T18:55:20
closed_at: 2024-03-14T18:55:20
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Minor issue, but piping to stdin doesn't work on windows with git bash ``` $ cat README.md | ollama run gemma "What is in this document?" failed to get console mode for stdin: The handle is invalid. ```
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2698/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/2894
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2894/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2894/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2894/events
html_url: https://github.com/ollama/ollama/issues/2894
id: 2,165,388,050
node_id: I_kwDOJ0Z1Ps6BETMS
number: 2,894
title: How to get Ollama to use my RTX 4090 on windows 11
user: { "login": "TimmekHW", "id": 94626112, "node_id": "U_kgDOBaPhQA", "avatar_url": "https://avatars.githubusercontent.com/u/94626112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimmekHW", "html_url": "https://github.com/TimmekHW", "followers_url": "https://api.github.com/users/TimmekHW/fo...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-03-03T14:36:27
updated_at: 2024-03-03T19:19:12
closed_at: 2024-03-03T19:19:12
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I have 12600K + 64GB RAM + RTX 4090. I use Ollama + OpenCHat. For some reason Ollama won't use my RTX 4090. How can I show the program my graphics card? ![image](https://github.com/ollama/ollama/assets/94626112/7fe5afe3-1fbb-46f1-a9e4-a1f8a58a6d05) ``` messages = chat_histories[chat_id] options = { ...
closed_by: { "login": "TimmekHW", "id": 94626112, "node_id": "U_kgDOBaPhQA", "avatar_url": "https://avatars.githubusercontent.com/u/94626112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimmekHW", "html_url": "https://github.com/TimmekHW/fo...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2894/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
https://api.github.com/repos/ollama/ollama/issues/6394
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6394/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6394/comments
https://api.github.com/repos/ollama/ollama/issues/6394/events
https://github.com/ollama/ollama/issues/6394
2,470,933,671
I_kwDOJ0Z1Ps6TR3Sn
6,394
mistral-nemo:12b-instruct-2407-fp16 will return empty string using json mode while mistral-nemo:12b will return code
{ "login": "franz101", "id": 18228395, "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franz101", "html_url": "https://github.com/franz101", "followers_url": "https://api.github.com/users/fra...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
10
2024-08-16T20:22:03
2024-08-17T22:00:18
2024-08-17T21:59:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
currently using openai api support `mistral-nemo:12b-instruct-2407-fp16` returns an empty string
{ "login": "franz101", "id": 18228395, "node_id": "MDQ6VXNlcjE4MjI4Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/franz101", "html_url": "https://github.com/franz101", "followers_url": "https://api.github.com/users/fra...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6394/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7621
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7621/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7621/comments
https://api.github.com/repos/ollama/ollama/issues/7621/events
https://github.com/ollama/ollama/issues/7621
2,649,575,190
I_kwDOJ0Z1Ps6d7U8W
7,621
ollama run connect to server failed use 780M iGPU after update rocm-core from 6.0.2 to 6.2.2 on arch linux.
{ "login": "zw963", "id": 549126, "node_id": "MDQ6VXNlcjU0OTEyNg==", "avatar_url": "https://avatars.githubusercontent.com/u/549126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zw963", "html_url": "https://github.com/zw963", "followers_url": "https://api.github.com/users/zw963/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-11T14:51:56
2024-11-11T15:14:30
2024-11-11T15:14:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I use Arch linux, i update my package to latest today. Following is my upgraded version: ``` [2024-11-11T17:25:48+0800] [ALPM] upgraded rocm-opencl-sdk (6.0.2-1 -> 6.2.2-1) [2024-11-11T17:25:48+0800] [ALPM] upgraded python-pytorch-rocm (2.3.1-8 -> 2.5.1-3) [2024-11-11T17:25:45+0800] [...
{ "login": "zw963", "id": 549126, "node_id": "MDQ6VXNlcjU0OTEyNg==", "avatar_url": "https://avatars.githubusercontent.com/u/549126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zw963", "html_url": "https://github.com/zw963", "followers_url": "https://api.github.com/users/zw963/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7621/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5214
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5214/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5214/comments
https://api.github.com/repos/ollama/ollama/issues/5214/events
https://github.com/ollama/ollama/pull/5214
2,367,868,136
PR_kwDOJ0Z1Ps5zQH94
5,214
Update README.md
{ "login": "rapidarchitect", "id": 126218667, "node_id": "U_kgDOB4Xxqw", "avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rapidarchitect", "html_url": "https://github.com/rapidarchitect", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
0
2024-06-22T15:08:49
2024-07-01T02:00:58
2024-07-01T02:00:58
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5214", "html_url": "https://github.com/ollama/ollama/pull/5214", "diff_url": "https://github.com/ollama/ollama/pull/5214.diff", "patch_url": "https://github.com/ollama/ollama/pull/5214.patch", "merged_at": "2024-07-01T02:00:58" }
Added Mesop example to web & desktop
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5214/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1780
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1780/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1780/comments
https://api.github.com/repos/ollama/ollama/issues/1780/events
https://github.com/ollama/ollama/pull/1780
2,064,804,933
PR_kwDOJ0Z1Ps5jLeS2
1,780
update cmake flags for `amd64` macOS
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-04T00:06:12
2024-01-04T00:22:16
2024-01-04T00:22:15
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1780", "html_url": "https://github.com/ollama/ollama/pull/1780", "diff_url": "https://github.com/ollama/ollama/pull/1780.diff", "patch_url": "https://github.com/ollama/ollama/pull/1780.patch", "merged_at": "2024-01-04T00:22:15" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1780/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4832
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4832/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4832/comments
https://api.github.com/repos/ollama/ollama/issues/4832/events
https://github.com/ollama/ollama/issues/4832
2,335,432,838
I_kwDOJ0Z1Ps6LM-CG
4,832
llama3:7b cache size set
{ "login": "ciscoivan", "id": 55469637, "node_id": "MDQ6VXNlcjU1NDY5NjM3", "avatar_url": "https://avatars.githubusercontent.com/u/55469637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ciscoivan", "html_url": "https://github.com/ciscoivan", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-06-05T09:53:43
2024-06-09T17:35:53
2024-06-09T17:35:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![1](https://github.com/ollama/ollama/assets/55469637/440bb5f3-d605-4962-a895-3205d7c9d621) I installed two NVIDIA RTX 2080 TI graphics cards in an experimental deployment and successfully ran the llama3:7b model. I want to know how to adjust the cache size. thanks
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4832/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8614
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8614/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8614/comments
https://api.github.com/repos/ollama/ollama/issues/8614/events
https://github.com/ollama/ollama/issues/8614
2,813,943,892
I_kwDOJ0Z1Ps6nuWBU
8,614
Problems with deepseek-r1:671b, ollama keeps crashing on long answers
{ "login": "fabiounixpi", "id": 48057600, "node_id": "MDQ6VXNlcjQ4MDU3NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/48057600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabiounixpi", "html_url": "https://github.com/fabiounixpi", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
11
2025-01-27T20:04:40
2025-01-30T13:07:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi all, I'm using an r960 with 2TB of ram, so ram is not a problem here. I'm experiencing constant crashes of ollama 0.5.7 and deepseek-r1:671b, even increasing the context window with the command /set parameter num_ctx 4096. I also tried a second system, an r670 csp with 1TB of ram, but the...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8614/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8614/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5309
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5309/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5309/comments
https://api.github.com/repos/ollama/ollama/issues/5309/events
https://github.com/ollama/ollama/pull/5309
2,376,190,397
PR_kwDOJ0Z1Ps5zrkYW
5,309
Update OpenAI Compatibility Docs with /v1/models/{model}
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-06-26T20:17:09
2024-08-01T23:00:44
2024-08-01T22:58:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5309", "html_url": "https://github.com/ollama/ollama/pull/5309", "diff_url": "https://github.com/ollama/ollama/pull/5309.diff", "patch_url": "https://github.com/ollama/ollama/pull/5309.patch", "merged_at": "2024-08-01T22:58:13" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5309/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/72
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/72/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/72/comments
https://api.github.com/repos/ollama/ollama/issues/72/events
https://github.com/ollama/ollama/issues/72
1,800,080,847
I_kwDOJ0Z1Ps5rSw3P
72
`ollama run` doesn't continue after one reponse
{ "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyev...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
2
2023-07-12T03:06:00
2023-07-17T16:43:08
2023-07-17T16:43:08
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
here are how you reproduce ```$ ollama run orca hello Hello! How can I assist you today?Error: stream: EOF $ logs ollama run orca "why is the sky blue" The sky appears blue because of a process called scattering. When sunlight enters the Earth's atmosphere, it collides with gas molecules such as oxyge...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/72/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/72/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1679
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1679/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1679/comments
https://api.github.com/repos/ollama/ollama/issues/1679/events
https://github.com/ollama/ollama/pull/1679
2,054,480,828
PR_kwDOJ0Z1Ps5irqMT
1,679
build cuda and rocm
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-12-22T20:22:28
2024-01-26T00:38:15
2024-01-26T00:38:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1679", "html_url": "https://github.com/ollama/ollama/pull/1679", "diff_url": "https://github.com/ollama/ollama/pull/1679.diff", "patch_url": "https://github.com/ollama/ollama/pull/1679.patch", "merged_at": "2024-01-26T00:38:14" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1679/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2559
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2559/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2559/comments
https://api.github.com/repos/ollama/ollama/issues/2559/events
https://github.com/ollama/ollama/issues/2559
2,139,912,869
I_kwDOJ0Z1Ps5_jHql
2,559
Feature - Support Custom Actions
{ "login": "joeldhenry", "id": 12555860, "node_id": "MDQ6VXNlcjEyNTU1ODYw", "avatar_url": "https://avatars.githubusercontent.com/u/12555860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeldhenry", "html_url": "https://github.com/joeldhenry", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
1
2024-02-17T09:19:23
2024-11-06T18:56:34
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Support for custom actions to cal custom API/Code as part of llama response. possibly part of modelfiles via python scripts? chatGPT has similar with integrations with Zapier
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2559/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5238
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5238/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5238/comments
https://api.github.com/repos/ollama/ollama/issues/5238/events
https://github.com/ollama/ollama/issues/5238
2,368,588,675
I_kwDOJ0Z1Ps6NLcuD
5,238
How to update Ollama to the latest version?
{ "login": "qzc438", "id": 61488260, "node_id": "MDQ6VXNlcjYxNDg4MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qzc438", "html_url": "https://github.com/qzc438", "followers_url": "https://api.github.com/users/qzc438/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-06-23T14:24:45
2024-06-24T11:19:25
2024-06-24T11:19:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? As the title described. How to update Ollama to the latest version? ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "qzc438", "id": 61488260, "node_id": "MDQ6VXNlcjYxNDg4MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qzc438", "html_url": "https://github.com/qzc438", "followers_url": "https://api.github.com/users/qzc438/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5238/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5242
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5242/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5242/comments
https://api.github.com/repos/ollama/ollama/issues/5242/events
https://github.com/ollama/ollama/issues/5242
2,368,823,955
I_kwDOJ0Z1Ps6NMWKT
5,242
Slow performance on `/api/show`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-23T18:36:53
2024-07-24T19:10:57
2024-07-24T19:10:57
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Because we now show more model details, `/api/show` has gotten slower. The part that's slow specifically is reading the arrays (vocab, tensors, etc) ``` case ggufTypeArray: v, err = readGGUFArray(llm, rs) ``` ### OS _No response_ ### GPU _No response_ ### CPU _No r...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5242/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3512
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3512/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3512/comments
https://api.github.com/repos/ollama/ollama/issues/3512/events
https://github.com/ollama/ollama/issues/3512
2,229,136,933
I_kwDOJ0Z1Ps6E3e4l
3,512
Experimental LLM Library Override does not appear to work on Windows
{ "login": "lrq3000", "id": 1118942, "node_id": "MDQ6VXNlcjExMTg5NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/1118942?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lrq3000", "html_url": "https://github.com/lrq3000", "followers_url": "https://api.github.com/users/lrq3000/...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-04-06T08:14:22
2024-04-23T19:40:16
2024-04-23T02:06:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried the [Experimental LLM Library Override](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#llm-libraries) on Windows via two means: * Temporary environment variable definition: `SET OLLAMA_LLM_LIBRARY="cpu_avx2" & ollama run deepseek-coder` * Permanent environment var...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3512/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4497
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4497/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4497/comments
https://api.github.com/repos/ollama/ollama/issues/4497/events
https://github.com/ollama/ollama/issues/4497
2,302,421,423
I_kwDOJ0Z1Ps6JPCmv
4,497
Ollama 0.1.38 has high video memory usage and runs very slowly.
{ "login": "chenwei0930", "id": 17743683, "node_id": "MDQ6VXNlcjE3NzQzNjgz", "avatar_url": "https://avatars.githubusercontent.com/u/17743683?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chenwei0930", "html_url": "https://github.com/chenwei0930", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-05-17T10:54:55
2024-06-22T07:07:40
2024-06-21T23:34:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am using Windows 10 with an NVIDIA 2080Ti graphics card that has 22GB of video memory. I upgraded from version 0.1.32 to 0.1.38 with the goal of supporting loading multiple models and handling multiple concurrent requests. However, I noticed that under version 0.1.38, the video memory usage is...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4497/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4497/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8190
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8190/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8190/comments
https://api.github.com/repos/ollama/ollama/issues/8190/events
https://github.com/ollama/ollama/pull/8190
2,753,561,133
PR_kwDOJ0Z1Ps6F9VTg
8,190
macos: detect potential version skew
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
0
2024-12-20T22:17:24
2024-12-23T15:35:57
null
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8190", "html_url": "https://github.com/ollama/ollama/pull/8190", "diff_url": "https://github.com/ollama/ollama/pull/8190.diff", "patch_url": "https://github.com/ollama/ollama/pull/8190.patch", "merged_at": null }
During upgrade, we could get into a scenario where an old serve tries to start newer runner executables, which could have new expectations. This added check will handle non-zero exit status from the runner and double check the current process has the same version as the executable on disk. If the version has skewed, ex...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8190/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6134
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6134/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6134/comments
https://api.github.com/repos/ollama/ollama/issues/6134/events
https://github.com/ollama/ollama/issues/6134
2,443,859,187
I_kwDOJ0Z1Ps6RqlTz
6,134
can't change ollama server address :127.0.0.1:11434 after binary ollama install
{ "login": "cnopens", "id": 3257702, "node_id": "MDQ6VXNlcjMyNTc3MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/3257702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnopens", "html_url": "https://github.com/cnopens", "followers_url": "https://api.github.com/users/cnopens/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-08-02T02:23:32
2024-08-09T21:13:42
2024-08-09T21:13:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? i install ollama using binary package ,find that ollama server address 127.0.0.1:11434 ,ip why can't change it ? who run into the problem ? later i only used nginx proxy ,but ,open-webui reponse-webui was very slow vs command consle ### OS Linux ### GPU Intel ### CPU Intel ### Ollama ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6134/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1423
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1423/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1423/comments
https://api.github.com/repos/ollama/ollama/issues/1423/events
https://github.com/ollama/ollama/issues/1423
2,031,667,493
I_kwDOJ0Z1Ps55GMkl
1,423
Allow Response Templating
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
0
2023-12-07T23:11:26
2023-12-22T22:07:06
2023-12-22T22:07:06
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In order to support formats like chatml Ollama must support post-response templating: ``` <|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> ```
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1423/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7419
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7419/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7419/comments
https://api.github.com/repos/ollama/ollama/issues/7419/events
https://github.com/ollama/ollama/issues/7419
2,624,183,476
I_kwDOJ0Z1Ps6cady0
7,419
Integrating Into Desktop App
{ "login": "brian-at-pieces", "id": 98757707, "node_id": "U_kgDOBeLsSw", "avatar_url": "https://avatars.githubusercontent.com/u/98757707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian-at-pieces", "html_url": "https://github.com/brian-at-pieces", "followers_url": "https://api.github....
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-10-30T13:52:19
2024-11-01T18:56:26
2024-11-01T18:56:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'd love to use Ollama for serving LLMs in my company's Mac/Linux/Windows desktop app, but I'm a little confused about some things. I'd like to integrate it directly rather than requring the user to manually install Ollama because that UX isn't very good IMO. You [mention in the Windows docs](https://github.com/olla...
{ "login": "brian-at-pieces", "id": 98757707, "node_id": "U_kgDOBeLsSw", "avatar_url": "https://avatars.githubusercontent.com/u/98757707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian-at-pieces", "html_url": "https://github.com/brian-at-pieces", "followers_url": "https://api.github....
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7419/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8556
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8556/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8556/comments
https://api.github.com/repos/ollama/ollama/issues/8556/events
https://github.com/ollama/ollama/issues/8556
2,808,386,790
I_kwDOJ0Z1Ps6nZJTm
8,556
Please separate deepseek-r1 from deepseek-r1-Distill!
{ "login": "win10ogod", "id": 125795763, "node_id": "U_kgDOB399sw", "avatar_url": "https://avatars.githubusercontent.com/u/125795763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/win10ogod", "html_url": "https://github.com/win10ogod", "followers_url": "https://api.github.com/users/win10o...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
0
2025-01-24T03:04:56
2025-01-24T03:20:18
2025-01-24T03:20:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please separate deepseek-r1 from deepseek-r1-Distill! This is not the same model and the architecture is different! The model on the ollama official website is a perfect obfuscation!
{ "login": "win10ogod", "id": 125795763, "node_id": "U_kgDOB399sw", "avatar_url": "https://avatars.githubusercontent.com/u/125795763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/win10ogod", "html_url": "https://github.com/win10ogod", "followers_url": "https://api.github.com/users/win10o...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8556/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5790
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5790/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5790/comments
https://api.github.com/repos/ollama/ollama/issues/5790/events
https://github.com/ollama/ollama/pull/5790
2,418,358,087
PR_kwDOJ0Z1Ps513q8w
5,790
Update llama.cpp submodule to 1bdd8ae1
{ "login": "zhongTao99", "id": 56594937, "node_id": "MDQ6VXNlcjU2NTk0OTM3", "avatar_url": "https://avatars.githubusercontent.com/u/56594937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongTao99", "html_url": "https://github.com/zhongTao99", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
1
2024-07-19T08:28:04
2024-09-03T17:20:14
2024-09-03T17:20:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5790", "html_url": "https://github.com/ollama/ollama/pull/5790", "diff_url": "https://github.com/ollama/ollama/pull/5790.diff", "patch_url": "https://github.com/ollama/ollama/pull/5790.patch", "merged_at": null }
fix:https://github.com/ollama/ollama/issues/5769
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5790/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7669
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7669/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7669/comments
https://api.github.com/repos/ollama/ollama/issues/7669/events
https://github.com/ollama/ollama/issues/7669
2,659,895,139
I_kwDOJ0Z1Ps6eisdj
7,669
Only CPU is used after rebooting
{ "login": "3DAlgoLab", "id": 83936830, "node_id": "MDQ6VXNlcjgzOTM2ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/83936830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/3DAlgoLab", "html_url": "https://github.com/3DAlgoLab", "followers_url": "https://api.github.com/users/...
[ { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg", "url": "https://api.github.com/repos/ollama/ollama/labels/linux", "name": "linux", "color": "516E70", "default": false, "description": "" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "htt...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-11-14T19:49:01
2024-11-19T03:42:01
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[I found someone wrote a thread describing only cpu is used after rebooting in windows ](https://github.com/ollama/ollama/issues/4984#issue-2347076913) I also had similar problems even in Ubuntu OS. I used the latest version(0.4.1). I guess this bug comes from that the ollama service is started faster than the in...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7669/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/249
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/249/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/249/comments
https://api.github.com/repos/ollama/ollama/issues/249/events
https://github.com/ollama/ollama/pull/249
1,830,020,783
PR_kwDOJ0Z1Ps5W2Bla
249
Add "Awesome projects built with Ollama" section to README, including Continue
{ "login": "sestinj", "id": 33237525, "node_id": "MDQ6VXNlcjMzMjM3NTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33237525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sestinj", "html_url": "https://github.com/sestinj", "followers_url": "https://api.github.com/users/sestin...
[]
closed
false
null
[]
null
0
2023-07-31T21:01:22
2023-08-01T15:07:50
2023-08-01T15:07:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/249", "html_url": "https://github.com/ollama/ollama/pull/249", "diff_url": "https://github.com/ollama/ollama/pull/249.diff", "patch_url": "https://github.com/ollama/ollama/pull/249.patch", "merged_at": "2023-08-01T15:07:50" }
Format and text are up for debate, but here's a description of Continue
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/249/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2569
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2569/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2569/comments
https://api.github.com/repos/ollama/ollama/issues/2569/events
https://github.com/ollama/ollama/issues/2569
2,140,757,706
I_kwDOJ0Z1Ps5_mV7K
2,569
Connection with http://127.0.0.1:11434/api/chat forcibly closed
{ "login": "spampinato55", "id": 47316524, "node_id": "MDQ6VXNlcjQ3MzE2NTI0", "avatar_url": "https://avatars.githubusercontent.com/u/47316524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spampinato55", "html_url": "https://github.com/spampinato55", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
15
2024-02-18T05:47:39
2024-08-25T11:21:20
2024-02-20T21:56:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've installed Ollama in Windows 10, I launch it and it runs, I can pull a model but when I want to run it this is the error message I see: "Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:52725->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host." I disabled t...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2569/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2569/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3085
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3085/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3085/comments
https://api.github.com/repos/ollama/ollama/issues/3085/events
https://github.com/ollama/ollama/issues/3085
2,182,679,116
I_kwDOJ0Z1Ps6CGQpM
3,085
Please support Zephyr 7B Gemma
{ "login": "RahulBhalley", "id": 9640948, "node_id": "MDQ6VXNlcjk2NDA5NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9640948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RahulBhalley", "html_url": "https://github.com/RahulBhalley", "followers_url": "https://api.github.com...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-03-12T21:09:21
2024-03-13T10:22:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please support [Zephyr 7B Gemma](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)! This [HG Chat](https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat) is a lot better than Zephyr beta (fine-tuned on Mistral 7B).
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3085/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3085/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7072
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7072/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7072/comments
https://api.github.com/repos/ollama/ollama/issues/7072/events
https://github.com/ollama/ollama/issues/7072
2,560,411,081
I_kwDOJ0Z1Ps6YnMXJ
7,072
Deepseek-v2.5 fails to load on a system with 24GB VRAM (RTX 3090) and 128GB RAM
{ "login": "LeonidShamis", "id": 1818114, "node_id": "MDQ6VXNlcjE4MTgxMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1818114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonidShamis", "html_url": "https://github.com/LeonidShamis", "followers_url": "https://api.github.com...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-10-02T00:06:08
2024-10-27T05:25:22
2024-10-02T05:54:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm unable to load the [deepseek-v2.5](https://ollama.com/library/deepseek-v2.5) model on a system with 24GB VRAM (RTX 3090) and 128GB RAM: ``` $ ollama --version ollama version is 0.3.11 $ $ ollama list | grep -e ID -e deepseek-v2.5 NAME ID SIZ...
{ "login": "LeonidShamis", "id": 1818114, "node_id": "MDQ6VXNlcjE4MTgxMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1818114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonidShamis", "html_url": "https://github.com/LeonidShamis", "followers_url": "https://api.github.com...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7072/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8634/comments
https://api.github.com/repos/ollama/ollama/issues/8634/events
https://github.com/ollama/ollama/issues/8634
2,815,768,303
I_kwDOJ0Z1Ps6n1Tbv
8,634
Ollama is not installing on Termux
{ "login": "imvickykumar999", "id": 50515418, "node_id": "MDQ6VXNlcjUwNTE1NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/50515418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imvickykumar999", "html_url": "https://github.com/imvickykumar999", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
3
2025-01-28T14:01:22
2025-01-30T07:10:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
~ $ curl -fsSL https://ollama.com/install.sh | sh ``` >>> Installing ollama to /usr No superuser binary detected. Are you rooted? ```
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8634/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6190
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6190/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6190/comments
https://api.github.com/repos/ollama/ollama/issues/6190/events
https://github.com/ollama/ollama/pull/6190
2,449,645,987
PR_kwDOJ0Z1Ps53gV83
6,190
fix concurrency test
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-08-05T23:36:34
2024-08-05T23:45:52
2024-08-05T23:45:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6190", "html_url": "https://github.com/ollama/ollama/pull/6190", "diff_url": "https://github.com/ollama/ollama/pull/6190.diff", "patch_url": "https://github.com/ollama/ollama/pull/6190.patch", "merged_at": "2024-08-05T23:45:50" }
errors were hidden by `integration` build tag
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6190/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8350
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8350/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8350/comments
https://api.github.com/repos/ollama/ollama/issues/8350/events
https://github.com/ollama/ollama/pull/8350
2,776,137,506
PR_kwDOJ0Z1Ps6HHapv
8,350
readme: add phi4 model
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
0
2025-01-08T19:17:03
2025-01-08T19:21:41
2025-01-08T19:21:39
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8350", "html_url": "https://github.com/ollama/ollama/pull/8350", "diff_url": "https://github.com/ollama/ollama/pull/8350.diff", "patch_url": "https://github.com/ollama/ollama/pull/8350.patch", "merged_at": "2025-01-08T19:21:39" }
readme: add phi4 model
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8350/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8689
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8689/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8689/comments
https://api.github.com/repos/ollama/ollama/issues/8689/events
https://github.com/ollama/ollama/issues/8689
2,820,234,513
I_kwDOJ0Z1Ps6oGV0R
8,689
Error LLama runner process has terminated: %!w(<nil>)
{ "login": "Saatvik-droid", "id": 55750489, "node_id": "MDQ6VXNlcjU1NzUwNDg5", "avatar_url": "https://avatars.githubusercontent.com/u/55750489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saatvik-droid", "html_url": "https://github.com/Saatvik-droid", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2025-01-30T08:49:09
2025-01-30T08:57:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Sometimes when infering from ollama using the python module I get this error. After retrying a couple of times it works and looks random to me. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.7
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8689/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3376
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3376/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3376/comments
https://api.github.com/repos/ollama/ollama/issues/3376/events
https://github.com/ollama/ollama/pull/3376
2,211,769,650
PR_kwDOJ0Z1Ps5q-g51
3,376
only generate on changes to llm subdirectory
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-03-27T19:45:40
2024-03-27T21:12:54
2024-03-27T21:12:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3376", "html_url": "https://github.com/ollama/ollama/pull/3376", "diff_url": "https://github.com/ollama/ollama/pull/3376.diff", "patch_url": "https://github.com/ollama/ollama/pull/3376.patch", "merged_at": "2024-03-27T21:12:53" }
follow up to #3375 to also skip generate (linux, macos, windows) if there's no changes to the llm subdirectory
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3376/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7374
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7374/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7374/comments
https://api.github.com/repos/ollama/ollama/issues/7374/events
https://github.com/ollama/ollama/issues/7374
2,616,050,717
I_kwDOJ0Z1Ps6b7cQd
7,374
Reinstate OLLAMA_RUNNERS_DIR
{ "login": "StarPet", "id": 85790781, "node_id": "MDQ6VXNlcjg1NzkwNzgx", "avatar_url": "https://avatars.githubusercontent.com/u/85790781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StarPet", "html_url": "https://github.com/StarPet", "followers_url": "https://api.github.com/users/StarPe...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6677367769, "node_id": ...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
5
2024-10-26T18:28:24
2024-11-06T15:38:33
2024-11-06T15:38:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It appears that the OLLAMA_RUNNERS_DIR was removed from the code - at least I couldn't find it in github's search function. Currently (0.3.14) it is using /tmp/ollama<number>/runners again, as before the introduction of the OLLAMA_RUNNERS_DIR (or when not set). IMHO, using /tmp for executables is not a good idea. I'd ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7374/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3482
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3482/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3482/comments
https://api.github.com/repos/ollama/ollama/issues/3482/events
https://github.com/ollama/ollama/issues/3482
2,224,411,948
I_kwDOJ0Z1Ps6EldUs
3,482
Please add Qwen-VL!
{ "login": "tikeoewoew", "id": 12619882, "node_id": "MDQ6VXNlcjEyNjE5ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/12619882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tikeoewoew", "html_url": "https://github.com/tikeoewoew", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-04-04T03:59:14
2024-07-29T01:25:43
2024-07-25T15:15:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? This image recognition model is very popular in China, so please add it to ollama: https://huggingface.co/Qwen/Qwen-VL
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3482/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3482/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/595
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/595/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/595/comments
https://api.github.com/repos/ollama/ollama/issues/595/events
https://github.com/ollama/ollama/pull/595
1,912,389,172
PR_kwDOJ0Z1Ps5bLAcR
595
ignore systemctl is-system-running exit code
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-25T22:47:54
2023-09-25T22:49:47
2023-09-25T22:49:47
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/595", "html_url": "https://github.com/ollama/ollama/pull/595", "diff_url": "https://github.com/ollama/ollama/pull/595.diff", "patch_url": "https://github.com/ollama/ollama/pull/595.patch", "merged_at": "2023-09-25T22:49:47" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/595/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8014
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8014/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8014/comments
https://api.github.com/repos/ollama/ollama/issues/8014/events
https://github.com/ollama/ollama/pull/8014
2,727,590,489
PR_kwDOJ0Z1Ps6EkWZ1
8,014
Avoid underflow when FreeMemory < overhead
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-12-09T16:15:23
2024-12-10T17:10:40
2024-12-10T17:10:40
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8014", "html_url": "https://github.com/ollama/ollama/pull/8014", "diff_url": "https://github.com/ollama/ollama/pull/8014.diff", "patch_url": "https://github.com/ollama/ollama/pull/8014.patch", "merged_at": "2024-12-10T17:10:40" }
Fixes: #8011
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8014/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1031
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1031/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1031/comments
https://api.github.com/repos/ollama/ollama/issues/1031/events
https://github.com/ollama/ollama/pull/1031
1,981,518,033
PR_kwDOJ0Z1Ps5e0LjA
1,031
Added logit_bias support
{ "login": "Vokturz", "id": 21696514, "node_id": "MDQ6VXNlcjIxNjk2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/21696514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vokturz", "html_url": "https://github.com/Vokturz", "followers_url": "https://api.github.com/users/Voktur...
[]
open
false
null
[]
null
2
2023-11-07T14:38:14
2024-03-16T16:23:18
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1031", "html_url": "https://github.com/ollama/ollama/pull/1031", "diff_url": "https://github.com/ollama/ollama/pull/1031.diff", "patch_url": "https://github.com/ollama/ollama/pull/1031.patch", "merged_at": null }
This PR brings the `logit_bias` functionality, already present in llama.cpp, which allows users to adjust the likelihood of token occurrences in generated text. For example, for the prompt `"Once upon a "` we have: 1. Without `logit_bias` ```bash curl -X POST http://localhost:11434/api/generate -d '{ ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1031/reactions", "total_count": 9, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1031/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2628
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2628/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2628/comments
https://api.github.com/repos/ollama/ollama/issues/2628/events
https://github.com/ollama/ollama/issues/2628
2,146,310,477
I_kwDOJ0Z1Ps5_7hlN
2,628
libext_server.a(llava.cpp.o) { in archive is not an object | not an ELF file }
{ "login": "pavelsr", "id": 1158473, "node_id": "MDQ6VXNlcjExNTg0NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1158473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pavelsr", "html_url": "https://github.com/pavelsr", "followers_url": "https://api.github.com/users/pavelsr/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-21T09:57:34
2024-03-27T20:53:23
2024-03-27T20:53:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm trying to build ollama with AMD GPU support via the command ``` ROCM_PATH=/opt/rocm CLBlast_DIR=/usr/lib/x86_64-linux-gnu/cmake/CLBlast go generate -tags rocm ./... ``` and in the final stage of the build I got the error mentioned in the issue header: ![Screenshot from 2024-02-21 12-55-49](https://github.com/ollama/ollam...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2628/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1414
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1414/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1414/comments
https://api.github.com/repos/ollama/ollama/issues/1414/events
https://github.com/ollama/ollama/issues/1414
2,029,939,345
I_kwDOJ0Z1Ps54_mqR
1,414
Windows install runs into errors
{ "login": "csaben", "id": 76020733, "node_id": "MDQ6VXNlcjc2MDIwNzMz", "avatar_url": "https://avatars.githubusercontent.com/u/76020733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/csaben", "html_url": "https://github.com/csaben", "followers_url": "https://api.github.com/users/csaben/fo...
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
4
2023-12-07T05:33:03
2023-12-11T15:48:16
2023-12-11T15:48:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After following the manual install instructions and running ```go build```, I receive the following errors: ``` # github.com/jmorganca/ollama/readline readline\readline.go:199:12: undefined: syscall.Kill readline\readline.go:199:28: undefined: syscall.SIGSTOP ``` After messing with readline.go I just managed to get a .exe file that do...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1414/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/377
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/377/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/377/comments
https://api.github.com/repos/ollama/ollama/issues/377/events
https://github.com/ollama/ollama/pull/377
1,855,878,607
PR_kwDOJ0Z1Ps5YNGuk
377
Strip protocol from model path
{ "login": "rlbaker", "id": 967417, "node_id": "MDQ6VXNlcjk2NzQxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/967417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rlbaker", "html_url": "https://github.com/rlbaker", "followers_url": "https://api.github.com/users/rlbaker/fo...
[]
closed
false
null
[]
null
5
2023-08-18T00:40:27
2023-08-22T04:56:57
2023-08-22T04:56:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/377", "html_url": "https://github.com/ollama/ollama/pull/377", "diff_url": "https://github.com/ollama/ollama/pull/377.diff", "patch_url": "https://github.com/ollama/ollama/pull/377.patch", "merged_at": "2023-08-22T04:56:57" }
Took a whack at fixing https://github.com/jmorganca/ollama/issues/371 and reorganized the switch logic slightly as well. Wasn't sure if it was better to strip all protocols or just `https://`, so if you'd like the latter I can switch it to just a `strings.TrimPrefix`. Happy to back out the updated switch code as wel...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/377/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6312
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6312/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6312/comments
https://api.github.com/repos/ollama/ollama/issues/6312/events
https://github.com/ollama/ollama/issues/6312
2,459,666,429
I_kwDOJ0Z1Ps6Sm4f9
6,312
how to force ollama to use different cpu runners / how to compile windows avx512 runner?
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5808482718, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
16
2024-08-11T16:31:17
2024-11-04T19:16:38
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? According to the logs, ollama seems to be using only AVX, not AVX2. How would I fix this and force AVX2 or higher? Also wondering how I compile the AVX512 runner for Windows? I have compiled other runners and the CUDA runner fine, but it seems regardless of what I try to set it just generates cpu, ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6312/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6545
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6545/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6545/comments
https://api.github.com/repos/ollama/ollama/issues/6545/events
https://github.com/ollama/ollama/pull/6545
2,492,937,842
PR_kwDOJ0Z1Ps55wQ1a
6,545
add llama3.1 chat template
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-08-28T20:30:23
2024-08-28T21:03:22
2024-08-28T21:03:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6545", "html_url": "https://github.com/ollama/ollama/pull/6545", "diff_url": "https://github.com/ollama/ollama/pull/6545.diff", "patch_url": "https://github.com/ollama/ollama/pull/6545.patch", "merged_at": "2024-08-28T21:03:20" }
This change adds the llama3.1 chat template so that it will be autodetected in `ollama create`.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6545/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7665
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7665/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7665/comments
https://api.github.com/repos/ollama/ollama/issues/7665/events
https://github.com/ollama/ollama/issues/7665
2,658,023,476
I_kwDOJ0Z1Ps6ebjg0
7,665
Dealing with passing huge attachments to models?
{ "login": "robotom", "id": 45123215, "node_id": "MDQ6VXNlcjQ1MTIzMjE1", "avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/robotom", "html_url": "https://github.com/robotom", "followers_url": "https://api.github.com/users/roboto...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
1
2024-11-14T08:36:09
2024-11-17T12:24:08
2024-11-17T12:24:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi all, I'm wondering what the best way is to pass huge amounts of "attachment" data to any of the models? A typical doc might be > 2 million words (~16 million characters). I know this amount of data can't be fed to any model under 70B as a prompt due to context windows. If I go with llama 3.1 405B on a 4 x H100 NVL or 8...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7665/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5585
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5585/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5585/comments
https://api.github.com/repos/ollama/ollama/issues/5585/events
https://github.com/ollama/ollama/pull/5585
2,399,405,473
PR_kwDOJ0Z1Ps505WvW
5,585
Create SECURITY.md
{ "login": "thejefflarson", "id": 55365, "node_id": "MDQ6VXNlcjU1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/55365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thejefflarson", "html_url": "https://github.com/thejefflarson", "followers_url": "https://api.github.com/user...
[]
closed
false
null
[]
null
2
2024-07-09T23:25:27
2024-08-02T22:19:44
2024-08-02T16:53:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5585", "html_url": "https://github.com/ollama/ollama/pull/5585", "diff_url": "https://github.com/ollama/ollama/pull/5585.diff", "patch_url": "https://github.com/ollama/ollama/pull/5585.patch", "merged_at": null }
I know that Ollama is under active development, but it would be great to have a security policy in place for folks to report bugs. Also, would Ollama be open to enabling code scanning? https://docs.github.com/en/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning
{ "login": "thejefflarson", "id": 55365, "node_id": "MDQ6VXNlcjU1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/55365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thejefflarson", "html_url": "https://github.com/thejefflarson", "followers_url": "https://api.github.com/user...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5585/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5585/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6540
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6540/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6540/comments
https://api.github.com/repos/ollama/ollama/issues/6540/events
https://github.com/ollama/ollama/issues/6540
2,491,433,636
I_kwDOJ0Z1Ps6UgEKk
6,540
actively retrieves the content returned from the web page
{ "login": "Nurburgring-Zhang", "id": 171787109, "node_id": "U_kgDOCj1DZQ", "avatar_url": "https://avatars.githubusercontent.com/u/171787109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nurburgring-Zhang", "html_url": "https://github.com/Nurburgring-Zhang", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-08-28T08:31:10
2024-08-28T16:35:48
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Expected that ollama can automatically identify the model's limits: when a question exceeds the capacity of the model, ollama actively retrieves the content returned from the web page and passes it to the model, and the model analyzes the returned content and finally gives the answer.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6540/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4763
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4763/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4763/comments
https://api.github.com/repos/ollama/ollama/issues/4763/events
https://github.com/ollama/ollama/issues/4763
2,329,008,117
I_kwDOJ0Z1Ps6K0df1
4,763
I created Ollama - Open WebUI Script - Give it a try!
{ "login": "Special-Niewbie", "id": 64843123, "node_id": "MDQ6VXNlcjY0ODQzMTIz", "avatar_url": "https://avatars.githubusercontent.com/u/64843123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Special-Niewbie", "html_url": "https://github.com/Special-Niewbie", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-06-01T08:31:52
2024-09-14T17:13:43
2024-09-14T17:13:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I created "Ollama-Open-WebUI-Script" for those who, like me, prefer not to overload their PC at startup with too many resources. Instead of manually starting Docker, then Ollama, and finally Open WebUI, this script simplifies the entire process, giving you direct and easy access to these two fantastic projects. Give...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4763/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/4763/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6584
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6584/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6584/comments
https://api.github.com/repos/ollama/ollama/issues/6584/events
https://github.com/ollama/ollama/pull/6584
2,499,328,604
PR_kwDOJ0Z1Ps56E8ab
6,584
Add serve step to quickstart
{ "login": "anitagraser", "id": 590385, "node_id": "MDQ6VXNlcjU5MDM4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/590385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anitagraser", "html_url": "https://github.com/anitagraser", "followers_url": "https://api.github.com/user...
[]
closed
false
null
[]
null
2
2024-09-01T09:11:18
2024-09-02T19:42:04
2024-09-02T19:32:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6584", "html_url": "https://github.com/ollama/ollama/pull/6584", "diff_url": "https://github.com/ollama/ollama/pull/6584.diff", "patch_url": "https://github.com/ollama/ollama/pull/6584.patch", "merged_at": null }
This missing step trips up beginners, as shown in https://github.com/ollama/ollama/issues/2727#issuecomment-1969331044
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6584/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5735/comments
https://api.github.com/repos/ollama/ollama/issues/5735/events
https://github.com/ollama/ollama/issues/5735
2,412,328,010
I_kwDOJ0Z1Ps6PyTRK
5,735
How to Set Up RAG / LLamaIndex with Windows Preview?
{ "login": "elikakohen", "id": 11563283, "node_id": "MDQ6VXNlcjExNTYzMjgz", "avatar_url": "https://avatars.githubusercontent.com/u/11563283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elikakohen", "html_url": "https://github.com/elikakohen", "followers_url": "https://api.github.com/use...
[]
open
false
null
[]
null
0
2024-07-17T00:59:03
2024-07-17T01:00:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was wondering how, or if there is a way, to set up RAG with the Windows version?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5735/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2627
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2627/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2627/comments
https://api.github.com/repos/ollama/ollama/issues/2627/events
https://github.com/ollama/ollama/issues/2627
2,146,269,379
I_kwDOJ0Z1Ps5_7XjD
2,627
Error: listen tcp 127.0.0.1:11434: bind:
{ "login": "szymonk92", "id": 4785319, "node_id": "MDQ6VXNlcjQ3ODUzMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4785319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szymonk92", "html_url": "https://github.com/szymonk92", "followers_url": "https://api.github.com/users/sz...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
13
2024-02-21T09:41:05
2024-09-29T01:41:48
2024-03-27T20:54:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Windows 10, I cannot start Ollama, ``` $ ollama serve Error: listen tcp 127.0.0.1:11434: bind: An attempt was made to access a socket in a way forbidden by its access permissions. ``` from **app.log** ``` time=2024-02-21T10:04:42.504+01:00 level=WARN source=server.go:109 msg="server crash 332 - exit code 1 ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2627/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2627/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2624
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2624/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2624/comments
https://api.github.com/repos/ollama/ollama/issues/2624/events
https://github.com/ollama/ollama/issues/2624
2,145,890,647
I_kwDOJ0Z1Ps5_57FX
2,624
Support for Tinyllava
{ "login": "oliverbob", "id": 23272429, "node_id": "MDQ6VXNlcjIzMjcyNDI5", "avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oliverbob", "html_url": "https://github.com/oliverbob", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-02-21T05:49:08
2024-09-06T14:39:07
2024-05-11T00:40:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In addition to support for moondream #2259, which is lightning fast compared to llava, I believe there are some capable folks here who could make a GGUF version of Tinyllava, which is even faster. I tried it with safetensors and it works really well. It's been trained on the llava dataset, just like moondream. https:/...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2624/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/2624/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1867
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1867/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1867/comments
https://api.github.com/repos/ollama/ollama/issues/1867/events
https://github.com/ollama/ollama/issues/1867
2,072,415,115
I_kwDOJ0Z1Ps57houL
1,867
Ollama barely uses any RAM
{ "login": "neuleo", "id": 99101285, "node_id": "U_kgDOBegqZQ", "avatar_url": "https://avatars.githubusercontent.com/u/99101285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuleo", "html_url": "https://github.com/neuleo", "followers_url": "https://api.github.com/users/neuleo/followers"...
[]
closed
false
null
[]
null
2
2024-01-09T13:54:13
2024-01-09T18:43:22
2024-01-09T18:43:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hey guys, I run Ollama in Docker and mostly use 7b models, but my RAM usage stays under 4 GB, sometimes even below 3 GB, while the recommendation is 8 GB of RAM. The machine has a 4-core CPU, and generation is very slow even though I have 24 GB of RAM. I don't have a video card, though. I'm new to this, so can anyone te...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1867/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2516
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2516/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2516/comments
https://api.github.com/repos/ollama/ollama/issues/2516/events
https://github.com/ollama/ollama/pull/2516
2,136,985,321
PR_kwDOJ0Z1Ps5nAGCZ
2,516
Fix a couple duplicate instance bugs
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-02-15T16:41:42
2024-02-15T23:52:46
2024-02-15T23:52:43
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2516", "html_url": "https://github.com/ollama/ollama/pull/2516", "diff_url": "https://github.com/ollama/ollama/pull/2516.diff", "patch_url": "https://github.com/ollama/ollama/pull/2516.patch", "merged_at": "2024-02-15T23:52:43" }
- Prevent the installer from running multiple times concurrently - Detect multiple apps running and exit ~~with a message~~
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2516/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8255
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8255/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8255/comments
https://api.github.com/repos/ollama/ollama/issues/8255/events
https://github.com/ollama/ollama/issues/8255
2,760,642,962
I_kwDOJ0Z1Ps6kjBGS
8,255
question: the Windows version is very slow when accessing the API
{ "login": "liu9187", "id": 38241603, "node_id": "MDQ6VXNlcjM4MjQxNjAz", "avatar_url": "https://avatars.githubusercontent.com/u/38241603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liu9187", "html_url": "https://github.com/liu9187", "followers_url": "https://api.github.com/users/liu918...
[ { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", ...
closed
false
null
[]
null
7
2024-12-27T09:34:12
2024-12-30T02:38:24
2024-12-30T02:38:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Excuse me, the Windows version is very slow when accessing the API. What is the reason? Using the command line is faster. System: Windows, Memory: 24 GB
{ "login": "liu9187", "id": 38241603, "node_id": "MDQ6VXNlcjM4MjQxNjAz", "avatar_url": "https://avatars.githubusercontent.com/u/38241603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liu9187", "html_url": "https://github.com/liu9187", "followers_url": "https://api.github.com/users/liu918...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8255/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4462
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4462/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4462/comments
https://api.github.com/repos/ollama/ollama/issues/4462/events
https://github.com/ollama/ollama/pull/4462
2,298,992,426
PR_kwDOJ0Z1Ps5vl1FP
4,462
Port cuda/rocm skip build vars to linux
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-05-15T22:59:12
2024-05-15T23:27:50
2024-05-15T23:27:47
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4462", "html_url": "https://github.com/ollama/ollama/pull/4462", "diff_url": "https://github.com/ollama/ollama/pull/4462.diff", "patch_url": "https://github.com/ollama/ollama/pull/4462.patch", "merged_at": "2024-05-15T23:27:47" }
Windows already implements these; carry them over to Linux.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4462/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1141
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1141/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1141/comments
https://api.github.com/repos/ollama/ollama/issues/1141/events
https://github.com/ollama/ollama/issues/1141
1,995,322,582
I_kwDOJ0Z1Ps527jTW
1,141
Support for vietnamese-llama2-7b
{ "login": "khoint0210", "id": 12799726, "node_id": "MDQ6VXNlcjEyNzk5NzI2", "avatar_url": "https://avatars.githubusercontent.com/u/12799726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khoint0210", "html_url": "https://github.com/khoint0210", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
1
2023-11-15T18:38:50
2024-12-23T01:10:18
2024-12-23T01:10:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi folks, I really love this project. I just wonder, could you support this model: https://huggingface.co/bkai-foundation-models/vietnamese-llama2-7b-40GB It would be fantastic to have this run inside Ollama.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1141/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2620
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2620/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2620/comments
https://api.github.com/repos/ollama/ollama/issues/2620/events
https://github.com/ollama/ollama/issues/2620
2,145,572,225
I_kwDOJ0Z1Ps5_4tWB
2,620
[Thank you] Thanks to the Ollama community
{ "login": "chuangtc", "id": 2288469, "node_id": "MDQ6VXNlcjIyODg0Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/2288469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chuangtc", "html_url": "https://github.com/chuangtc", "followers_url": "https://api.github.com/users/chuan...
[]
closed
false
null
[]
null
1
2024-02-21T01:06:32
2024-03-11T21:09:58
2024-03-11T21:09:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In the Taiwanese community, we ran a survey to see which are the top 5 local LLMs used by AI enthusiasts, and Ollama is listed in the top 5. Here is the candidate list: https://matilabs.ai/2024/02/07/run-llms-locally/ Thanks for the Ollama community's hard work. We really love this project. I feel it's easy and straig...
{ "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyev...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2620/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2620/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3510
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3510/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3510/comments
https://api.github.com/repos/ollama/ollama/issues/3510/events
https://github.com/ollama/ollama/issues/3510
2,229,108,379
I_kwDOJ0Z1Ps6E3X6b
3,510
Databases
{ "login": "trymeouteh", "id": 31172274, "node_id": "MDQ6VXNlcjMxMTcyMjc0", "avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trymeouteh", "html_url": "https://github.com/trymeouteh", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-04-06T06:42:46
2024-05-09T02:26:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? A way to manage, create, and train databases that run on top of models ### How should we solve this? Docker has images, containers, and volumes; Ollama currently only has models. Add a new category for databases. Databases can be managed by... - Import from file -...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3510/reactions", "total_count": 5, "+1": 2, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3510/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8046
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8046/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8046/comments
https://api.github.com/repos/ollama/ollama/issues/8046/events
https://github.com/ollama/ollama/issues/8046
2,733,068,523
I_kwDOJ0Z1Ps6i51Dr
8,046
Toggle theme
{ "login": "Abubakkar13", "id": 45032674, "node_id": "MDQ6VXNlcjQ1MDMyNjc0", "avatar_url": "https://avatars.githubusercontent.com/u/45032674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abubakkar13", "html_url": "https://github.com/Abubakkar13", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6573197867, "node_id": ...
closed
false
{ "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyev...
[ { "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.git...
null
1
2024-12-11T14:03:39
2024-12-25T21:51:26
2024-12-25T21:51:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hey, could you please add a feature for toggling between dark and light modes to the Ollama site? It would greatly enhance the user experience by allowing customization based on preference and environment. A simple toggle button would be ideal!
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8046/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8046/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8656
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8656/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8656/comments
https://api.github.com/repos/ollama/ollama/issues/8656/events
https://github.com/ollama/ollama/pull/8656
2,818,099,446
PR_kwDOJ0Z1Ps6JWxCt
8,656
Add DeepSeek R1 in README
{ "login": "zakk616", "id": 26119949, "node_id": "MDQ6VXNlcjI2MTE5OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/26119949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zakk616", "html_url": "https://github.com/zakk616", "followers_url": "https://api.github.com/users/zakk61...
[]
open
false
null
[]
null
2
2025-01-29T12:38:21
2025-01-30T05:37:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8656", "html_url": "https://github.com/ollama/ollama/pull/8656", "diff_url": "https://github.com/ollama/ollama/pull/8656.diff", "patch_url": "https://github.com/ollama/ollama/pull/8656.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8656/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3136
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3136/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3136/comments
https://api.github.com/repos/ollama/ollama/issues/3136/events
https://github.com/ollama/ollama/issues/3136
2,185,694,105
I_kwDOJ0Z1Ps6CRwuZ
3,136
How to install offline? Is there an installation package available for download?
{ "login": "yuanjie-ai", "id": 20265321, "node_id": "MDQ6VXNlcjIwMjY1MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/20265321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuanjie-ai", "html_url": "https://github.com/yuanjie-ai", "followers_url": "https://api.github.com/use...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-03-14T08:05:29
2024-03-14T20:26:34
2024-03-14T20:25:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How to install offline? Is there an installation package available for download?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3136/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3136/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6068
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6068/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6068/comments
https://api.github.com/repos/ollama/ollama/issues/6068/events
https://github.com/ollama/ollama/issues/6068
2,436,948,189
I_kwDOJ0Z1Ps6RQODd
6,068
ollama serve: how to choose a model name
{ "login": "ruanjianlun", "id": 146827112, "node_id": "U_kgDOCMBnaA", "avatar_url": "https://avatars.githubusercontent.com/u/146827112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanjianlun", "html_url": "https://github.com/ruanjianlun", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-30T05:44:11
2024-07-31T01:39:26
2024-07-31T01:39:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi guys, I have a question. I am using Ollama on Windows: when I run `ollama serve`, how do I select the model? For example, the following operation fails: (base) PS C:\Users\Administrator> ollama serve --model codellama:7b Error: unknown flag: --model (base) PS C:\Users\Administrator>
{ "login": "ruanjianlun", "id": 146827112, "node_id": "U_kgDOCMBnaA", "avatar_url": "https://avatars.githubusercontent.com/u/146827112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanjianlun", "html_url": "https://github.com/ruanjianlun", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6068/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8033
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8033/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8033/comments
https://api.github.com/repos/ollama/ollama/issues/8033/events
https://github.com/ollama/ollama/issues/8033
2,731,004,328
I_kwDOJ0Z1Ps6ix9Go
8,033
nvcc compilation problem -- error: user-defined literal operator not found
{ "login": "envolution", "id": 12188773, "node_id": "MDQ6VXNlcjEyMTg4Nzcz", "avatar_url": "https://avatars.githubusercontent.com/u/12188773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/envolution", "html_url": "https://github.com/envolution", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7700262114, "node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-12-10T19:27:04
2025-01-10T21:35:32
2024-12-16T06:05:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Compilation fails during 'make cuda_v12' ### Environment ``` $ /opt/cuda/bin/nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2024 NVIDIA Corporation Built on Tue_Oct_29_23:50:19_PDT_2024 Cuda compilation tools, release 12.6, V12.6.85 Build cuda_12.6.r12.6/compi...
{ "login": "envolution", "id": 12188773, "node_id": "MDQ6VXNlcjEyMTg4Nzcz", "avatar_url": "https://avatars.githubusercontent.com/u/12188773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/envolution", "html_url": "https://github.com/envolution", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8033/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6771
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6771/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6771/comments
https://api.github.com/repos/ollama/ollama/issues/6771/events
https://github.com/ollama/ollama/issues/6771
2,521,544,057
I_kwDOJ0Z1Ps6WS7V5
6,771
Inconsistent Responses from Identical Models
{ "login": "wahidur028", "id": 127589724, "node_id": "U_kgDOB5rdXA", "avatar_url": "https://avatars.githubusercontent.com/u/127589724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wahidur028", "html_url": "https://github.com/wahidur028", "followers_url": "https://api.github.com/users/wah...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 7706482389, "node_id": "LA_kwDOJ0Z1Ps8AAAABy1...
closed
false
null
[]
null
1
2024-09-12T07:11:03
2024-11-06T00:32:29
2024-11-06T00:32:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am new to Ollama and have noticed that when I ask a query using Ollama, the model's responses are quite poor. However, if I ask the same query at https://www.llama2.ai/, I receive much better responses. Can anyone explain what might be causing this difference? What could I be doing wrong? ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6771/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/570
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/570/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/570/comments
https://api.github.com/repos/ollama/ollama/issues/570/events
https://github.com/ollama/ollama/pull/570
1,907,923,348
PR_kwDOJ0Z1Ps5a8JIo
570
fix HEAD request
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-21T23:41:17
2023-09-21T23:56:18
2023-09-21T23:56:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/570", "html_url": "https://github.com/ollama/ollama/pull/570", "diff_url": "https://github.com/ollama/ollama/pull/570.diff", "patch_url": "https://github.com/ollama/ollama/pull/570.patch", "merged_at": "2023-09-21T23:56:17" }
HEAD requests should respond like their GET counterparts, except without a response body. The previous implementation didn't quite satisfy this: it didn't attach a response body, so the content length was zero, while the GET request responded with `Ollama is running`, content length 17. Despite having a response...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/570/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4511
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4511/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4511/comments
https://api.github.com/repos/ollama/ollama/issues/4511/events
https://github.com/ollama/ollama/issues/4511
2,303,962,194
I_kwDOJ0Z1Ps6JU6xS
4,511
Feature Request: Force-Off ROCm and CUDA builds in `gen_linux.sh` even if they are present.
{ "login": "dreirund", "id": 1590519, "node_id": "MDQ6VXNlcjE1OTA1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/1590519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dreirund", "html_url": "https://github.com/dreirund", "followers_url": "https://api.github.com/users/dreir...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-05-18T09:38:09
2024-06-07T15:30:10
2024-05-21T20:24:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, there is a need to force a build without CUDA or ROCm even if some of their libraries are present on the system, but the current `gen_linux.sh` forces a ROCm build if certain libraries are found, even when ROCm is not desired. [Arch Linux AUR package `ollama-nogpu-git`](https://aur.archlinux.org/pkgbase/ollama-nog...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4511/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4511/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/296
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/296/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/296/comments
https://api.github.com/repos/ollama/ollama/issues/296/events
https://github.com/ollama/ollama/issues/296
1,838,055,021
I_kwDOJ0Z1Ps5tjn5t
296
Provide a way to override system prompt at runtime
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
0
2023-08-06T05:16:49
2023-08-08T04:56:23
2023-08-08T04:56:23
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
```bash curl -X POST http://localhost:11434/api/generate -d '{ "model": "llama2", "system": "You are a helpful assistant.", "prompt": "hello" }' ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/296/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3100
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3100/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3100/comments
https://api.github.com/repos/ollama/ollama/issues/3100/events
https://github.com/ollama/ollama/issues/3100
2,183,638,447
I_kwDOJ0Z1Ps6CJ62v
3,100
C4AI Command
{ "login": "AdaptiveStep", "id": 39104384, "node_id": "MDQ6VXNlcjM5MTA0Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/39104384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdaptiveStep", "html_url": "https://github.com/AdaptiveStep", "followers_url": "https://api.github.c...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-03-13T10:27:24
2024-04-15T15:36:36
2024-04-15T15:36:36
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add the c4ai-command model. It's really good at translation and can handle 100 languages. https://huggingface.co/CohereForAI/c4ai-command-r-v01
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3100/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3100/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5203
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5203/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5203/comments
https://api.github.com/repos/ollama/ollama/issues/5203/events
https://github.com/ollama/ollama/issues/5203
2,367,053,249
I_kwDOJ0Z1Ps6NFl3B
5,203
OLLAMA_MODELS is not honored after being changed (beyond the first time it is set)
{ "login": "Nantris", "id": 6835891, "node_id": "MDQ6VXNlcjY4MzU4OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nantris", "html_url": "https://github.com/Nantris", "followers_url": "https://api.github.com/users/Nantris/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2024-06-21T18:28:08
2024-06-21T22:42:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I set `OLLAMA_MODELS` to a new directory to ensure a large download would have space, and then tried to move `OLLAMA_MODELS` back to the default directory on Windows - and although the environment variable is set, ollama continues to search in the previous `OLLAMA_MODELS` path, even after ...
{ "login": "Nantris", "id": 6835891, "node_id": "MDQ6VXNlcjY4MzU4OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6835891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nantris", "html_url": "https://github.com/Nantris", "followers_url": "https://api.github.com/users/Nantris/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5203/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/603
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/603/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/603/comments
https://api.github.com/repos/ollama/ollama/issues/603/events
https://github.com/ollama/ollama/issues/603
1,913,187,362
I_kwDOJ0Z1Ps5yCOwi
603
Considering graphql instead of classic http
{ "login": "FairyTail2000", "id": 22645621, "node_id": "MDQ6VXNlcjIyNjQ1NjIx", "avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FairyTail2000", "html_url": "https://github.com/FairyTail2000", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
2
2023-09-26T10:19:39
2023-09-30T05:09:45
2023-09-30T05:09:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I won't explain here what [graphql](https://graphql.org) is. ## How does this project benefit from graphql vs classical http? With graphql you can get more info / less info per http call. Why is this relevant? Third party integration. Example with my own frontend: - I want to get all models (1 call) - I want t...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/603/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8450
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8450/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8450/comments
https://api.github.com/repos/ollama/ollama/issues/8450/events
https://github.com/ollama/ollama/issues/8450
2,792,010,100
I_kwDOJ0Z1Ps6marF0
8,450
ollama v0.5.6 /save bug?
{ "login": "Feng-Yong-Qi", "id": 130546218, "node_id": "U_kgDOB8f6Kg", "avatar_url": "https://avatars.githubusercontent.com/u/130546218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Feng-Yong-Qi", "html_url": "https://github.com/Feng-Yong-Qi", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-16T08:23:31
2025-01-18T05:25:03
2025-01-18T05:25:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Excuse me, in v0.5.6 `/save` errors. Operating System: Rocky Linux 9.4. The CPU architecture is x86. v0.5.5 vs v0.5.6: ![Image](https://github.com/user-attachments/assets/da79a935-d802-4714-9678-b37795cd2835) ![Image](https://github.com/user-attachments/assets/032cffca-a200-4e33-8d77-971d419cd...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8450/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8430
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8430/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8430/comments
https://api.github.com/repos/ollama/ollama/issues/8430/events
https://github.com/ollama/ollama/issues/8430
2,788,596,735
I_kwDOJ0Z1Ps6mNpv_
8,430
Multi GPU, default GPU setting, specific model pin to specific GPU
{ "login": "dariuszsekula", "id": 118318131, "node_id": "U_kgDOBw1kMw", "avatar_url": "https://avatars.githubusercontent.com/u/118318131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dariuszsekula", "html_url": "https://github.com/dariuszsekula", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
4
2025-01-15T01:08:46
2025-01-16T18:05:05
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a multi-GPU configuration with different GPU models and different memory sizes. I wish: 1. we could select a default GPU for all models (potentially the fastest one with the most memory) 2. we could pin a specific model to a specific GPU, to keep small models in the small-VRAM GPU and use the fastest/highest_VRAM ca...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8430/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7701
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7701/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7701/comments
https://api.github.com/repos/ollama/ollama/issues/7701/events
https://github.com/ollama/ollama/issues/7701
2,664,440,219
I_kwDOJ0Z1Ps6e0CGb
7,701
[question] Why are there such large differences in installation package sizes for different CPU architectures and systems?
{ "login": "netjune", "id": 17109782, "node_id": "MDQ6VXNlcjE3MTA5Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/17109782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/netjune", "html_url": "https://github.com/netjune", "followers_url": "https://api.github.com/users/netjun...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
0
2024-11-16T14:31:49
2024-11-17T09:41:56
2024-11-17T09:41:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![screenshot-20241116-222333](https://github.com/user-attachments/assets/a649b559-5b4b-4b06-99e9-67ff7929606b) More than 10x
{ "login": "netjune", "id": 17109782, "node_id": "MDQ6VXNlcjE3MTA5Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/17109782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/netjune", "html_url": "https://github.com/netjune", "followers_url": "https://api.github.com/users/netjun...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7701/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1206
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1206/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1206/comments
https://api.github.com/repos/ollama/ollama/issues/1206/events
https://github.com/ollama/ollama/pull/1206
2,002,338,841
PR_kwDOJ0Z1Ps5f6uPA
1,206
README: link to LangChainGo for talking to ollama, with an example
{ "login": "eliben", "id": 1130906, "node_id": "MDQ6VXNlcjExMzA5MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliben", "html_url": "https://github.com/eliben", "followers_url": "https://api.github.com/users/eliben/foll...
[]
closed
false
null
[]
null
0
2023-11-20T14:33:01
2023-11-20T15:35:07
2023-11-20T15:35:07
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1206", "html_url": "https://github.com/ollama/ollama/pull/1206", "diff_url": "https://github.com/ollama/ollama/pull/1206.diff", "patch_url": "https://github.com/ollama/ollama/pull/1206.patch", "merged_at": "2023-11-20T15:35:07" }
null
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1206/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5616
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5616/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5616/comments
https://api.github.com/repos/ollama/ollama/issues/5616/events
https://github.com/ollama/ollama/pull/5616
2,401,847,462
PR_kwDOJ0Z1Ps51Bni3
5,616
fix: quant err message
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[]
closed
false
null
[]
null
0
2024-07-10T22:30:02
2024-07-12T00:24:31
2024-07-12T00:24:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5616", "html_url": "https://github.com/ollama/ollama/pull/5616", "diff_url": "https://github.com/ollama/ollama/pull/5616.diff", "patch_url": "https://github.com/ollama/ollama/pull/5616.patch", "merged_at": "2024-07-12T00:24:29" }
`Error: quantization of this model is not supported by your version of Ollama. You may need to upgrade` Resolves: https://github.com/ollama/ollama/issues/5531
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5616/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6198
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6198/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6198/comments
https://api.github.com/repos/ollama/ollama/issues/6198/events
https://github.com/ollama/ollama/issues/6198
2,450,702,986
I_kwDOJ0Z1Ps6SEsKK
6,198
Request to Add JAIS 70B Model
{ "login": "umar052001", "id": 79453927, "node_id": "MDQ6VXNlcjc5NDUzOTI3", "avatar_url": "https://avatars.githubusercontent.com/u/79453927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umar052001", "html_url": "https://github.com/umar052001", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
5
2024-08-06T11:58:03
2024-10-22T13:32:44
2024-10-22T13:32:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Description:** I would like to suggest adding the JAIS 70B model to Ollama. The model is available on Hugging Face [here](https://huggingface.co/inceptionai/jais-adapted-70b) and offers significant advancements in Arabic-English natural language processing. It would be a valuable addition for users interested in Ara...
{ "login": "umar052001", "id": 79453927, "node_id": "MDQ6VXNlcjc5NDUzOTI3", "avatar_url": "https://avatars.githubusercontent.com/u/79453927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umar052001", "html_url": "https://github.com/umar052001", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6198/reactions", "total_count": 21, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 8, "rocket": 7, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6198/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2277
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2277/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2277/comments
https://api.github.com/repos/ollama/ollama/issues/2277/events
https://github.com/ollama/ollama/issues/2277
2,108,235,095
I_kwDOJ0Z1Ps59qR1X
2,277
How to set ROCR_VISIBLE_DEVICES to 0
{ "login": "meminens", "id": 42714627, "node_id": "MDQ6VXNlcjQyNzE0NjI3", "avatar_url": "https://avatars.githubusercontent.com/u/42714627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meminens", "html_url": "https://github.com/meminens", "followers_url": "https://api.github.com/users/mem...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-01-30T16:10:19
2024-01-31T21:15:19
2024-01-31T21:15:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have installed ollama (v0.1.22) and ROCm (v5.7.1) on Arch Linux via the following commands ``` pacman -S ollama rocm-hip-sdk rocm-opencl-sdk clblast systemctl daemon-reload systemctl enable ollama.service systemctl start ollama.service ``` and then ran `ollama run mistral`. Checking `htop` and `nvtop`, ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2277/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7874/comments
https://api.github.com/repos/ollama/ollama/issues/7874/events
https://github.com/ollama/ollama/issues/7874
2,702,895,284
I_kwDOJ0Z1Ps6hGui0
7,874
Provide a slash command to clear screen.
{ "login": "231tr0n", "id": 56352048, "node_id": "MDQ6VXNlcjU2MzUyMDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/56352048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/231tr0n", "html_url": "https://github.com/231tr0n", "followers_url": "https://api.github.com/users/231tr0...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-11-28T17:54:54
2024-11-28T17:58:59
2024-11-28T17:56:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! Is it possible to provide a slash command to clear the screen of the terminal chat window that appears when running a model with `ollama run`?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7874/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/726
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/726/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/726/comments
https://api.github.com/repos/ollama/ollama/issues/726/events
https://github.com/ollama/ollama/issues/726
1,931,195,476
I_kwDOJ0Z1Ps5zG7RU
726
Currently create is recreating the whole model, how to just update?
{ "login": "v3ss0n", "id": 419606, "node_id": "MDQ6VXNlcjQxOTYwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/419606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v3ss0n", "html_url": "https://github.com/v3ss0n", "followers_url": "https://api.github.com/users/v3ss0n/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2023-10-07T05:18:26
2023-10-28T19:26:48
2023-10-28T19:26:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am tuning parameters and I have to re-run `ollama create name modelfile`. That remakes the model. If there are no model file changes, can I just update the parameters?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/726/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2926
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2926/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2926/comments
https://api.github.com/repos/ollama/ollama/issues/2926/events
https://github.com/ollama/ollama/pull/2926
2,167,878,027
PR_kwDOJ0Z1Ps5opVJh
2,926
refactor model parsing
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-03-04T22:12:49
2024-04-01T20:58:14
2024-04-01T20:58:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2926", "html_url": "https://github.com/ollama/ollama/pull/2926", "diff_url": "https://github.com/ollama/ollama/pull/2926.diff", "patch_url": "https://github.com/ollama/ollama/pull/2926.patch", "merged_at": "2024-04-01T20:58:13" }
- refactor metadata
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2926/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8496
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8496/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8496/comments
https://api.github.com/repos/ollama/ollama/issues/8496/events
https://github.com/ollama/ollama/issues/8496
2,798,197,635
I_kwDOJ0Z1Ps6myRuD
8,496
Requesting this new multimodal model.
{ "login": "Ramachandra-2k96", "id": 149596008, "node_id": "U_kgDOCOqnaA", "avatar_url": "https://avatars.githubusercontent.com/u/149596008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ramachandra-2k96", "html_url": "https://github.com/Ramachandra-2k96", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2025-01-20T05:06:01
2025-01-23T19:16:03
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add the openbmb/MiniCPM-o-2_6 Model. https://huggingface.co/openbmb/MiniCPM-o-2_6
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8496/reactions", "total_count": 9, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/8496/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7832
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7832/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7832/comments
https://api.github.com/repos/ollama/ollama/issues/7832/events
https://github.com/ollama/ollama/pull/7832
2,692,423,898
PR_kwDOJ0Z1Ps6DGnAU
7,832
Update README.md
{ "login": "jake83741", "id": 125723241, "node_id": "U_kgDOB35iaQ", "avatar_url": "https://avatars.githubusercontent.com/u/125723241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jake83741", "html_url": "https://github.com/jake83741", "followers_url": "https://api.github.com/users/jake83...
[]
closed
false
null
[]
null
1
2024-11-25T22:20:16
2024-11-26T02:39:37
2024-11-26T01:56:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7832", "html_url": "https://github.com/ollama/ollama/pull/7832", "diff_url": "https://github.com/ollama/ollama/pull/7832.diff", "patch_url": "https://github.com/ollama/ollama/pull/7832.patch", "merged_at": "2024-11-26T01:56:30" }
Hi, This is an update to the description of my Discord bot project, [vnc-lm](https://github.com/jake83741/vnc-lm), in the README. Thank you, Jake
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7832/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7523
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7523/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7523/comments
https://api.github.com/repos/ollama/ollama/issues/7523/events
https://github.com/ollama/ollama/issues/7523
2,637,275,073
I_kwDOJ0Z1Ps6dMZ_B
7,523
[FEAT] Add ollama installation selection
{ "login": "rong-xiaoli", "id": 58361774, "node_id": "MDQ6VXNlcjU4MzYxNzc0", "avatar_url": "https://avatars.githubusercontent.com/u/58361774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rong-xiaoli", "html_url": "https://github.com/rong-xiaoli", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-11-06T07:33:58
2024-11-06T11:45:31
2024-11-06T11:45:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
My computer has only 46 MB left on my `C:` drive and the ollama installation program fails to install. I tried my best to free more space but could not. The installation program does not seem to support installation path selection, so I suggest adding a path selection step before installation.
{ "login": "rong-xiaoli", "id": 58361774, "node_id": "MDQ6VXNlcjU4MzYxNzc0", "avatar_url": "https://avatars.githubusercontent.com/u/58361774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rong-xiaoli", "html_url": "https://github.com/rong-xiaoli", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7523/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8288
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8288/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8288/comments
https://api.github.com/repos/ollama/ollama/issues/8288/events
https://github.com/ollama/ollama/issues/8288
2,765,953,153
I_kwDOJ0Z1Ps6k3RiB
8,288
Enable auto-save functionality via CLI flag
{ "login": "migueltorrescosta", "id": 6451658, "node_id": "MDQ6VXNlcjY0NTE2NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6451658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/migueltorrescosta", "html_url": "https://github.com/migueltorrescosta", "followers_url": "https:/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
1
2025-01-02T12:30:26
2025-01-08T18:01:09
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Having ollama models keep track of past conversations improves the quality of answers dramatically. Using `/save <model>` manually with every run is time-consuming and prone to being forgotten. Allowing a flag similar to `ollama run --auto-save-on-exit <model>` would make this process a lot smoother. The solutions propos...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8288/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5425
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5425/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5425/comments
https://api.github.com/repos/ollama/ollama/issues/5425/events
https://github.com/ollama/ollama/issues/5425
2,385,008,360
I_kwDOJ0Z1Ps6OKFbo
5,425
Does having the default quant type being Q4_0 (a legacy format) on the model hub still make sense?
{ "login": "sammcj", "id": 862951, "node_id": "MDQ6VXNlcjg2Mjk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sammcj", "html_url": "https://github.com/sammcj", "followers_url": "https://api.github.com/users/sammcj/follow...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-07-02T01:28:02
2024-12-29T19:18:56
2024-12-29T19:18:56
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The Ollama model hub still uses Q4_0 as the default quant type, a legacy format that under-performs compared to K-quants (Qn_K, e.g. Q4_K_M, Q6_K, Q5_K_L, etc.). - Would it perhaps make sense to change the default quant to Q4_K_M for future models uploaded to the hub? Reference - https://github.com/gge...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5425/reactions", "total_count": 22, "+1": 22, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5425/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5653
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5653/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5653/comments
https://api.github.com/repos/ollama/ollama/issues/5653/events
https://github.com/ollama/ollama/pull/5653
2,406,177,059
PR_kwDOJ0Z1Ps51QMhb
5,653
template: preprocess message and collect system
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-07-12T19:02:17
2024-07-12T19:32:36
2024-07-12T19:32:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5653", "html_url": "https://github.com/ollama/ollama/pull/5653", "diff_url": "https://github.com/ollama/ollama/pull/5653.diff", "patch_url": "https://github.com/ollama/ollama/pull/5653.patch", "merged_at": "2024-07-12T19:32:34" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5653/timeline
null
null
true