Each record has 33 columns; dtypes and value statistics are as reported by the dataset viewer:

| column | dtype | stats |
| --- | --- | --- |
| url | string | lengths 51–54 |
| repository_url | string | 1 value |
| labels_url | string | lengths 65–68 |
| comments_url | string | lengths 60–63 |
| events_url | string | lengths 58–61 |
| html_url | string | lengths 39–44 |
| id | int64 | min 1.78B, max 2.82B |
| node_id | string | lengths 18–19 |
| number | int64 | min 1, max 8.69k |
| title | string | lengths 1–382 |
| user | dict | |
| labels | list | lengths 0–5 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–2 |
| milestone | null | |
| comments | int64 | min 0, max 323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 values |
| is_pull_request | bool | 2 classes |
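Before the preview rows, a quick orientation: the sketch below loads rows with this schema via the Hugging Face `datasets` library and splits issues from pull requests using the `is_pull_request` flag. The dataset ID is a placeholder, since this card does not name the hosting repository.

```python
# Minimal sketch, assuming these rows are published as a Hugging Face dataset.
# "your-org/ollama-github-issues" is a hypothetical ID, not the real repository.
from datasets import load_dataset

ds = load_dataset("your-org/ollama-github-issues", split="train")

# `is_pull_request` distinguishes plain issues from pull requests in this schema.
issues = ds.filter(lambda row: not row["is_pull_request"])
pulls = ds.filter(lambda row: row["is_pull_request"])

print(f"{len(ds)} rows: {len(issues)} issues, {len(pulls)} pull requests")
print(issues[0]["number"], issues[0]["title"], issues[0]["state"])
```

The preview rows below are rendered one field per line, in the schema order above; values truncated by the viewer ("...") are kept as-is.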

**Row 1 · issue #3639**

url: https://api.github.com/repos/ollama/ollama/issues/3639
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3639/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3639/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3639/events
html_url: https://github.com/ollama/ollama/issues/3639
id: 2,242,233,698
node_id: I_kwDOJ0Z1Ps6FpcVi
number: 3,639
title: MacOS not saving 0.0.0.0 between hardware restarts
user: { "login": "gwthompson", "id": 177971, "node_id": "MDQ6VXNlcjE3Nzk3MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/177971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwthompson", "html_url": "https://github.com/gwthompson", "followers_url": "https://api.github.com/users/g...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A...
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 8
created_at: 2024-04-14T15:57:39
updated_at: 2024-11-06T17:36:06
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? When I set launchctl setenv OLLAMA_HOST "0.0.0.0" and restart the Ollama ap everything works as expected and I can access the API from other devices on my network. However when I reboot my Mac the OLLAMA_HOST reverts back to 127.0.0.1 and I have to run launchctl setenv OLLAMA_HOST "0.0.0.0" ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3639/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

**Row 2 · issue #7792**

url: https://api.github.com/repos/ollama/ollama/issues/7792
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7792/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7792/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7792/events
html_url: https://github.com/ollama/ollama/issues/7792
id: 2,682,300,269
node_id: I_kwDOJ0Z1Ps6f4Kdt
number: 7,792
title: Mistral Large instruct template
user: { "login": "nicho2", "id": 11471811, "node_id": "MDQ6VXNlcjExNDcxODEx", "avatar_url": "https://avatars.githubusercontent.com/u/11471811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicho2", "html_url": "https://github.com/nicho2", "followers_url": "https://api.github.com/users/nicho2/fo...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
milestone: null
comments: 2
created_at: 2024-11-22T08:18:55
updated_at: 2024-11-23T21:03:49
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Hello , Ollama seems no apply a good template for Mistral Large: https://huggingface.co/mistralai/Mistral-Large-Instruct-2411#basic-instruct-template-v7 mistral gives a SYSTEM_PROMPT token not apply: <s>[SYSTEM_PROMPT] <system prompt>[/SYSTEM_PROMPT][INST] <user message>[/INST] ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7792/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

**Row 3 · issue #1309**

url: https://api.github.com/repos/ollama/ollama/issues/1309
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1309/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1309/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1309/events
html_url: https://github.com/ollama/ollama/issues/1309
id: 2,015,536,052
node_id: I_kwDOJ0Z1Ps54IqO0
number: 1,309
title: [WSL2] Cuda error 222 : the provided PTX was compiled with an unsupported toolchain.
user: { "login": "fxrobin", "id": 16342334, "node_id": "MDQ6VXNlcjE2MzQyMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/16342334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxrobin", "html_url": "https://github.com/fxrobin", "followers_url": "https://api.github.com/users/fxrobi...
labels: [ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 6
created_at: 2023-11-29T00:02:01
updated_at: 2024-03-12T16:18:02
closed_at: 2024-03-12T16:17:58
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: On Windows WSL2, with Cuda Toolkit Installed and Cuda-Container-Toolkit installed, I'm facing this issue running the official Docker image : ``` ollama-ollama-1 | 2023/11/29 00:36:04 llama.go:292: 3676 MB VRAM available, loading up to 21 GPU layers ollama-ollama-1 | 2023/11/29 00:36:04 llama.go:421: starting...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1309/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 4 · issue #2825**

url: https://api.github.com/repos/ollama/ollama/issues/2825
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2825/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2825/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2825/events
html_url: https://github.com/ollama/ollama/issues/2825
id: 2,160,283,652
node_id: I_kwDOJ0Z1Ps6Aw1AE
number: 2,825
title: CPU does not have AVX or AVX2, disabling GPU support.
user: { "login": "mingyue0094", "id": 63558866, "node_id": "MDQ6VXNlcjYzNTU4ODY2", "avatar_url": "https://avatars.githubusercontent.com/u/63558866?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mingyue0094", "html_url": "https://github.com/mingyue0094", "followers_url": "https://api.github.com/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-02-29T03:32:54
updated_at: 2024-03-01T17:45:47
closed_at: 2024-03-01T17:45:46
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I can enable GPU using pytorch. But using ollama, the above log is displayed. I would like to ask if it can support GPU. In CPU “does not have AVX or AVX2” ``` time=2024-02-29T11:21:58.722+08:00 level=INFO source=images.go:710 msg="total blobs: 5" time=2024-02-29T11:21:58.752+08:00 level=INFO source=images.go:71...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2825/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 5 · issue #7195**

url: https://api.github.com/repos/ollama/ollama/issues/7195
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7195/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7195/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7195/events
html_url: https://github.com/ollama/ollama/issues/7195
id: 2,584,627,655
node_id: I_kwDOJ0Z1Ps6aDknH
number: 7,195
title: How to set up a local ollama.com/library service (translated from Chinese)
user: { "login": "czhcc", "id": 4754730, "node_id": "MDQ6VXNlcjQ3NTQ3MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czhcc", "html_url": "https://github.com/czhcc", "followers_url": "https://api.github.com/users/czhcc/follower...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-10-14T03:19:11
updated_at: 2024-10-16T00:04:30
closed_at: 2024-10-16T00:04:29
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? (translated from Chinese) Can I set up a local ollama.com/library service? When I use a Modelfile whose content is FROM http://19.18.5.127/temp/myqwen7b.gguf, I get the error Error: pull model manifest: Get "http://19.18.5.127/v2/temp/myqwen7b.gguf/manifests/latest": EOF How do I provide a local pull service? ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ###...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7195/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 6 · pull request #6662**

url: https://api.github.com/repos/ollama/ollama/issues/6662
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6662/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6662/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6662/events
html_url: https://github.com/ollama/ollama/pull/6662
id: 2,508,748,864
node_id: PR_kwDOJ0Z1Ps56k9iG
number: 6,662
title: Revert "Detect running in a container"
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-09-05T21:20:37
updated_at: 2024-09-05T21:26:01
closed_at: 2024-09-05T21:26:00
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6662", "html_url": "https://github.com/ollama/ollama/pull/6662", "diff_url": "https://github.com/ollama/ollama/pull/6662.diff", "patch_url": "https://github.com/ollama/ollama/pull/6662.patch", "merged_at": "2024-09-05T21:26:00" }
body: Reverts ollama/ollama#6495 Turns out this doesn't actually work on many platforms, so it doesn't serve much purpose.
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6662/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 7 · pull request #10**

url: https://api.github.com/repos/ollama/ollama/issues/10
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/10/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/10/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/10/events
html_url: https://github.com/ollama/ollama/pull/10
id: 1,779,294,455
node_id: PR_kwDOJ0Z1Ps5UKP1q
number: 10
title: add with symlink
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2023-06-28T16:26:42
updated_at: 2023-06-30T18:54:25
closed_at: 2023-06-30T18:54:22
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/10", "html_url": "https://github.com/ollama/ollama/pull/10", "diff_url": "https://github.com/ollama/ollama/pull/10.diff", "patch_url": "https://github.com/ollama/ollama/pull/10.patch", "merged_at": null }
body: null
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/10/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/10/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 8 · issue #8045**

url: https://api.github.com/repos/ollama/ollama/issues/8045
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8045/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8045/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8045/events
html_url: https://github.com/ollama/ollama/issues/8045
id: 2,732,812,651
node_id: I_kwDOJ0Z1Ps6i42lr
number: 8,045
title: Ollama run hf.co - Error 401: Invalid username or password
user: { "login": "bengrau", "id": 62591521, "node_id": "MDQ6VXNlcjYyNTkxNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/62591521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bengrau", "html_url": "https://github.com/bengrau", "followers_url": "https://api.github.com/users/bengra...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-12-11T12:12:35
updated_at: 2025-01-02T14:23:32
closed_at: 2024-12-20T22:13:03
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I am using a private model on hf and try to run it like this: ``` huggingface-cli login --token hf_xxx ollama run hf.co/BGR/Llama-3.2-1B-I-p:latest ``` However I get this error from ollama: ``` pulling manifest Error: pull model manifest: 401: {"error":"Invalid username or password...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8045/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 9 · issue #6597**

url: https://api.github.com/repos/ollama/ollama/issues/6597
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6597/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6597/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6597/events
html_url: https://github.com/ollama/ollama/issues/6597
id: 2,501,614,040
node_id: I_kwDOJ0Z1Ps6VG5nY
number: 6,597
title: RPI with armhrf architecture support
user: { "login": "alecrimi", "id": 16406658, "node_id": "MDQ6VXNlcjE2NDA2NjU4", "avatar_url": "https://avatars.githubusercontent.com/u/16406658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alecrimi", "html_url": "https://github.com/alecrimi", "followers_url": "https://api.github.com/users/ale...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7700262114, "node_id": ...
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-09-02T21:48:04
updated_at: 2024-11-04T19:20:15
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I followed all possible guide online, either using curl and your install.sh, the docker, or the snap package and I could manage to install ollama, not clear how to compile the code (there is no configure file). The main issue is that everyzthing has been prepared for arm64 and not for my archit...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6597/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

**Row 10 · issue #7100**

url: https://api.github.com/repos/ollama/ollama/issues/7100
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7100/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7100/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7100/events
html_url: https://github.com/ollama/ollama/issues/7100
id: 2,565,820,022
node_id: I_kwDOJ0Z1Ps6Y7052
number: 7,100
title: mixtral:8x22b model does not work with system prompt only
user: { "login": "gakugaku", "id": 14232275, "node_id": "MDQ6VXNlcjE0MjMyMjc1", "avatar_url": "https://avatars.githubusercontent.com/u/14232275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gakugaku", "html_url": "https://github.com/gakugaku", "followers_url": "https://api.github.com/users/gak...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-10-04T08:46:37
updated_at: 2024-10-24T05:08:19
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? The `mixtral:8x22b-instruct` model does not work correctly when only the system prompt is provided. In such cases, an empty prompt is sent, leading to irrelevant output. This behavior may be related to the internal handling of prompts or recent changes made in the system prompt handling, as...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7100/timeline
performed_via_github_app: null
state_reason: reopened
is_pull_request: false

**Row 11 · pull request #629**

url: https://api.github.com/repos/ollama/ollama/issues/629
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/629/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/629/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/629/events
html_url: https://github.com/ollama/ollama/pull/629
id: 1,916,664,927
node_id: PR_kwDOJ0Z1Ps5bZf4X
number: 629
title: Update modelfile.md to reflect the usage of num_gpu.
user: { "login": "aaroncoffey", "id": 3649791, "node_id": "MDQ6VXNlcjM2NDk3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3649791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaroncoffey", "html_url": "https://github.com/aaroncoffey", "followers_url": "https://api.github.com/us...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-09-28T04:09:38
updated_at: 2023-09-28T14:21:21
closed_at: 2023-09-28T14:21:21
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/629", "html_url": "https://github.com/ollama/ollama/pull/629", "diff_url": "https://github.com/ollama/ollama/pull/629.diff", "patch_url": "https://github.com/ollama/ollama/pull/629.patch", "merged_at": "2023-09-28T14:21:21" }
body: The current docs for the parameter num_gpu are inaccurate for linux. Ref: https://github.com/jmorganca/ollama/issues/618
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/629/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 12 · issue #1722**

url: https://api.github.com/repos/ollama/ollama/issues/1722
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1722/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1722/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1722/events
html_url: https://github.com/ollama/ollama/issues/1722
id: 2,056,612,248
node_id: I_kwDOJ0Z1Ps56lWmY
number: 1,722
title: How to update a model in a timely manner?
user: { "login": "PriyaranjanMaratheDish", "id": 133165012, "node_id": "U_kgDOB-_v1A", "avatar_url": "https://avatars.githubusercontent.com/u/133165012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PriyaranjanMaratheDish", "html_url": "https://github.com/PriyaranjanMaratheDish", "followers_url...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 5
created_at: 2023-12-26T18:17:45
updated_at: 2024-03-12T22:00:18
closed_at: 2024-03-12T22:00:18
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: So here is what I am trying to do - 1)Create a custom Ollama model by giving it data exported from Snowflake database tables. Data in Snowflake tables is already in a Golden Format. Have additional follow up questions on my requirement - A)Instead of creating the model using -f (file with data exported from Sno...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1722/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1722/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 13 · pull request #7205**

url: https://api.github.com/repos/ollama/ollama/issues/7205
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7205/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7205/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7205/events
html_url: https://github.com/ollama/ollama/pull/7205
id: 2,587,277,772
node_id: PR_kwDOJ0Z1Ps5-moX0
number: 7,205
title: Clear screen when `/clear` command is used in interactive mode
user: { "login": "suyogdahal", "id": 41914389, "node_id": "MDQ6VXNlcjQxOTE0Mzg5", "avatar_url": "https://avatars.githubusercontent.com/u/41914389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suyogdahal", "html_url": "https://github.com/suyogdahal", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-10-14T23:37:08
updated_at: 2024-11-04T17:48:11
closed_at: 2024-11-04T17:48:11
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7205", "html_url": "https://github.com/ollama/ollama/pull/7205", "diff_url": "https://github.com/ollama/ollama/pull/7205.diff", "patch_url": "https://github.com/ollama/ollama/pull/7205.patch", "merged_at": null }
body: Use ANSI escape codes to clear the terminal and reset the cursor's position with the `/clear` command.
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7205/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 14 · issue #2614**

url: https://api.github.com/repos/ollama/ollama/issues/2614
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2614/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2614/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2614/events
html_url: https://github.com/ollama/ollama/issues/2614
id: 2,144,424,882
node_id: I_kwDOJ0Z1Ps5_0VOy
number: 2,614
title: AutoModelForCausalLM and .ollama/models
user: { "login": "Demirrr", "id": 13405667, "node_id": "MDQ6VXNlcjEzNDA1NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/13405667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Demirrr", "html_url": "https://github.com/Demirrr", "followers_url": "https://api.github.com/users/Demirr...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-02-20T13:47:54
updated_at: 2025-01-06T19:37:55
closed_at: 2024-02-20T18:52:43
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Can we create an instance of `AutoModelForCausalLM` from downloaded language models `~/.ollama/models`? By this, the finetunning and using finetuned model via ollama would be easier. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoT...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2614/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 15 · issue #5395**

url: https://api.github.com/repos/ollama/ollama/issues/5395
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5395/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5395/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5395/events
html_url: https://github.com/ollama/ollama/issues/5395
id: 2,382,431,501
node_id: I_kwDOJ0Z1Ps6OAQUN
number: 5,395
title: CUBLAS_STATUS_ALLOC_FAILED with deepseek-coder-v2:16b
user: { "login": "hgourvest", "id": 1659652, "node_id": "MDQ6VXNlcjE2NTk2NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1659652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hgourvest", "html_url": "https://github.com/hgourvest", "followers_url": "https://api.github.com/users/hg...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
state: open
locked: false
assignee: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
assignees: [ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
milestone: null
comments: 11
created_at: 2024-06-30T20:30:04
updated_at: 2025-01-26T15:24:21
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? when running deepseek-coder-v2:16b on NVIDIA GeForce RTX 3080 Laptop GPU, I have this crash report: ``` Error: llama runner process has terminated: signal: aborted (core dumped) CUDA error: CUBLAS_STATUS_ALLOC_FAILED current device: 0, in function cublas_handle at /go/src/github.com/ollama/...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5395/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

**Row 16 · issue #1142**

url: https://api.github.com/repos/ollama/ollama/issues/1142
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1142/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1142/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1142/events
html_url: https://github.com/ollama/ollama/issues/1142
id: 1,995,407,192
node_id: I_kwDOJ0Z1Ps52739Y
number: 1,142
title: Add support for llamacpp min_p sampler
user: { "login": "JoseConseco", "id": 13521338, "node_id": "MDQ6VXNlcjEzNTIxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/13521338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoseConseco", "html_url": "https://github.com/JoseConseco", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 2023-11-15T19:28:55
updated_at: 2024-07-27T21:37:42
closed_at: 2024-07-27T21:37:41
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: https://github.com/ggerganov/llama.cpp/pull/3841 ![obraz](https://github.com/jmorganca/ollama/assets/13521338/26509c9f-31a1-4544-8d8b-f3418e73a06c) It supposed to give better results compared to top_k, top_p. I tried to add this min_p - parameter to llama options, but it was unrecognized.
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1142/reactions", "total_count": 15, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1142/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 17 · issue #2555**

url: https://api.github.com/repos/ollama/ollama/issues/2555
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2555/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2555/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2555/events
html_url: https://github.com/ollama/ollama/issues/2555
id: 2,139,747,300
node_id: I_kwDOJ0Z1Ps5_ifPk
number: 2,555
title: `EOF` error on `/api/chat` or `/api/generate`
user: { "login": "saamerm", "id": 8262287, "node_id": "MDQ6VXNlcjgyNjIyODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8262287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saamerm", "html_url": "https://github.com/saamerm", "followers_url": "https://api.github.com/users/saamerm/...
labels: []
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 40
created_at: 2024-02-17T02:11:48
updated_at: 2024-04-15T22:26:31
closed_at: 2024-04-15T22:26:30
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: * Upon running `ollama run dolphin-phi` on a Linux (works fine on Mac), I get this error `Error: Post "http://127.0.0.1:11434/api/chat": EOF`. * It seems to have installed successfully too, but it just seems like there's some error in the starting of the server? * I tried to add a --v for a more verbose understandin...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2555/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 18 · issue #8564**

url: https://api.github.com/repos/ollama/ollama/issues/8564
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8564/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8564/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8564/events
html_url: https://github.com/ollama/ollama/issues/8564
id: 2,809,178,195
node_id: I_kwDOJ0Z1Ps6ncKhT
number: 8,564
title: Error: server metal not listed in available servers map
user: { "login": "felix021", "id": 367085, "node_id": "MDQ6VXNlcjM2NzA4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/367085?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix021", "html_url": "https://github.com/felix021", "followers_url": "https://api.github.com/users/felix02...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2025-01-24T11:11:50
updated_at: 2025-01-26T02:26:54
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I downloaded Ollama today on my Macbook (Apple M3 Pro, with MacOS Sonoma 14.3 23D56), and tried to run deepseek-r1:8b, but ollama failed with this error: > $ ollama run deepseek-r1:8b > Error: [0] server metal not listed in available servers map[] p.s. I can run this model with llama-cli on th...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8564/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false

**Row 19 · issue #7507**

url: https://api.github.com/repos/ollama/ollama/issues/7507
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7507/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7507/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7507/events
html_url: https://github.com/ollama/ollama/issues/7507
id: 2,635,203,425
node_id: I_kwDOJ0Z1Ps6dEgNh
number: 7,507
title: OLLAMA_VERSION for pre-release doesn't work
user: { "login": "ExposedCat", "id": 44642024, "node_id": "MDQ6VXNlcjQ0NjQyMDI0", "avatar_url": "https://avatars.githubusercontent.com/u/44642024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ExposedCat", "html_url": "https://github.com/ExposedCat", "followers_url": "https://api.github.com/use...
labels: [ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVw...
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-11-05T11:33:14
updated_at: 2024-11-05T16:33:52
closed_at: 2024-11-05T16:33:37
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? According to docs, this should download even pre-release versions: ```bash curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=X.Y.Z sh ``` However, it fails with `404` for `0.4.0` which is a pre-release version (latest stable works) ### OS Linux ### GPU AMD ### CPU AMD ### Olla...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7507/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 20 · pull request #7325**

url: https://api.github.com/repos/ollama/ollama/issues/7325
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7325/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7325/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7325/events
html_url: https://github.com/ollama/ollama/pull/7325
id: 2,606,736,159
node_id: PR_kwDOJ0Z1Ps5_hmCU
number: 7,325
title: added ollamarama-matrix to community integrations
user: { "login": "h1ddenpr0cess20", "id": 127710567, "node_id": "U_kgDOB5y1Zw", "avatar_url": "https://avatars.githubusercontent.com/u/127710567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h1ddenpr0cess20", "html_url": "https://github.com/h1ddenpr0cess20", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-10-22T23:26:47
updated_at: 2024-11-22T01:49:30
closed_at: 2024-11-22T01:49:30
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7325", "html_url": "https://github.com/ollama/ollama/pull/7325", "diff_url": "https://github.com/ollama/ollama/pull/7325.diff", "patch_url": "https://github.com/ollama/ollama/pull/7325.patch", "merged_at": "2024-11-22T01:49:30" }
body: null
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7325/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 21 · pull request #537**

url: https://api.github.com/repos/ollama/ollama/issues/537
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/537/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/537/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/537/events
html_url: https://github.com/ollama/ollama/pull/537
id: 1,899,174,019
node_id: PR_kwDOJ0Z1Ps5aewPf
number: 537
title: fix error on upload chunk
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-09-15T22:59:52
updated_at: 2023-09-16T00:48:40
closed_at: 2023-09-16T00:48:40
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/537", "html_url": "https://github.com/ollama/ollama/pull/537", "diff_url": "https://github.com/ollama/ollama/pull/537.diff", "patch_url": "https://github.com/ollama/ollama/pull/537.patch", "merged_at": "2023-09-16T00:48:40" }
body: null
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/537/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 22 · pull request #1350**

url: https://api.github.com/repos/ollama/ollama/issues/1350
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1350/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1350/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1350/events
html_url: https://github.com/ollama/ollama/pull/1350
id: 2,021,761,364
node_id: PR_kwDOJ0Z1Ps5g8h4X
number: 1,350
title: make linewrap still work when the terminal width has changed
user: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2023-12-02T00:18:40
updated_at: 2023-12-04T22:14:57
closed_at: 2023-12-04T22:14:56
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1350", "html_url": "https://github.com/ollama/ollama/pull/1350", "diff_url": "https://github.com/ollama/ollama/pull/1350.diff", "patch_url": "https://github.com/ollama/ollama/pull/1350.patch", "merged_at": "2023-12-04T22:14:56" }
body: null
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1350/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 23 · pull request #4188**

url: https://api.github.com/repos/ollama/ollama/issues/4188
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4188/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4188/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4188/events
html_url: https://github.com/ollama/ollama/pull/4188
id: 2,279,827,184
node_id: PR_kwDOJ0Z1Ps5ulNf2
number: 4,188
title: User our bundled libraries (cuda) instead of the host library
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-05-06T00:47:21
updated_at: 2024-05-06T21:41:16
closed_at: 2024-05-06T21:41:05
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/4188", "html_url": "https://github.com/ollama/ollama/pull/4188", "diff_url": "https://github.com/ollama/ollama/pull/4188.diff", "patch_url": "https://github.com/ollama/ollama/pull/4188.patch", "merged_at": "2024-05-06T21:41:05" }
body: Trying to live off the land for cuda libraries was not the right strategy. We need to use the version we compiled against to ensure things work properly. This is most likely going to break Jetson v11 systems, but it turns out the change to favor host cuda libraries is breaking quite a few users.
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4188/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 24 · pull request #8076**

url: https://api.github.com/repos/ollama/ollama/issues/8076
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8076/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8076/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8076/events
html_url: https://github.com/ollama/ollama/pull/8076
id: 2,736,874,840
node_id: PR_kwDOJ0Z1Ps6FEhL9
number: 8,076
title: api: return structured error on unauthorized push
user: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-12-12T21:02:29
updated_at: 2024-12-19T01:42:09
closed_at: 2024-12-19T01:42:08
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8076", "html_url": "https://github.com/ollama/ollama/pull/8076", "diff_url": "https://github.com/ollama/ollama/pull/8076.diff", "patch_url": "https://github.com/ollama/ollama/pull/8076.patch", "merged_at": null }
body: This commit implements a structured error response system for the Ollama API, replacing ad-hoc error handling and string parsing with proper error types and codes. The key changes include: 1. Creation of a new `errors.go` file defining structured error types and codes 2. Introduction of `ErrorResponse` struct with ...
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8076/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 25 · issue #2047**

url: https://api.github.com/repos/ollama/ollama/issues/2047
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2047/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2047/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2047/events
html_url: https://github.com/ollama/ollama/issues/2047
id: 2,088,134,294
node_id: I_kwDOJ0Z1Ps58dmaW
number: 2,047
title: ollama run stable-code
user: { "login": "JiangZongKang", "id": 22634440, "node_id": "MDQ6VXNlcjIyNjM0NDQw", "avatar_url": "https://avatars.githubusercontent.com/u/22634440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiangZongKang", "html_url": "https://github.com/JiangZongKang", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-01-18T11:56:31
updated_at: 2024-02-07T01:10:30
closed_at: 2024-02-07T01:10:30
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: The command does not produce any response when executed on a Mac. ![CleanShot 2024-01-18 at 19 56 17@2x](https://github.com/jmorganca/ollama/assets/22634440/f423f706-10b1-496a-bb8e-50a85afbea6b)
closed_by: { "login": "JiangZongKang", "id": 22634440, "node_id": "MDQ6VXNlcjIyNjM0NDQw", "avatar_url": "https://avatars.githubusercontent.com/u/22634440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiangZongKang", "html_url": "https://github.com/JiangZongKang", "followers_url": "https://api.githu...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2047/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 26 · pull request #2866**

url: https://api.github.com/repos/ollama/ollama/issues/2866
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2866/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2866/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2866/events
html_url: https://github.com/ollama/ollama/pull/2866
id: 2,163,669,751
node_id: PR_kwDOJ0Z1Ps5obHs7
number: 2,866
title: chore: update readme, add open-webui
user: { "login": "longregen", "id": 114724657, "node_id": "U_kgDOBtaPMQ", "avatar_url": "https://avatars.githubusercontent.com/u/114724657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/longregen", "html_url": "https://github.com/longregen", "followers_url": "https://api.github.com/users/longre...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-03-01T15:44:59
updated_at: 2024-03-09T22:24:46
closed_at: 2024-03-09T22:24:46
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/2866", "html_url": "https://github.com/ollama/ollama/pull/2866", "diff_url": "https://github.com/ollama/ollama/pull/2866.diff", "patch_url": "https://github.com/ollama/ollama/pull/2866.patch", "merged_at": null }
body: After testing most of these suggested frontends, "Open WebUI", formerly "ollama-webui", looks like the best open option for amateurs looking to self-host a frontend similar to OpenAI's ChatGPT interface.
closed_by: { "login": "longregen", "id": 114724657, "node_id": "U_kgDOBtaPMQ", "avatar_url": "https://avatars.githubusercontent.com/u/114724657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/longregen", "html_url": "https://github.com/longregen", "followers_url": "https://api.github.com/users/longre...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2866/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 27 · issue #5444**

url: https://api.github.com/repos/ollama/ollama/issues/5444
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5444/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5444/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5444/events
html_url: https://github.com/ollama/ollama/issues/5444
id: 2,387,196,468
node_id: I_kwDOJ0Z1Ps6OSbo0
number: 5,444
title: Ollama on Mac not free up space / what is equivalence of /usr/share/ollama/.ollama/models
user: { "login": "tomaszstachera", "id": 61825692, "node_id": "MDQ6VXNlcjYxODI1Njky", "avatar_url": "https://avatars.githubusercontent.com/u/61825692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaszstachera", "html_url": "https://github.com/tomaszstachera", "followers_url": "https://api.gi...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-07-02T21:25:47
updated_at: 2024-07-03T20:44:08
closed_at: 2024-07-03T20:44:08
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I've ran `ollama run llama3:70b` on Mac and CLI pulled 40GB of data that is not stored in ~/.ollama. `ollama list` shows no models. Where the heck is the data? How to clean it up? ### OS macOS ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.1.48
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5444/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

**Row 28 · pull request #7707**

url: https://api.github.com/repos/ollama/ollama/issues/7707
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7707/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7707/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7707/events
html_url: https://github.com/ollama/ollama/pull/7707
id: 2,666,149,630
node_id: PR_kwDOJ0Z1Ps6CKcMn
number: 7,707
title: Update README.md
user: { "login": "adarshM84", "id": 95633830, "node_id": "U_kgDOBbNBpg", "avatar_url": "https://avatars.githubusercontent.com/u/95633830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adarshM84", "html_url": "https://github.com/adarshM84", "followers_url": "https://api.github.com/users/adarshM8...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-11-17T16:51:57
updated_at: 2024-11-20T18:42:56
closed_at: 2024-11-20T18:42:56
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7707", "html_url": "https://github.com/ollama/ollama/pull/7707", "diff_url": "https://github.com/ollama/ollama/pull/7707.diff", "patch_url": "https://github.com/ollama/ollama/pull/7707.patch", "merged_at": "2024-11-20T18:42:56" }
body: This Chrome extension will help users interact with the UI. Users can download and delete models from the UI, along with many other features.
closed_by: { "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7707/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

**Row 29 · issue #6527**

url: https://api.github.com/repos/ollama/ollama/issues/6527
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6527/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6527/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6527/events
html_url: https://github.com/ollama/ollama/issues/6527
id: 2,489,702,822
node_id: I_kwDOJ0Z1Ps6UZdmm
number: 6,527
title: stella_en_400M_v5 model request
user: { "login": "raymond-infinitecode", "id": 4714784, "node_id": "MDQ6VXNlcjQ3MTQ3ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4714784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raymond-infinitecode", "html_url": "https://github.com/raymond-infinitecode", "followers_url":...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-08-27T15:22:55
updated_at: 2024-11-16T20:45:03
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Need help supporting https://hf.rst.im/dunzhang/stella_en_400M_v5 since we have also https://ollama.com/Losspost/stella_en_1.5b_v5
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6527/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6527/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
https://api.github.com/repos/ollama/ollama/issues/8298
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8298/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8298/comments
https://api.github.com/repos/ollama/ollama/issues/8298/events
https://github.com/ollama/ollama/pull/8298
2,768,088,834
PR_kwDOJ0Z1Ps6GsIFr
8,298
api: remove unused create fields
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2025-01-03T19:49:23
2025-01-03T20:04:00
2025-01-03T20:03:58
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8298", "html_url": "https://github.com/ollama/ollama/pull/8298", "diff_url": "https://github.com/ollama/ollama/pull/8298.diff", "patch_url": "https://github.com/ollama/ollama/pull/8298.patch", "merged_at": "2025-01-03T20:03:58" }
These fields are deprecated, and specifying them does nothing. The other deprecated fields still work, but these do not, so they don't match our existing pattern; removing them.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8298/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8334
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8334/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8334/comments
https://api.github.com/repos/ollama/ollama/issues/8334/events
https://github.com/ollama/ollama/pull/8334
2,772,419,184
PR_kwDOJ0Z1Ps6G6oQR
8,334
readme: add Reins to community integrations
{ "login": "ibrahimcetin", "id": 33904390, "node_id": "MDQ6VXNlcjMzOTA0Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/33904390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibrahimcetin", "html_url": "https://github.com/ibrahimcetin", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
0
2025-01-07T10:05:06
2025-01-07T10:05:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8334", "html_url": "https://github.com/ollama/ollama/pull/8334", "diff_url": "https://github.com/ollama/ollama/pull/8334.diff", "patch_url": "https://github.com/ollama/ollama/pull/8334.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8334/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8547
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8547/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8547/comments
https://api.github.com/repos/ollama/ollama/issues/8547/events
https://github.com/ollama/ollama/issues/8547
2,806,414,251
I_kwDOJ0Z1Ps6nRnur
8,547
deepseek-r1 `qwen` variants use a new pre-tokenizer, which is not implemented in the llama.cpp version used
{ "login": "sealad886", "id": 155285242, "node_id": "U_kgDOCUF2-g", "avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sealad886", "html_url": "https://github.com/sealad886", "followers_url": "https://api.github.com/users/sealad...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2025-01-23T09:42:59
2025-01-23T09:43:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The newly supported `deepseek-r1` model variants that have `distill-qwen` in the name use a new pre-tokenizer. Support for this has been added to the latest llama.cpp (not sure if the release version or just the latest commit on the main branch). The backend llama.cpp that Ollama uses should b...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8547/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8547/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5713
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5713/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5713/comments
https://api.github.com/repos/ollama/ollama/issues/5713/events
https://github.com/ollama/ollama/pull/5713
2,409,876,869
PR_kwDOJ0Z1Ps51crga
5,713
server: return empty slice on empty `/api/embed` request
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-16T00:19:47
2024-07-16T00:39:46
2024-07-16T00:39:45
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5713", "html_url": "https://github.com/ollama/ollama/pull/5713", "diff_url": "https://github.com/ollama/ollama/pull/5713.diff", "patch_url": "https://github.com/ollama/ollama/pull/5713.patch", "merged_at": "2024-07-16T00:39:45" }
Before: ``` curl http://localhost:11434/api/embed \ -H "Content-Type: application/json" \ -d '{ "input": "", "model": "all-minilm" }' {"model":"all-minilm"} ``` After: ``` curl http://localhost:11434/api/embed \ -H "Con...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5713/timeline
null
null
true
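The PR record above changes the `/api/embed` response for empty input. Below is a minimal Python sketch of the same check the curl commands in the body perform; it assumes a local Ollama server on the default port and that the `all-minilm` model is already pulled.

```python
# Minimal sketch (assumptions: a local Ollama server on the default port and
# the "all-minilm" embedding model already pulled).
import requests

resp = requests.post(
    "http://localhost:11434/api/embed",
    json={"model": "all-minilm", "input": ""},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
# After this change, an empty input should yield an explicit empty
# "embeddings" slice rather than a response with the field missing entirely.
print(data.get("embeddings", []))
```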
https://api.github.com/repos/ollama/ollama/issues/4725
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4725/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4725/comments
https://api.github.com/repos/ollama/ollama/issues/4725/events
https://github.com/ollama/ollama/pull/4725
2,326,020,262
PR_kwDOJ0Z1Ps5xB8GC
4,725
Make examples/go-chat iterative
{ "login": "w84miracle", "id": 1922754, "node_id": "MDQ6VXNlcjE5MjI3NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1922754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/w84miracle", "html_url": "https://github.com/w84miracle", "followers_url": "https://api.github.com/users...
[]
open
false
null
[]
null
1
2024-05-30T15:54:58
2024-06-05T12:47:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4725", "html_url": "https://github.com/ollama/ollama/pull/4725", "diff_url": "https://github.com/ollama/ollama/pull/4725.diff", "patch_url": "https://github.com/ollama/ollama/pull/4725.patch", "merged_at": null }
Aligned with the other languages' chat examples by making it iterative.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4725/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4672
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4672/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4672/comments
https://api.github.com/repos/ollama/ollama/issues/4672/events
https://github.com/ollama/ollama/pull/4672
2,320,025,163
PR_kwDOJ0Z1Ps5wtcbf
4,672
Add OllamaSpring Project to Readme
{ "login": "CrazyNeil", "id": 5747549, "node_id": "MDQ6VXNlcjU3NDc1NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5747549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CrazyNeil", "html_url": "https://github.com/CrazyNeil", "followers_url": "https://api.github.com/users/Cr...
[]
closed
false
null
[]
null
0
2024-05-28T02:55:13
2024-05-28T02:58:27
2024-05-28T02:58:27
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4672", "html_url": "https://github.com/ollama/ollama/pull/4672", "diff_url": "https://github.com/ollama/ollama/pull/4672.diff", "patch_url": "https://github.com/ollama/ollama/pull/4672.patch", "merged_at": "2024-05-28T02:58:27" }
Add OllamaSpring Project to Readme
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4672/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3263
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3263/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3263/comments
https://api.github.com/repos/ollama/ollama/issues/3263/events
https://github.com/ollama/ollama/issues/3263
2,196,880,108
I_kwDOJ0Z1Ps6C8brs
3,263
MiniCPM 2B Model add
{ "login": "GavinBF", "id": 18061367, "node_id": "MDQ6VXNlcjE4MDYxMzY3", "avatar_url": "https://avatars.githubusercontent.com/u/18061367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GavinBF", "html_url": "https://github.com/GavinBF", "followers_url": "https://api.github.com/users/GavinB...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-03-20T07:49:11
2024-06-09T17:11:38
2024-06-09T17:11:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? Can we add the MiniCPM model? https://github.com/OpenBMB/MiniCPM https://huggingface.co/collections/openbmb/minicpm-2b-65d48bf958302b9fd25b698f
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3263/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3263/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6197
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6197/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6197/comments
https://api.github.com/repos/ollama/ollama/issues/6197/events
https://github.com/ollama/ollama/issues/6197
2,450,613,273
I_kwDOJ0Z1Ps6SEWQZ
6,197
'FROM' is not recognized as an internal or external command, operable program or batch file.
{ "login": "LaksLaksman", "id": 152250473, "node_id": "U_kgDOCRMoaQ", "avatar_url": "https://avatars.githubusercontent.com/u/152250473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LaksLaksman", "html_url": "https://github.com/LaksLaksman", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-08-06T11:10:52
2024-08-06T11:14:44
2024-08-06T11:14:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
*'FROM' is not recognized as an internal or external command,* C:\Users\LaksmanP>FROM llama3.1 PARAMETER temperature 1 'FROM' is not recognized as an internal or external command, operable program or batch file. This message shows when I set a parameter after pulling the model (see the sketch after this record). ### OS Windows ### GPU ...
{ "login": "LaksLaksman", "id": 152250473, "node_id": "U_kgDOCRMoaQ", "avatar_url": "https://avatars.githubusercontent.com/u/152250473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LaksLaksman", "html_url": "https://github.com/LaksLaksman", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6197/timeline
null
completed
false
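The error in the record above comes from typing Modelfile directives directly into cmd.exe. Below is a minimal sketch of the intended workflow, assuming `ollama` is on PATH and `llama3.1` is already pulled: the directives go into a Modelfile that is passed to `ollama create`.

```python
# Minimal sketch: Modelfile directives such as FROM and PARAMETER belong in a
# file passed to `ollama create`, not typed at the Windows command prompt.
import subprocess
from pathlib import Path

Path("Modelfile").write_text("FROM llama3.1\nPARAMETER temperature 1\n")
subprocess.run(["ollama", "create", "llama3.1-temp1", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "llama3.1-temp1"], check=True)
```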
https://api.github.com/repos/ollama/ollama/issues/5909
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5909/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5909/comments
https://api.github.com/repos/ollama/ollama/issues/5909/events
https://github.com/ollama/ollama/issues/5909
2,427,434,752
I_kwDOJ0Z1Ps6Qr7cA
5,909
" Error: json: cannot unmarshal array into Go struct field Params.eos_token_id of type int " while importing llama 3.1 8B safetensor model from huggingface
{ "login": "SadeghPouriyanZadeh", "id": 74629673, "node_id": "MDQ6VXNlcjc0NjI5Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/74629673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SadeghPouriyanZadeh", "html_url": "https://github.com/SadeghPouriyanZadeh", "followers_url": ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[ { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api....
null
12
2024-07-24T12:15:53
2024-11-21T12:38:57
2024-09-02T00:19:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## What is the problem? I was importing the llama 3.1 8B model from huggingface (`meta-llama/Meta-Llama-3.1-8B-Instruct`) using `ollama create -f Modelfile` but I got this error: `Error: json: cannot unmarshal array into Go struct field Params.eos_token_id of type int` I found the shallow cau...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5909/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5909/timeline
null
completed
false
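The unmarshal error in the record above arises when a config field is an array where an integer is expected. Below is a hedged workaround sketch, not a fix endorsed by the project: it assumes the offending field lives in the checkpoint's `generation_config.json` and that keeping only the first token id is acceptable.

```python
# Hedged workaround sketch: collapse an array-valued eos_token_id to its first
# element so an importer that expects an int can parse the config.
# Assumptions: the file path below and that dropping the extra ids is OK.
import json

path = "generation_config.json"  # assumed location of the offending field
with open(path) as f:
    cfg = json.load(f)

if isinstance(cfg.get("eos_token_id"), list):
    cfg["eos_token_id"] = cfg["eos_token_id"][0]

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```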
https://api.github.com/repos/ollama/ollama/issues/8200
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8200/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8200/comments
https://api.github.com/repos/ollama/ollama/issues/8200/events
https://github.com/ollama/ollama/issues/8200
2,754,208,755
I_kwDOJ0Z1Ps6kKePz
8,200
Ollama hangs when running llama3.2 and llama3.2:1b
{ "login": "pr0fsmith", "id": 54153368, "node_id": "MDQ6VXNlcjU0MTUzMzY4", "avatar_url": "https://avatars.githubusercontent.com/u/54153368?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pr0fsmith", "html_url": "https://github.com/pr0fsmith", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-12-21T16:25:10
2025-01-13T01:45:06
2025-01-13T01:45:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After a while of using Ollama, the LLM becomes completely unresponsive and there's no CPU or GPU usage during that time. This happens with LLAMA3.2 and LLAMA3.2:1B. Here are the logs. ` Dec 21 00:37:03 olivi ollama[627]: [GIN] 2024/12/21 - 00:37:03 | 200 | 816.217µs | 172.17.0.2 | GE...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8200/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3550
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3550/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3550/comments
https://api.github.com/repos/ollama/ollama/issues/3550/events
https://github.com/ollama/ollama/issues/3550
2,232,717,804
I_kwDOJ0Z1Ps6FFJHs
3,550
ollama serve cannot detect GPU
{ "login": "g-makerr", "id": 71173795, "node_id": "MDQ6VXNlcjcxMTczNzk1", "avatar_url": "https://avatars.githubusercontent.com/u/71173795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/g-makerr", "html_url": "https://github.com/g-makerr", "followers_url": "https://api.github.com/users/g-m...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-04-09T06:45:11
2024-05-01T17:52:34
2024-04-13T16:11:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I run "ollama serve", but it reports "no GPU detected" and "[cudart] error looking up CUDART GPU memory: cudart device memory info lookup failure 2". There were no such problems four days ago. ![屏幕截图 2024-04-09 144347](https://github.com/ollama/ollama/assets/71173795/b9c853a7-e4be-42ec-b4b1...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3550/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2722
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2722/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2722/comments
https://api.github.com/repos/ollama/ollama/issues/2722/events
https://github.com/ollama/ollama/issues/2722
2,152,180,592
I_kwDOJ0Z1Ps6AR6tw
2,722
How can I specify the context window size using OpenAI compatible API?
{ "login": "egoist", "id": 8784712, "node_id": "MDQ6VXNlcjg3ODQ3MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8784712?v=4", "gravatar_id": "", "url": "https://api.github.com/users/egoist", "html_url": "https://github.com/egoist", "followers_url": "https://api.github.com/users/egoist/foll...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
4
2024-02-24T07:52:58
2024-07-23T11:16:12
2024-07-18T22:41:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I wonder if there's a way to apply a context window size through https://github.com/ollama/ollama/blob/main/docs/openai.md (see the sketch after this record).
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2722/timeline
null
completed
false
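The OpenAI-compatible endpoint asked about in the record above does not expose a context-size parameter directly, but Ollama's native API does. Below is a minimal sketch using `/api/chat` with `options.num_ctx`, assuming a local server and a model that is already pulled; baking `num_ctx` into a model via a Modelfile `PARAMETER` line and `ollama create` is the usual alternative.

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",                     # assumed to be pulled already
        "messages": [{"role": "user", "content": "hello"}],
        "options": {"num_ctx": 8192},          # context window for this call
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```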
https://api.github.com/repos/ollama/ollama/issues/2939
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2939/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2939/comments
https://api.github.com/repos/ollama/ollama/issues/2939/events
https://github.com/ollama/ollama/issues/2939
2,169,809,269
I_kwDOJ0Z1Ps6BVKl1
2,939
Model Request : WhiteRabbitNeo 33B v1.5
{ "login": "ligmaSec", "id": 87036992, "node_id": "MDQ6VXNlcjg3MDM2OTky", "avatar_url": "https://avatars.githubusercontent.com/u/87036992?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ligmaSec", "html_url": "https://github.com/ligmaSec", "followers_url": "https://api.github.com/users/lig...
[]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-03-05T17:46:08
2024-03-07T09:18:51
2024-03-06T23:46:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-33B-v1.5
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2939/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6711
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6711/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6711/comments
https://api.github.com/repos/ollama/ollama/issues/6711/events
https://github.com/ollama/ollama/issues/6711
2,513,781,256
I_kwDOJ0Z1Ps6V1UII
6,711
Can I stop then start a "pull" when the LLM is not completely downloaded?
{ "login": "bulrush15", "id": 7031486, "node_id": "MDQ6VXNlcjcwMzE0ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bulrush15", "html_url": "https://github.com/bulrush15", "followers_url": "https://api.github.com/users/bu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-09T12:07:12
2024-09-09T13:59:49
2024-09-09T13:59:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm on Windows 11 with an Nvidia GeForce RTX 3060 video card. I've already used ollama with a smaller LLM that I pulled. I started a pull of a large LLM, mistral-large. It seems it would take 3 hours on my PC and it's really slowing down my network. So I stopped the download/pull. 1. If I ...
{ "login": "bulrush15", "id": 7031486, "node_id": "MDQ6VXNlcjcwMzE0ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bulrush15", "html_url": "https://github.com/bulrush15", "followers_url": "https://api.github.com/users/bu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6711/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8595
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8595/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8595/comments
https://api.github.com/repos/ollama/ollama/issues/8595/events
https://github.com/ollama/ollama/issues/8595
2,811,649,642
I_kwDOJ0Z1Ps6nll5q
8,595
Train Ollama models using custom data
{ "login": "samrudha01codespace", "id": 144599345, "node_id": "U_kgDOCJ5pMQ", "avatar_url": "https://avatars.githubusercontent.com/u/144599345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samrudha01codespace", "html_url": "https://github.com/samrudha01codespace", "followers_url": "https...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2025-01-26T16:24:59
2025-01-28T21:32:55
2025-01-28T21:32:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Can users train the small Ollama models using their own datasets? (See the import sketch after this record.)
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8595/timeline
null
completed
false
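Ollama itself does not train or fine-tune models; the usual route for the question above is to fine-tune elsewhere, export to GGUF, and import the result. Below is a minimal sketch assuming a local `./finetuned.gguf` produced by an external tool.

```python
# Minimal import sketch: a Modelfile can point FROM at a local GGUF file,
# so an externally fine-tuned model can be served by Ollama.
import subprocess
from pathlib import Path

Path("Modelfile").write_text("FROM ./finetuned.gguf\n")  # assumed local file
subprocess.run(["ollama", "create", "my-finetune", "-f", "Modelfile"], check=True)
```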
https://api.github.com/repos/ollama/ollama/issues/8053
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8053/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8053/comments
https://api.github.com/repos/ollama/ollama/issues/8053/events
https://github.com/ollama/ollama/issues/8053
2,733,979,055
I_kwDOJ0Z1Ps6i9TWv
8,053
Documentation enhancement Idea - AWS Fargate Infra Implementation
{ "login": "mcam10", "id": 42009541, "node_id": "MDQ6VXNlcjQyMDA5NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/42009541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcam10", "html_url": "https://github.com/mcam10", "followers_url": "https://api.github.com/users/mcam10/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-12-11T20:48:40
2024-12-11T20:48:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was able to get Ollama up and running on [Fargate](https://aws.amazon.com/fargate/) using [copilot cli](https://aws.github.io/copilot-cli/) as a sandbox/test environment. Would this be helpful for the community?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8053/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6581
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6581/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6581/comments
https://api.github.com/repos/ollama/ollama/issues/6581/events
https://github.com/ollama/ollama/pull/6581
2,498,968,493
PR_kwDOJ0Z1Ps56Dygx
6,581
Add findutils to base images
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-08-31T17:32:32
2024-08-31T20:22:16
2024-08-31T17:40:05
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6581", "html_url": "https://github.com/ollama/ollama/pull/6581", "diff_url": "https://github.com/ollama/ollama/pull/6581.diff", "patch_url": "https://github.com/ollama/ollama/pull/6581.patch", "merged_at": "2024-08-31T17:40:05" }
Without findutils in the base images, internal files went missing.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6581/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5053
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5053/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5053/comments
https://api.github.com/repos/ollama/ollama/issues/5053/events
https://github.com/ollama/ollama/pull/5053
2,354,356,727
PR_kwDOJ0Z1Ps5yiRUi
5,053
feat: implemented a model export cli command
{ "login": "JerrettDavis", "id": 2610199, "node_id": "MDQ6VXNlcjI2MTAxOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JerrettDavis", "html_url": "https://github.com/JerrettDavis", "followers_url": "https://api.github.com...
[]
open
false
null
[]
null
1
2024-06-15T01:05:02
2024-08-14T15:18:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5053", "html_url": "https://github.com/ollama/ollama/pull/5053", "diff_url": "https://github.com/ollama/ollama/pull/5053.diff", "patch_url": "https://github.com/ollama/ollama/pull/5053.patch", "merged_at": null }
First pass at solving #335. Converted the bash script provided by [supersonictw](https://github.com/supersonictw) to Go. Export a model by running `ollama export <model> <output>`. For example, `ollama export llama3:latest llama-backup`.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5053/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5053/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7502
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7502/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7502/comments
https://api.github.com/repos/ollama/ollama/issues/7502/events
https://github.com/ollama/ollama/issues/7502
2,634,308,518
I_kwDOJ0Z1Ps6dBFum
7,502
After ollama-server has not run the large model for a period of time, running the large model again displays an error:
{ "login": "GreatStep", "id": 3817997, "node_id": "MDQ6VXNlcjM4MTc5OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3817997?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GreatStep", "html_url": "https://github.com/GreatStep", "followers_url": "https://api.github.com/users/Gr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-11-05T03:14:30
2024-11-07T03:26:51
2024-11-05T16:34:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After ollama-server has not run the large model for a period of time, running the large model again displays an error: server cpu not listed in available servers map[]. Every time I restart Ollama, everything returns to normal. I have seen many users encounter similar situations on...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7502/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2257
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2257/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2257/comments
https://api.github.com/repos/ollama/ollama/issues/2257/events
https://github.com/ollama/ollama/issues/2257
2,106,141,111
I_kwDOJ0Z1Ps59iSm3
2,257
[ask] Where can I see the version of llama.cpp used for each version of ollama?
{ "login": "iddar", "id": 199103, "node_id": "MDQ6VXNlcjE5OTEwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/199103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iddar", "html_url": "https://github.com/iddar", "followers_url": "https://api.github.com/users/iddar/followers"...
[]
closed
false
null
[]
null
2
2024-01-29T18:15:20
2024-02-02T00:08:44
2024-02-02T00:08:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think it would be good to include the version of llama.cpp used in the release notes, to make the new features clear.
{ "login": "iddar", "id": 199103, "node_id": "MDQ6VXNlcjE5OTEwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/199103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iddar", "html_url": "https://github.com/iddar", "followers_url": "https://api.github.com/users/iddar/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2257/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8606
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8606/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8606/comments
https://api.github.com/repos/ollama/ollama/issues/8606/events
https://github.com/ollama/ollama/issues/8606
2,812,491,291
I_kwDOJ0Z1Ps6nozYb
8,606
Why doesn't my ollama use GPU
{ "login": "baotianxia", "id": 68735021, "node_id": "MDQ6VXNlcjY4NzM1MDIx", "avatar_url": "https://avatars.githubusercontent.com/u/68735021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baotianxia", "html_url": "https://github.com/baotianxia", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
21
2025-01-27T09:27:24
2025-01-28T02:37:10
2025-01-28T02:37:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I installed the Nvidia driver with `sudo apt install nvidia-driver-xxx`, and ollama shows the model is being run on the GPU, but my CPU usage is 100% and the GPU is at 0% (see the check sketch after this record). ![Image](https://github.com/user-attachments/assets/d9473bd3-953a-4f12-99d2-36420a7645d5) ![Image](https://github.com/user-attachments/asse...
{ "login": "baotianxia", "id": 68735021, "node_id": "MDQ6VXNlcjY4NzM1MDIx", "avatar_url": "https://avatars.githubusercontent.com/u/68735021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baotianxia", "html_url": "https://github.com/baotianxia", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8606/timeline
null
completed
false
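For GPU-usage questions like the record above, `ollama ps` reports how a loaded model is split between CPU and GPU. Below is a minimal sketch of the two checks; it assumes `ollama` and `nvidia-smi` are both on PATH.

```python
import subprocess

# `ollama ps` prints a PROCESSOR column showing the CPU/GPU split for each
# loaded model (e.g. "100% GPU").
subprocess.run(["ollama", "ps"], check=True)
# nvidia-smi confirms whether the driver sees the card and whether any
# ollama process is actually using GPU memory.
subprocess.run(["nvidia-smi"], check=True)
```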
https://api.github.com/repos/ollama/ollama/issues/2171
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2171/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2171/comments
https://api.github.com/repos/ollama/ollama/issues/2171/events
https://github.com/ollama/ollama/issues/2171
2,098,067,157
I_kwDOJ0Z1Ps59DfbV
2,171
Request: Please add `xwincoder` to `ollama.ai`
{ "login": "jukofyork", "id": 69222624, "node_id": "MDQ6VXNlcjY5MjIyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jukofyork", "html_url": "https://github.com/jukofyork", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-01-24T11:31:32
2024-01-24T17:28:13
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
There is already the 3 variants of `xwinlm` (https://ollama.ai/library/xwinlm) but no `xwincoder` (https://huggingface.co/Xwin-LM/XwinCoder-34B) and it seems to be quite a good coding model from what I've seen so far.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2171/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4356
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4356/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4356/comments
https://api.github.com/repos/ollama/ollama/issues/4356/events
https://github.com/ollama/ollama/pull/4356
2,290,855,984
PR_kwDOJ0Z1Ps5vKGoy
4,356
Refactor parsing model configuration
{ "login": "redouan-rhazouani", "id": 81578195, "node_id": "MDQ6VXNlcjgxNTc4MTk1", "avatar_url": "https://avatars.githubusercontent.com/u/81578195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/redouan-rhazouani", "html_url": "https://github.com/redouan-rhazouani", "followers_url": "https...
[]
closed
false
null
[]
null
1
2024-05-11T11:46:08
2024-06-13T17:15:24
2024-06-13T17:13:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4356", "html_url": "https://github.com/ollama/ollama/pull/4356", "diff_url": "https://github.com/ollama/ollama/pull/4356.diff", "patch_url": "https://github.com/ollama/ollama/pull/4356.patch", "merged_at": null }
null
{ "login": "redouan-rhazouani", "id": 81578195, "node_id": "MDQ6VXNlcjgxNTc4MTk1", "avatar_url": "https://avatars.githubusercontent.com/u/81578195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/redouan-rhazouani", "html_url": "https://github.com/redouan-rhazouani", "followers_url": "https...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4356/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6427
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6427/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6427/comments
https://api.github.com/repos/ollama/ollama/issues/6427/events
https://github.com/ollama/ollama/pull/6427
2,474,176,554
PR_kwDOJ0Z1Ps54x7Nu
6,427
CI: handle directories during checksum
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-08-19T20:45:58
2024-08-19T20:48:48
2024-08-19T20:48:45
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6427", "html_url": "https://github.com/ollama/ollama/pull/6427", "diff_url": "https://github.com/ollama/ollama/pull/6427.diff", "patch_url": "https://github.com/ollama/ollama/pull/6427.patch", "merged_at": "2024-08-19T20:48:45" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6427/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2818
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2818/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2818/comments
https://api.github.com/repos/ollama/ollama/issues/2818/events
https://github.com/ollama/ollama/issues/2818
2,159,956,174
I_kwDOJ0Z1Ps6AvlDO
2,818
DNS `i/o timeout` when running `ollama pull`
{ "login": "sohanasarah", "id": 38297094, "node_id": "MDQ6VXNlcjM4Mjk3MDk0", "avatar_url": "https://avatars.githubusercontent.com/u/38297094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohanasarah", "html_url": "https://github.com/sohanasarah", "followers_url": "https://api.github.com/...
[]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
7
2024-02-28T22:11:15
2024-06-18T11:19:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ollama was working perfectly on my machine, and I had llama2 installed. But now, when I try to install a new model, it gives me the following error: ``` pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mixtral/manifests/latest": dial tcp: lookup registry.ollama.ai on 172.25.9...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2818/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6877
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6877/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6877/comments
https://api.github.com/repos/ollama/ollama/issues/6877/events
https://github.com/ollama/ollama/issues/6877
2,536,128,097
I_kwDOJ0Z1Ps6XKj5h
6,877
OpenAI o1-like Chain-of-thought (CoT) inference workflow
{ "login": "kozuch", "id": 1474153, "node_id": "MDQ6VXNlcjE0NzQxNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1474153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kozuch", "html_url": "https://github.com/kozuch", "followers_url": "https://api.github.com/users/kozuch/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
7
2024-09-19T11:56:08
2024-09-23T23:35:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Well, I am surprised that the "main" and "great" new feature of the new OpenAI o1 model is essentially a more sophisticated inference workflow that employs something like a chain-of-thought process. Basically, I understand it as: even a "dumb" model can perform much better when it "thinks more" during inference...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6877/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6877/timeline
null
null
false
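The feature request above is about spending more inference effort on a problem before answering. Below is a toy two-pass sketch of that idea against Ollama's native API, assuming a local server and an already-pulled model; it illustrates the workflow only and is not anything shipped in Ollama.

```python
import requests

URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # assumed to be pulled already

def generate(prompt: str) -> str:
    # Non-streaming call to the native generate endpoint.
    r = requests.post(URL, json={"model": MODEL, "prompt": prompt, "stream": False}, timeout=300)
    return r.json()["response"]

question = ("A bat and a ball cost $1.10 in total; the bat costs $1 more "
            "than the ball. How much is the ball?")
# Pass 1: elicit intermediate reasoning. Pass 2: condense it into an answer.
thoughts = generate(f"Think step by step about this problem. "
                    f"Do not give a final answer yet.\n\n{question}")
answer = generate(f"Problem: {question}\n\nReasoning so far:\n{thoughts}\n\n"
                  f"Now state only the final answer.")
print(answer)
```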
https://api.github.com/repos/ollama/ollama/issues/3559
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3559/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3559/comments
https://api.github.com/repos/ollama/ollama/issues/3559/events
https://github.com/ollama/ollama/pull/3559
2,233,882,790
PR_kwDOJ0Z1Ps5sJ6U-
3,559
ci: use go-version-file
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-04-09T16:50:41
2024-04-09T18:03:19
2024-04-09T18:03:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3559", "html_url": "https://github.com/ollama/ollama/pull/3559", "diff_url": "https://github.com/ollama/ollama/pull/3559.diff", "patch_url": "https://github.com/ollama/ollama/pull/3559.patch", "merged_at": "2024-04-09T18:03:18" }
Use go-version-file to synchronize Go versions between go.mod and CI.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3559/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/772
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/772/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/772/comments
https://api.github.com/repos/ollama/ollama/issues/772/events
https://github.com/ollama/ollama/pull/772
1,940,812,525
PR_kwDOJ0Z1Ps5cra6Y
772
linux: add user to the `ollama` group on install
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
2
2023-10-12T21:22:16
2023-10-23T21:06:32
2023-10-23T21:06:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/772", "html_url": "https://github.com/ollama/ollama/pull/772", "diff_url": "https://github.com/ollama/ollama/pull/772.diff", "patch_url": "https://github.com/ollama/ollama/pull/772.patch", "merged_at": "2023-10-23T21:06:31" }
- Run the ollama system service as the current user. Resolves #613
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/772/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3468
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3468/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3468/comments
https://api.github.com/repos/ollama/ollama/issues/3468/events
https://github.com/ollama/ollama/pull/3468
2,221,684,533
PR_kwDOJ0Z1Ps5rgCYZ
3,468
feat: add NeuralSpeed backend to boost up the inference speed on CPU
{ "login": "ftian1", "id": 16394660, "node_id": "MDQ6VXNlcjE2Mzk0NjYw", "avatar_url": "https://avatars.githubusercontent.com/u/16394660?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftian1", "html_url": "https://github.com/ftian1", "followers_url": "https://api.github.com/users/ftian1/fo...
[]
closed
false
null
[]
null
1
2024-04-03T00:48:34
2024-11-21T09:29:19
2024-11-21T09:29:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3468", "html_url": "https://github.com/ollama/ollama/pull/3468", "diff_url": "https://github.com/ollama/ollama/pull/3468.diff", "patch_url": "https://github.com/ollama/ollama/pull/3468.patch", "merged_at": null }
This PR is used to integrate NeuralSpeed as a new backend in Ollama to provide better performance on x86_64 platforms. [NeuralSpeed](https://github.com/intel/neural-speed) is an LLM acceleration library providing highly efficient GEMM kernels and fusions on AVX/AVX2/AVX512. We can achieve better performance like [h...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3468/reactions", "total_count": 10, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3468/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4678
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4678/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4678/comments
https://api.github.com/repos/ollama/ollama/issues/4678/events
https://github.com/ollama/ollama/issues/4678
2,320,549,377
I_kwDOJ0Z1Ps6KUMYB
4,678
Please support Baichuan series models
{ "login": "Han-Huaqiao", "id": 41456966, "node_id": "MDQ6VXNlcjQxNDU2OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/41456966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Han-Huaqiao", "html_url": "https://github.com/Han-Huaqiao", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-28T09:16:31
2024-05-30T07:13:19
2024-05-30T07:11:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I expected to use ollama to start the Baichuan-7B-base model, but an error occurred: Error: Models based on 'BaichuanForCausalLM' are not yet supported. When will the Baichuan series models be supported?
{ "login": "Han-Huaqiao", "id": 41456966, "node_id": "MDQ6VXNlcjQxNDU2OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/41456966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Han-Huaqiao", "html_url": "https://github.com/Han-Huaqiao", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4678/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/3359
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3359/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3359/comments
https://api.github.com/repos/ollama/ollama/issues/3359/events
https://github.com/ollama/ollama/issues/3359
2,207,847,294
I_kwDOJ0Z1Ps6DmRN-
3,359
Ollama Logo
{ "login": "corani", "id": 480775, "node_id": "MDQ6VXNlcjQ4MDc3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/480775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/corani", "html_url": "https://github.com/corani", "followers_url": "https://api.github.com/users/corani/follow...
[]
closed
false
null
[]
null
2
2024-03-26T10:26:21
2024-04-15T19:43:06
2024-04-15T19:43:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was playing with the `llava` model to create image-generation prompts based on an existing image. Starting with the Ollama logo, I made a few iterations between llava and dall-e and ended up with the following result that I didn't want to keep to myself 😄 ![_cc4036ed-af48-4625-a85c-28b9e0b72249](https://github.co...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3359/reactions", "total_count": 14, "+1": 1, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 8, "rocket": 1, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/3359/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2410
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2410/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2410/comments
https://api.github.com/repos/ollama/ollama/issues/2410/events
https://github.com/ollama/ollama/pull/2410
2,125,053,307
PR_kwDOJ0Z1Ps5mXqrK
2,410
Added Encoding endpoint
{ "login": "suvalaki", "id": 18386930, "node_id": "MDQ6VXNlcjE4Mzg2OTMw", "avatar_url": "https://avatars.githubusercontent.com/u/18386930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suvalaki", "html_url": "https://github.com/suvalaki", "followers_url": "https://api.github.com/users/suv...
[]
open
false
null
[]
null
0
2024-02-08T12:18:42
2024-02-08T12:23:18
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2410", "html_url": "https://github.com/ollama/ollama/pull/2410", "diff_url": "https://github.com/ollama/ollama/pull/2410.diff", "patch_url": "https://github.com/ollama/ollama/pull/2410.patch", "merged_at": null }
It seems useful to expose the encoding function of a model that is called by the generate methods, to enable token counting (without running the model end to end). Some thoughts: - I'm not sure whether replicating the logic that modifies the prompt (the same as the generate function) is correct here or whether we sho...
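To make the proposal concrete, here is a minimal sketch of how a client might call such an endpoint. The `/api/encode` route name and the `{"tokens": [...]}` response shape are assumptions for illustration, not the API this PR actually ships:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// encodeResponse mirrors a hypothetical response shape: a list of token IDs.
type encodeResponse struct {
	Tokens []int `json:"tokens"`
}

func main() {
	// The request body reuses the model/prompt fields from /api/generate.
	body, _ := json.Marshal(map[string]string{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
	})
	resp, err := http.Post("http://localhost:11434/api/encode", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var enc encodeResponse
	if err := json.NewDecoder(resp.Body).Decode(&enc); err != nil {
		panic(err)
	}
	fmt.Printf("prompt is %d tokens\n", len(enc.Tokens))
}
```

The appeal of a dedicated route is that token counting stays cheap: no weights are run, only the tokenizer.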
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2410/reactions", "total_count": 4, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2410/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7381
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7381/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7381/comments
https://api.github.com/repos/ollama/ollama/issues/7381/events
https://github.com/ollama/ollama/issues/7381
2,616,234,948
I_kwDOJ0Z1Ps6b8JPE
7,381
Unrooted Termux install process
{ "login": "b9Joker108", "id": 147242971, "node_id": "U_kgDOCMa_2w", "avatar_url": "https://avatars.githubusercontent.com/u/147242971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/b9Joker108", "html_url": "https://github.com/b9Joker108", "followers_url": "https://api.github.com/users/b9J...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-10-27T00:42:22
2024-10-27T14:37:38
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am endeavouring to set up an `ollama` server on my unrooted Termux host environment; please refer to: https://github.com/ollama/ollama/issues/7349#issuecomment-2439776813 and https://github.com/ollama/ollama/issues/7292#issuecomment-2439781839 @vpnry @dhiltgen So, the process is: #...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7381/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2021
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2021/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2021/comments
https://api.github.com/repos/ollama/ollama/issues/2021/events
https://github.com/ollama/ollama/pull/2021
2,084,753,027
PR_kwDOJ0Z1Ps5kPKJD
2,021
Update README.md - Library - Haystack
{ "login": "sachinsachdeva", "id": 7625278, "node_id": "MDQ6VXNlcjc2MjUyNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/7625278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinsachdeva", "html_url": "https://github.com/sachinsachdeva", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
0
2024-01-16T19:57:08
2024-01-18T21:38:33
2024-01-18T21:38:32
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2021", "html_url": "https://github.com/ollama/ollama/pull/2021", "diff_url": "https://github.com/ollama/ollama/pull/2021.diff", "patch_url": "https://github.com/ollama/ollama/pull/2021.patch", "merged_at": "2024-01-18T21:38:32" }
Updated the README with the web link for the Haystack ollama integration.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2021/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4649
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4649/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4649/comments
https://api.github.com/repos/ollama/ollama/issues/4649/events
https://github.com/ollama/ollama/issues/4649
2,317,761,490
I_kwDOJ0Z1Ps6KJjvS
4,649
Settings File In Addition to Environment Flags
{ "login": "chigkim", "id": 22120994, "node_id": "MDQ6VXNlcjIyMTIwOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chigkim", "html_url": "https://github.com/chigkim", "followers_url": "https://api.github.com/users/chigki...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-26T15:21:09
2024-05-31T19:57:59
2024-05-31T19:57:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Quite a few features now rely on environment variables. Can we have a way to control those features using a settings file like ~/.ollama/settings.yaml on macOS?

```yaml
OLLAMA_HOST: "0.0.0.0"
OLLAMA_NOHISTORY: true
OLLAMA_FLASH_ATTENTION: true
OLLAMA_NUM_PARALLEL: 4
OLLAMA_MAX_LOADED: 2
```

T...
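As an illustration of the idea, a minimal Go sketch that layers such a file under the existing environment variables, assuming `gopkg.in/yaml.v3` and the proposed file location; this is not ollama's actual configuration code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v3"
)

// loadSettings reads ~/.ollama/settings.yaml into a name -> value map,
// returning an empty map when the file does not exist.
func loadSettings() (map[string]any, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	data, err := os.ReadFile(filepath.Join(home, ".ollama", "settings.yaml"))
	if os.IsNotExist(err) {
		return map[string]any{}, nil
	} else if err != nil {
		return nil, err
	}
	settings := map[string]any{}
	if err := yaml.Unmarshal(data, &settings); err != nil {
		return nil, err
	}
	return settings, nil
}

// get prefers the environment variable, so existing env-based deployments
// keep working unchanged, and falls back to the settings file otherwise.
func get(settings map[string]any, key string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	if v, ok := settings[key]; ok {
		return fmt.Sprint(v)
	}
	return ""
}

func main() {
	settings, err := loadSettings()
	if err != nil {
		panic(err)
	}
	fmt.Println("OLLAMA_HOST =", get(settings, "OLLAMA_HOST"))
}
```

Keeping the environment as the higher-priority source is the usual choice, since it lets containers override a baked-in file.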
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4649/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4649/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4752
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4752/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4752/comments
https://api.github.com/repos/ollama/ollama/issues/4752/events
https://github.com/ollama/ollama/issues/4752
2,328,183,732
I_kwDOJ0Z1Ps6KxUO0
4,752
Multi-GPU and batch management
{ "login": "LaetLanf", "id": 131473617, "node_id": "U_kgDOB9Yg0Q", "avatar_url": "https://avatars.githubusercontent.com/u/131473617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LaetLanf", "html_url": "https://github.com/LaetLanf", "followers_url": "https://api.github.com/users/LaetLanf/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-05-31T16:19:29
2024-06-02T09:09:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I'm confident that a feature enabling multi-GPU optimization and batch management would be beneficial. I may have made a mistake, as I couldn't effectively use the `ollama_num_parallel` and `ollama_max_loaded_models` settings to optimize my Linux VM, which has four A100 80GB GPUs, using Llama3:70b-instruc...
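For anyone benchmarking the same settings, a small sketch that fires several concurrent `/api/generate` requests so the effect of `OLLAMA_NUM_PARALLEL` on wall time becomes visible; the model name and host are placeholders:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

func main() {
	const n = 8 // concurrent requests; compare total wall time as OLLAMA_NUM_PARALLEL changes
	body := []byte(`{"model":"llama3:70b-instruct","prompt":"Say hi.","stream":false}`)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
			if err != nil {
				fmt.Println("request", i, "failed:", err)
				return
			}
			io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
			resp.Body.Close()
			fmt.Println("request", i, "done after", time.Since(start))
		}(i)
	}
	wg.Wait()
	fmt.Println("total:", time.Since(start))
}
```

If raising `OLLAMA_NUM_PARALLEL` does not reduce the total time, the requests are likely being serialized by the scheduler rather than batched.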
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4752/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4752/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6415
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6415/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6415/comments
https://api.github.com/repos/ollama/ollama/issues/6415/events
https://github.com/ollama/ollama/issues/6415
2,472,764,600
I_kwDOJ0Z1Ps6TY2S4
6,415
Feature Request: Adding FalconMamba 7B Instruct in `ollama`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.githu...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
6
2024-08-19T08:25:37
2024-10-13T02:38:10
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
FalconMamba is being added to llama.cpp here: https://github.com/ggerganov/llama.cpp/pull/9074 It would be nice to have the first SSM-based LLM on ollama! Instruct weights: https://huggingface.co/tiiuae/falcon-mamba-7b-instruct GGUF weights: https://huggingface.co/collections/tiiuae/falconmamba-7b-66b9a580324dd159...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6415/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6415/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/666
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/666/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/666/comments
https://api.github.com/repos/ollama/ollama/issues/666/events
https://github.com/ollama/ollama/issues/666
1,920,870,201
I_kwDOJ0Z1Ps5yfic5
666
Linux Installation `curl` command fails
{ "login": "Shihab-Shahriar", "id": 10344623, "node_id": "MDQ6VXNlcjEwMzQ0NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/10344623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shihab-Shahriar", "html_url": "https://github.com/Shihab-Shahriar", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg...
closed
false
null
[]
null
15
2023-10-01T17:10:05
2024-08-01T15:30:00
2024-01-16T22:15:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
`curl https://ollama.ai/install.sh | sh` This leads to:

```
>>> Downloading ollama...
Warning: Failed to open the file /tmp/tmp.hE5cI4TvS7/ollama: No such file or
  0%##O#-#                                          Warning: directory
curl: (23) Failure writing output to destination
```

Ubuntu 22.04.3 LTS
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/666/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/7401
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7401/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7401/comments
https://api.github.com/repos/ollama/ollama/issues/7401/events
https://github.com/ollama/ollama/issues/7401
2,618,986,629
I_kwDOJ0Z1Ps6cGpCF
7,401
Configure docker image to start with some models installed
{ "login": "fnacarellidev", "id": 97247063, "node_id": "U_kgDOBcvfVw", "avatar_url": "https://avatars.githubusercontent.com/u/97247063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fnacarellidev", "html_url": "https://github.com/fnacarellidev", "followers_url": "https://api.github.com/us...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-10-28T16:57:35
2024-10-29T15:22:06
2024-10-29T15:22:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think it would be very nice if we had an option to install some models at the build stage of the ollama Docker image. Right now I have two workarounds to emulate this behaviour: 1. Have an init container that talks to the ollama container and installs the models; not very good because I can't cache that, so everytim...
{ "login": "fnacarellidev", "id": 97247063, "node_id": "U_kgDOBcvfVw", "avatar_url": "https://avatars.githubusercontent.com/u/97247063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fnacarellidev", "html_url": "https://github.com/fnacarellidev", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7401/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3006
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3006/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3006/comments
https://api.github.com/repos/ollama/ollama/issues/3006/events
https://github.com/ollama/ollama/pull/3006
2,176,459,602
PR_kwDOJ0Z1Ps5pGtUx
3,006
Replace assets on server start
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2024-03-08T17:20:43
2024-03-08T17:26:03
2024-03-08T17:26:02
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3006", "html_url": "https://github.com/ollama/ollama/pull/3006", "diff_url": "https://github.com/ollama/ollama/pull/3006.diff", "patch_url": "https://github.com/ollama/ollama/pull/3006.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3006/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7197
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7197/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7197/comments
https://api.github.com/repos/ollama/ollama/issues/7197/events
https://github.com/ollama/ollama/issues/7197
2,585,716,651
I_kwDOJ0Z1Ps6aHuer
7,197
llama runner process no longer running: -1
{ "login": "Dhruv-1212", "id": 132161275, "node_id": "U_kgDOB-Ce-w", "avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dhruv-1212", "html_url": "https://github.com/Dhruv-1212", "followers_url": "https://api.github.com/users/Dhr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-10-14T11:20:40
2024-10-15T06:22:44
2024-10-15T06:22:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am trying to run llama3 models but get this error with both the pip and the Linux installation on a server with a Tesla T4 GPU; the same setup works fine on my local machine. ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.3
{ "login": "Dhruv-1212", "id": 132161275, "node_id": "U_kgDOB-Ce-w", "avatar_url": "https://avatars.githubusercontent.com/u/132161275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dhruv-1212", "html_url": "https://github.com/Dhruv-1212", "followers_url": "https://api.github.com/users/Dhr...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7197/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6090
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6090/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6090/comments
https://api.github.com/repos/ollama/ollama/issues/6090/events
https://github.com/ollama/ollama/issues/6090
2,439,044,533
I_kwDOJ0Z1Ps6RYN21
6,090
Ollama seems to not work with long system prompts
{ "login": "austin-starks", "id": 53793927, "node_id": "MDQ6VXNlcjUzNzkzOTI3", "avatar_url": "https://avatars.githubusercontent.com/u/53793927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/austin-starks", "html_url": "https://github.com/austin-starks", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-07-31T03:42:12
2024-07-31T12:43:26
2024-07-31T12:43:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? [I typed up the full problem here (with examples)](https://www.reddit.com/r/ollama/comments/1egddvv/getting_ollama_to_work_with_very_long_system/) TL;DR, if I try to run Ollama with a very long system prompt, it seems to completely ignore it. Happy to provide as much detail as you need to fix...
{ "login": "austin-starks", "id": 53793927, "node_id": "MDQ6VXNlcjUzNzkzOTI3", "avatar_url": "https://avatars.githubusercontent.com/u/53793927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/austin-starks", "html_url": "https://github.com/austin-starks", "followers_url": "https://api.githu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6090/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4133
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4133/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4133/comments
https://api.github.com/repos/ollama/ollama/issues/4133/events
https://github.com/ollama/ollama/issues/4133
2,278,209,989
I_kwDOJ0Z1Ps6HyrnF
4,133
"which/max" command line options to help with sizing.
{ "login": "bigattichouse", "id": 67535, "node_id": "MDQ6VXNlcjY3NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/67535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bigattichouse", "html_url": "https://github.com/bigattichouse", "followers_url": "https://api.github.com/user...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-05-03T18:27:26
2024-05-03T18:28:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Frequently I have to play with the various quants available; I'd like a way to check what I can run instead of downloading and testing each one until I get one that works. This would save us all some bandwidth. `ollama which somemodel` to determine which models I can run `ollama max somemodel` to choose the largest model from a list that ...
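Until such commands exist, the sizing arithmetic can be sketched: estimated weight memory ≈ parameter count × bits per weight / 8, plus headroom for KV cache and runtime overhead. A rough illustration, where the bits-per-weight figures and the 1.2× overhead factor are ballpark assumptions rather than ollama's scheduler logic:

```go
package main

import "fmt"

// approxBitsPerWeight maps common quantization names to rough bits per weight.
// These are ballpark figures, not exact GGUF block sizes.
var approxBitsPerWeight = map[string]float64{
	"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_0": 4.5, "Q4_K_M": 4.8,
	"Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16,
}

// fits reports whether a model of paramsB billion parameters at the given
// quant is likely to fit in vramGiB, using a 1.2x factor for KV cache/overhead.
func fits(paramsB float64, quant string, vramGiB float64) (float64, bool) {
	bpw := approxBitsPerWeight[quant]
	needGiB := paramsB * 1e9 * bpw / 8 / (1 << 30) * 1.2
	return needGiB, needGiB <= vramGiB
}

func main() {
	for _, q := range []string{"Q2_K", "Q4_K_M", "Q6_K", "Q8_0"} {
		need, ok := fits(70, q, 48) // a 70B model against 48 GiB of VRAM
		fmt.Printf("70B %-6s needs ~%.0f GiB, fits in 48 GiB: %v\n", q, need, ok)
	}
}
```

For a 70B model and 48 GiB of VRAM this puts Q4_K_M right at the edge and rules out Q6_K and Q8_0, which matches the trial-and-error experience described above.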
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4133/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2663
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2663/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2663/comments
https://api.github.com/repos/ollama/ollama/issues/2663/events
https://github.com/ollama/ollama/issues/2663
2,148,213,419
I_kwDOJ0Z1Ps6ACyKr
2,663
gemma crashes ollama
{ "login": "donuts-are-good", "id": 96031819, "node_id": "U_kgDOBblUSw", "avatar_url": "https://avatars.githubusercontent.com/u/96031819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donuts-are-good", "html_url": "https://github.com/donuts-are-good", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
7
2024-02-22T05:09:20
2024-02-23T03:24:16
2024-02-22T16:05:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![image](https://github.com/ollama/ollama/assets/96031819/58400f74-53e9-4d90-aea6-be291919a6f3)
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2663/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2663/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6658
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6658/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6658/comments
https://api.github.com/repos/ollama/ollama/issues/6658/events
https://github.com/ollama/ollama/pull/6658
2,508,535,448
PR_kwDOJ0Z1Ps56kOem
6,658
openai: support for structured outputs
{ "login": "iscy", "id": 294710, "node_id": "MDQ6VXNlcjI5NDcxMA==", "avatar_url": "https://avatars.githubusercontent.com/u/294710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iscy", "html_url": "https://github.com/iscy", "followers_url": "https://api.github.com/users/iscy/followers", ...
[]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
5
2024-09-05T19:11:35
2024-11-13T15:26:25
2024-11-13T15:26:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6658", "html_url": "https://github.com/ollama/ollama/pull/6658", "diff_url": "https://github.com/ollama/ollama/pull/6658.diff", "patch_url": "https://github.com/ollama/ollama/pull/6658.patch", "merged_at": null }
This PR enables the [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas) feature available on OpenAI. Using the math reasoning example they have on their website, here's the response of both OpenAI and Ollama using the exact same request: OpenAI (gpt-4o-2024-08-06...
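For reference, a request against the OpenAI-compatible endpoint with a `response_format` JSON schema might look like the sketch below; the model name is a placeholder and the field layout follows OpenAI's published structured-outputs shape:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// A JSON schema constraining the reply to a {steps, final_answer} object,
	// mirroring OpenAI's math-reasoning structured-outputs example.
	body := []byte(`{
	  "model": "llama3.1",
	  "messages": [{"role": "user", "content": "solve 8x + 31 = 2"}],
	  "response_format": {
	    "type": "json_schema",
	    "json_schema": {
	      "name": "math_response",
	      "strict": true,
	      "schema": {
	        "type": "object",
	        "properties": {
	          "steps": {"type": "array", "items": {"type": "string"}},
	          "final_answer": {"type": "string"}
	        },
	        "required": ["steps", "final_answer"],
	        "additionalProperties": false
	      }
	    }
	  }
	}`)
	resp, err := http.Post("http://localhost:11434/v1/chat/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```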
{ "login": "iscy", "id": 294710, "node_id": "MDQ6VXNlcjI5NDcxMA==", "avatar_url": "https://avatars.githubusercontent.com/u/294710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iscy", "html_url": "https://github.com/iscy", "followers_url": "https://api.github.com/users/iscy/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6658/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6658/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2797/comments
https://api.github.com/repos/ollama/ollama/issues/2797/events
https://github.com/ollama/ollama/issues/2797
2,157,882,000
I_kwDOJ0Z1Ps6AnqqQ
2,797
Please consider supporting Intel GPU ARC A770 (16G)
{ "login": "HelloMorningStar", "id": 46133290, "node_id": "MDQ6VXNlcjQ2MTMzMjkw", "avatar_url": "https://avatars.githubusercontent.com/u/46133290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HelloMorningStar", "html_url": "https://github.com/HelloMorningStar", "followers_url": "https://...
[ { "id": 6677491450, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgJu-g", "url": "https://api.github.com/repos/ollama/ollama/labels/intel", "name": "intel", "color": "226E5B", "default": false, "description": "issues relating to Intel GPUs" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-02-28T01:05:03
2024-04-15T22:33:39
2024-04-15T22:33:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Here is a demo of ARC A770 running llama2: https://www.reddit.com/r/LocalLLaMA/comments/1b0c6u8/llama_2_inference_with_pytorch_on_intel_arc/ The Intel Arc A770 is a powerful graphics card that is well-suited for a variety of tasks, including machine learning. It has 16GB of GDDR6 memory, a 256-bit memory interfac...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2797/reactions", "total_count": 22, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 8 }
https://api.github.com/repos/ollama/ollama/issues/2797/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4788
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4788/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4788/comments
https://api.github.com/repos/ollama/ollama/issues/4788/events
https://github.com/ollama/ollama/issues/4788
2,329,745,994
I_kwDOJ0Z1Ps6K3RpK
4,788
Add EventSource format for /api/generate
{ "login": "Vali-98", "id": 137794480, "node_id": "U_kgDOCDaTsA", "avatar_url": "https://avatars.githubusercontent.com/u/137794480?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vali-98", "html_url": "https://github.com/Vali-98", "followers_url": "https://api.github.com/users/Vali-98/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-02T16:27:19
2024-07-03T07:03:11
2024-07-03T07:03:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This was tested specifically with `/api/generate` and `react-native-sse`. https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events Streamed responses sent by ollama don't seem to conform to the SSE specification, and break when used with EventSource-li...
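For context, ollama streams newline-delimited JSON objects rather than SSE frames (`data: ...` followed by a blank line), which is why EventSource clients break. A minimal reader for the format ollama actually emits, with field names taken from the documented `/api/generate` response:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chunk holds the two fields of each streamed object we care about.
type chunk struct {
	Response string `json:"response"`
	Done     bool   `json:"done"`
}

func main() {
	body := []byte(`{"model":"llama3","prompt":"Why is the sky blue?"}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each line is one standalone JSON object -- NDJSON, not "data:"-prefixed
	// SSE events -- so a plain line scanner is all that's needed.
	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var c chunk
		if err := json.Unmarshal(sc.Bytes(), &c); err != nil {
			continue
		}
		fmt.Print(c.Response)
		if c.Done {
			break
		}
	}
	fmt.Println()
}
```

An EventSource client expects each message to arrive with a `data:` prefix and a blank-line terminator, so feeding it these bare JSON lines produces exactly the parse failures described.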
{ "login": "Vali-98", "id": 137794480, "node_id": "U_kgDOCDaTsA", "avatar_url": "https://avatars.githubusercontent.com/u/137794480?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vali-98", "html_url": "https://github.com/Vali-98", "followers_url": "https://api.github.com/users/Vali-98/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4788/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8425
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8425/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8425/comments
https://api.github.com/repos/ollama/ollama/issues/8425/events
https://github.com/ollama/ollama/issues/8425
2,788,066,594
I_kwDOJ0Z1Ps6mLoUi
8,425
Models run only on the CPU, not on the GPU
{ "login": "watashiwastar-yun", "id": 188650638, "node_id": "U_kgDOCz6Ujg", "avatar_url": "https://avatars.githubusercontent.com/u/188650638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/watashiwastar-yun", "html_url": "https://github.com/watashiwastar-yun", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-14T18:58:01
2025-01-15T12:07:19
2025-01-15T12:07:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Jan 15 01:51:05 root ollama[600447]: time=2025-01-15T01:51:05.970+08:00 level=INFO source=server.go:104 msg="system memory" total="503.7 GiB" free="436.7 GiB" free_swap="228.0 KiB" Jan 15 01:51:05 root ollama[600447]: time=2025-01-15T01:51:05.970+08:00 level=WARN source=config.go:215 msg="invali...
{ "login": "watashiwastar-yun", "id": 188650638, "node_id": "U_kgDOCz6Ujg", "avatar_url": "https://avatars.githubusercontent.com/u/188650638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/watashiwastar-yun", "html_url": "https://github.com/watashiwastar-yun", "followers_url": "https://api...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8425/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3331
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3331/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3331/comments
https://api.github.com/repos/ollama/ollama/issues/3331/events
https://github.com/ollama/ollama/pull/3331
2,204,621,276
PR_kwDOJ0Z1Ps5qmPbn
3,331
Integration tests conditionally pull
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-03-24T23:44:05
2024-03-25T19:48:55
2024-03-25T19:48:52
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3331", "html_url": "https://github.com/ollama/ollama/pull/3331", "diff_url": "https://github.com/ollama/ollama/pull/3331.diff", "patch_url": "https://github.com/ollama/ollama/pull/3331.patch", "merged_at": "2024-03-25T19:48:52" }
If images aren't present, pull them. Also fixes the expected responses
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3331/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/713
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/713/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/713/comments
https://api.github.com/repos/ollama/ollama/issues/713/events
https://github.com/ollama/ollama/issues/713
1,929,063,501
I_kwDOJ0Z1Ps5y-yxN
713
Using ollama with llm-ls
{ "login": "noahbald", "id": 36181524, "node_id": "MDQ6VXNlcjM2MTgxNTI0", "avatar_url": "https://avatars.githubusercontent.com/u/36181524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noahbald", "html_url": "https://github.com/noahbald", "followers_url": "https://api.github.com/users/noa...
[]
closed
false
null
[]
null
2
2023-10-05T21:09:26
2023-10-25T21:34:55
2023-10-25T21:34:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've been trying to set up ollama to use codellama with FIM in my editor with nvim.llm and llm-ls. As suggested in the ollama docs, this is what the locally running API may expect as a FIM request. ```sh curl -X POST http://localhost:11434/api/generate -d '{ "model": "codellama:7b-code", "prompt": "<PRE> def co...
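A sketch of the same request from Go, building codellama's fill-in-the-middle prompt (`<PRE> {prefix} <SUF>{suffix} <MID>`) and sending it with `raw` mode so no chat template is applied; the prefix and suffix strings are only examples:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	prefix := "def compute_gcd(a, b):\n"
	suffix := "\n    return a"
	// codellama's infill format: <PRE> {prefix} <SUF>{suffix} <MID>
	prompt := fmt.Sprintf("<PRE> %s <SUF>%s <MID>", prefix, suffix)

	body, _ := json.Marshal(map[string]any{
		"model":  "codellama:7b-code",
		"prompt": prompt,
		"raw":    true, // skip the prompt template; send the string verbatim
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

The `raw` flag matters here: without it, the server wraps the prompt in the model's chat template and the infill markers lose their meaning.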
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/713/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/6327
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6327/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6327/comments
https://api.github.com/repos/ollama/ollama/issues/6327/events
https://github.com/ollama/ollama/pull/6327
2,461,920,588
PR_kwDOJ0Z1Ps54KDhn
6,327
convert safetensor adapters into GGUF
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-08-12T21:24:33
2024-08-23T18:29:58
2024-08-23T18:29:56
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6327", "html_url": "https://github.com/ollama/ollama/pull/6327", "diff_url": "https://github.com/ollama/ollama/pull/6327.diff", "patch_url": "https://github.com/ollama/ollama/pull/6327.patch", "merged_at": "2024-08-23T18:29:56" }
This change converts a Safetensors-based LoRA into GGUF and ties it to a base model. Only llama2/llama3/mistral/gemma2 will work initially. You can create the Modelfile to look like:

```
FROM llama3
ADAPTER /path/to/my/safetensor/adapter/directory
```

I'll add some tests, but wanted to get this out so peopl...
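For completeness, the REST equivalent of `ollama create` with such a Modelfile might look like this sketch; the field names follow the documented `/api/create` request of the time, and the model name and adapter path are placeholders:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Equivalent of `ollama create llama3-with-adapter -f Modelfile` over
	// the REST API; "modelfile" carries the two-line Modelfile shown above.
	body, _ := json.Marshal(map[string]string{
		"name":      "llama3-with-adapter",
		"modelfile": "FROM llama3\nADAPTER /path/to/my/safetensor/adapter/directory",
	})
	resp, err := http.Post("http://localhost:11434/api/create", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// /api/create streams progress as one JSON object per line.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
}
```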
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6327/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6327/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2547
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2547/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2547/comments
https://api.github.com/repos/ollama/ollama/issues/2547/events
https://github.com/ollama/ollama/issues/2547
2,139,186,178
I_kwDOJ0Z1Ps5_gWQC
2,547
Dynamically determine context window at runtime
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-02-16T18:33:48
2024-11-17T22:25:01
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2547/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2547/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/542
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/542/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/542/comments
https://api.github.com/repos/ollama/ollama/issues/542/events
https://github.com/ollama/ollama/issues/542
1,899,551,212
I_kwDOJ0Z1Ps5xONns
542
Creating new models
{ "login": "erlebach", "id": 324708, "node_id": "MDQ6VXNlcjMyNDcwOA==", "avatar_url": "https://avatars.githubusercontent.com/u/324708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erlebach", "html_url": "https://github.com/erlebach", "followers_url": "https://api.github.com/users/erlebac...
[]
closed
false
null
[]
null
2
2023-09-16T19:57:33
2023-09-27T22:09:39
2023-09-26T22:30:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In the docs, we find: ``` ### Customize a model Pull a base model: ``` ollama pull llama2 ``` Create a `Modelfile`: ``` FROM llama2 # set the temperature to 1 [higher is more creative, lower is more coherent] PARAMETER temperature 1 # set the system prompt SYSTEM """ You are Mario from Super ...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/542/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/5282
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5282/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5282/comments
https://api.github.com/repos/ollama/ollama/issues/5282/events
https://github.com/ollama/ollama/pull/5282
2,373,645,578
PR_kwDOJ0Z1Ps5zjGy9
5,282
Docs for `api/embed`
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-06-25T20:56:28
2024-07-22T20:37:10
2024-07-22T20:37:08
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5282", "html_url": "https://github.com/ollama/ollama/pull/5282", "diff_url": "https://github.com/ollama/ollama/pull/5282.diff", "patch_url": "https://github.com/ollama/ollama/pull/5282.patch", "merged_at": "2024-07-22T20:37:08" }
Waiting on #5127
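As a usage illustration ahead of the docs, a minimal call to `/api/embed`, which accepts a single string or a list of strings as `input` and returns one embedding per input; the model name is a placeholder:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// embedResponse holds the embeddings field of the /api/embed response.
type embedResponse struct {
	Embeddings [][]float64 `json:"embeddings"`
}

func main() {
	// "input" may be a single string or, as here, a list of strings.
	body, _ := json.Marshal(map[string]any{
		"model": "all-minilm",
		"input": []string{"Why is the sky blue?", "Why is grass green?"},
	})
	resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var er embedResponse
	if err := json.NewDecoder(resp.Body).Decode(&er); err != nil {
		panic(err)
	}
	for i, e := range er.Embeddings {
		fmt.Printf("embedding %d has %d dimensions\n", i, len(e))
	}
}
```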
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5282/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3336
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3336/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3336/comments
https://api.github.com/repos/ollama/ollama/issues/3336/events
https://github.com/ollama/ollama/issues/3336
2,205,087,465
I_kwDOJ0Z1Ps6Dbvbp
3,336
ollama.ai certificate has expired, not possible to download models
{ "login": "psy-q", "id": 87557, "node_id": "MDQ6VXNlcjg3NTU3", "avatar_url": "https://avatars.githubusercontent.com/u/87557?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psy-q", "html_url": "https://github.com/psy-q", "followers_url": "https://api.github.com/users/psy-q/followers", "f...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
81
2024-03-25T07:32:52
2024-06-28T01:42:40
2024-03-25T20:54:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The ollama.ai certificate has expired today, ollama now can't download models: ``` ollama run mistral pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mistral/manifests/latest": tls: failed to verify certificate: x509: certificate has expired or is n...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3336/reactions", "total_count": 100, "+1": 90, "-1": 0, "laugh": 0, "hooray": 0, "confused": 4, "heart": 0, "rocket": 0, "eyes": 6 }
https://api.github.com/repos/ollama/ollama/issues/3336/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6550
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6550/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6550/comments
https://api.github.com/repos/ollama/ollama/issues/6550/events
https://github.com/ollama/ollama/issues/6550
2,493,590,123
I_kwDOJ0Z1Ps6UoSpr
6,550
Cannot download models behind a proxy with ollama in Docker
{ "login": "lakshmikanthgr", "id": 12883743, "node_id": "MDQ6VXNlcjEyODgzNzQz", "avatar_url": "https://avatars.githubusercontent.com/u/12883743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lakshmikanthgr", "html_url": "https://github.com/lakshmikanthgr", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
10
2024-08-29T06:47:11
2024-09-22T22:59:00
2024-08-29T13:28:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am able to run ollama in a Docker container on my high-end machine, which runs Ubuntu, but I am not able to pull any model. I am getting: ![image](https://github.com/user-attachments/assets/dd271879-a5d2-43d7-b522-6b99718ac54d) ### OS Docker ### GPU AMD ### CPU Intel ### Ollama versi...
{ "login": "lakshmikanthgr", "id": 12883743, "node_id": "MDQ6VXNlcjEyODgzNzQz", "avatar_url": "https://avatars.githubusercontent.com/u/12883743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lakshmikanthgr", "html_url": "https://github.com/lakshmikanthgr", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6550/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3278
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3278/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3278/comments
https://api.github.com/repos/ollama/ollama/issues/3278/events
https://github.com/ollama/ollama/pull/3278
2,199,171,144
PR_kwDOJ0Z1Ps5qTzwj
3,278
Enabling ollama to run on Intel GPUs with SYCL backend
{ "login": "zhewang1-intc", "id": 72838274, "node_id": "MDQ6VXNlcjcyODM4Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/72838274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhewang1-intc", "html_url": "https://github.com/zhewang1-intc", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
12
2024-03-21T05:44:14
2024-12-21T00:48:47
2024-05-28T23:30:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3278", "html_url": "https://github.com/ollama/ollama/pull/3278", "diff_url": "https://github.com/ollama/ollama/pull/3278.diff", "patch_url": "https://github.com/ollama/ollama/pull/3278.patch", "merged_at": "2024-05-28T23:30:50" }
Hi, I am submitting this PR to enable ollama to run on Intel GPUs with SYCL as the backend. This PR was [originally](https://github.com/ollama/ollama/pull/2458) started by @felipeagc, who is currently unable to actively participate due to relocation. The original PR had fallen behind the main branch, making it inconven...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3278/reactions", "total_count": 15, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 15, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3278/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/356
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/356/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/356/comments
https://api.github.com/repos/ollama/ollama/issues/356/events
https://github.com/ollama/ollama/issues/356
1,852,260,461
I_kwDOJ0Z1Ps5uZ0Bt
356
Undefined symbols during go build
{ "login": "drusepth", "id": 538235, "node_id": "MDQ6VXNlcjUzODIzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/538235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drusepth", "html_url": "https://github.com/drusepth", "followers_url": "https://api.github.com/users/drusept...
[]
closed
false
null
[]
null
5
2023-08-15T23:02:19
2023-08-16T01:56:08
2023-08-16T01:56:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Trying to build on a fresh Ubuntu 22 instance: ```console ubuntu@machine:~/ollama$ go version go version go1.21.0 linux/amd64 ubuntu@machine:~/ollama$ go build . go: downloading github.com/chzyer/readline v1.5.1 go: downloading github.com/dustin/go-humanize v1.0.1 go: downloading github.com/olekukonko/tablew...
{ "login": "drusepth", "id": 538235, "node_id": "MDQ6VXNlcjUzODIzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/538235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drusepth", "html_url": "https://github.com/drusepth", "followers_url": "https://api.github.com/users/drusept...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/356/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8384
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8384/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8384/comments
https://api.github.com/repos/ollama/ollama/issues/8384/events
https://github.com/ollama/ollama/issues/8384
2,781,737,542
I_kwDOJ0Z1Ps6lzfJG
8,384
Unable to access ollama model hosted on a Raspberry Pi 5 from another device
{ "login": "Simonko-912", "id": 179495001, "node_id": "U_kgDOCrLgWQ", "avatar_url": "https://avatars.githubusercontent.com/u/179495001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Simonko-912", "html_url": "https://github.com/Simonko-912", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
12
2025-01-11T09:52:30
2025-01-28T21:11:13
2025-01-28T21:11:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I access the AI model from the Raspberry Pi itself it works, but when I try from another device with the correct IP and port I can't connect. (Page Assist error: Unable to connect to Ollama 🦙) I tried changing the firewall settings, but it still didn't work. Raspberry Pi model: cat: /sys/firmware/devicetree/model: No such ...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8384/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/153
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/153/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/153/comments
https://api.github.com/repos/ollama/ollama/issues/153/events
https://github.com/ollama/ollama/issues/153
1,814,970,242
I_kwDOJ0Z1Ps5sLj-C
153
Control model cache location (set ollama directory to something other than ~/.ollama)
{ "login": "weaversam8", "id": 2546219, "node_id": "MDQ6VXNlcjI1NDYyMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2546219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weaversam8", "html_url": "https://github.com/weaversam8", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2023-07-21T00:13:40
2023-10-27T16:50:42
2023-10-27T16:50:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be useful to configure the location where models are cached, so models could be downloaded and stored on external storage.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/153/reactions", "total_count": 21, "+1": 21, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/153/timeline
null
completed
false
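This request was ultimately addressed by the `OLLAMA_MODELS` environment variable; a minimal sketch, with the mount point as an example path:

```bash
# Relocate the model store to external storage before starting the server:
export OLLAMA_MODELS=/mnt/external/ollama-models
ollama serve &

# Pulls now write their blobs and manifests under the new directory:
ollama pull llama3.2
ls /mnt/external/ollama-models
```

When ollama runs as a systemd service, the variable goes in the unit file instead, and the `ollama` user needs write access to the target directory.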
https://api.github.com/repos/ollama/ollama/issues/7256
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7256/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7256/comments
https://api.github.com/repos/ollama/ollama/issues/7256/events
https://github.com/ollama/ollama/issues/7256
2,598,039,347
I_kwDOJ0Z1Ps6a2u8z
7,256
Last character being truncated by stop sequence
{ "login": "someone13574", "id": 81528246, "node_id": "MDQ6VXNlcjgxNTI4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/81528246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/someone13574", "html_url": "https://github.com/someone13574", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-10-18T17:27:40
2024-11-05T21:06:42
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When running inference in raw mode with '\n\n' as a stop sequence, it seems like punctuation is being removed with the stop sequence. I assume this is because of a bug with how partial stop sequences are handled. (A minimal reproduction sketch follows this record.) ### OS Linux ### GPU Other ### CPU AMD ### Ollama version 0.3.11
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7256/timeline
null
null
false
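A minimal reproduction sketch for the truncation report above; the model name is a placeholder and `jq` is assumed to be installed. Printing the JSON-escaped response makes a clipped final character easy to spot:

```bash
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Write two short paragraphs separated by a blank line.",
  "raw": true,
  "stream": false,
  "options": { "stop": ["\n\n"] }
}' | jq .response   # if the bug reproduces, the first paragraph's final period is missing
```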
https://api.github.com/repos/ollama/ollama/issues/2473
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2473/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2473/comments
https://api.github.com/repos/ollama/ollama/issues/2473/events
https://github.com/ollama/ollama/issues/2473
2,132,184,506
I_kwDOJ0Z1Ps5_Fo26
2,473
Packaging Ollama with ROCm support for Arch Linux
{ "login": "xyproto", "id": 52813, "node_id": "MDQ6VXNlcjUyODEz", "avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyproto", "html_url": "https://github.com/xyproto", "followers_url": "https://api.github.com/users/xyproto/follower...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
18
2024-02-13T12:08:49
2024-06-03T16:19:21
2024-06-01T20:28:07
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, Arch Linux maintainer of the `ollama` and `ollama-cuda` packages here. I want to package `ollama-rocm` with AMD/ROCm support, but I get error messages when building the package and wonder whether I am enabling support the right way at build time. So far, I am building with `-tags rocm` and have ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2473/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 7, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2473/timeline
null
completed
false
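For reference, the build flow the tree documented around that release looked roughly like the sketch below. The runner build has been reworked several times since, so treat the environment variables as historical; the paths reflect a typical Arch ROCm install rather than anything authoritative:

```bash
export ROCM_PATH=/opt/rocm                # where Arch's rocm packages land
export AMDGPU_TARGETS="gfx1030;gfx1100"   # optionally restrict compiled GPU targets
go generate ./...                         # built the llama.cpp runners, ROCm included
go build .
```

The issue itself was closed after official ROCm support landed upstream, which is what unblocked an `ollama-rocm` package.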
https://api.github.com/repos/ollama/ollama/issues/8324
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8324/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8324/comments
https://api.github.com/repos/ollama/ollama/issues/8324/events
https://github.com/ollama/ollama/issues/8324
2,771,469,091
I_kwDOJ0Z1Ps6lMUMj
8,324
Add a CUDA+AVX2(VNNI) runner to the Docker image.
{ "login": "x0wllaar", "id": 10964379, "node_id": "MDQ6VXNlcjEwOTY0Mzc5", "avatar_url": "https://avatars.githubusercontent.com/u/10964379?v=4", "gravatar_id": "", "url": "https://api.github.com/users/x0wllaar", "html_url": "https://github.com/x0wllaar", "followers_url": "https://api.github.com/users/x0w...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-06T21:22:27
2025-01-06T21:46:49
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Description**: I would like to ask to add a CUDA+AVX2 (maybe VNNI) model runner to the default Docker image for Ollama. I think this can help with performance in partial offload scenarios. This should be supported at build time (#2281), but for some reason I can't find the runner in the Docker image. I think tha...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8324/timeline
null
null
false
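Two hedged checks for the request above: listing what a published image actually ships, and rebuilding with extra CPU flags. The library path follows recent images and the build-arg name follows the #2281 discussion; both are assumptions to verify against the Dockerfile of the tag being built:

```bash
# What does the published image actually contain?
docker run --rm --entrypoint ls ollama/ollama /usr/lib/ollama

# Rebuild locally with an AVX2 CPU runner alongside CUDA (build-arg name
# is an assumption -- check the Dockerfile before relying on it):
git clone https://github.com/ollama/ollama.git && cd ollama
docker build --build-arg CUSTOM_CPU_FLAGS=avx2 -t ollama:cuda-avx2 .
```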
https://api.github.com/repos/ollama/ollama/issues/7935
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7935/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7935/comments
https://api.github.com/repos/ollama/ollama/issues/7935/events
https://github.com/ollama/ollama/pull/7935
2,719,025,694
PR_kwDOJ0Z1Ps6EG_Yf
7,935
Update the /api/create endpoint to use JSON
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
4
2024-12-05T00:00:23
2025-01-01T02:02:33
2025-01-01T02:02:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7935", "html_url": "https://github.com/ollama/ollama/pull/7935", "diff_url": "https://github.com/ollama/ollama/pull/7935.diff", "patch_url": "https://github.com/ollama/ollama/pull/7935.patch", "merged_at": "2025-01-01T02:02:31" }
This PR changes the way the POST `/api/create` endpoint works by changing the way the various options/parameters get serialized and passed to the server. Currently the create endpoint requires a `Modelfile`, which is a reasonable on-disk abstraction, but falls down for serializing things such as files and passing them ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7935/timeline
null
null
true
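After this change, a create request is plain JSON rather than a serialized Modelfile. A minimal sketch using the field names from the updated `/api/create` documentation:

```bash
curl http://localhost:11434/api/create -d '{
  "model": "mario",
  "from": "llama3.2",
  "system": "You are Mario from Super Mario Bros."
}'

# The new model then shows up like any other:
ollama run mario
```

Files, adapters, and parameters travel as additional JSON fields, which is exactly the serialization problem the Modelfile format handled poorly over HTTP.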
https://api.github.com/repos/ollama/ollama/issues/8622
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8622/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8622/comments
https://api.github.com/repos/ollama/ollama/issues/8622/events
https://github.com/ollama/ollama/issues/8622
2,814,520,394
I_kwDOJ0Z1Ps6nwixK
8,622
Support for Zero-shot Text Classification Models
{ "login": "BrainSlugs83", "id": 5217366, "node_id": "MDQ6VXNlcjUyMTczNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5217366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BrainSlugs83", "html_url": "https://github.com/BrainSlugs83", "followers_url": "https://api.github.com...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-28T03:09:05
2025-01-28T03:09:05
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be helpful to developers if ollama supported zero-shot text classification models, such as [`deberta-v3-large-tasksource-nli`](https://huggingface.co/sileod/deberta-v3-large-tasksource-nli) or other offshoots of BERT, which are fairly small models, that allow you do things like pass in a list of categories and...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8622/timeline
null
null
false
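ollama's engine targets generative models, so encoder-style NLI classifiers are out of scope today, but a generative model can approximate zero-shot classification. A workaround sketch; the model name and label set are examples, and `"format": "json"` constrains the reply to valid JSON:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "format": "json",
  "stream": false,
  "messages": [{
    "role": "user",
    "content": "Classify the text into one of [billing, bug, feature]. Reply as {\"label\": \"...\"}. Text: I was charged twice this month."
  }]
}'
```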
https://api.github.com/repos/ollama/ollama/issues/1394
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1394/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1394/comments
https://api.github.com/repos/ollama/ollama/issues/1394/events
https://github.com/ollama/ollama/issues/1394
2,027,152,209
I_kwDOJ0Z1Ps540-NR
1,394
magicoder doesn't work
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2023-12-05T21:28:17
2023-12-06T08:34:48
2023-12-06T08:34:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
A new model on the library page (magicoder) doesn't work: `ollama run magicoder:6.7b-s-ds-q3_K_L`
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1394/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3937
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3937/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3937/comments
https://api.github.com/repos/ollama/ollama/issues/3937/events
https://github.com/ollama/ollama/issues/3937
2,265,378,662
I_kwDOJ0Z1Ps6HBu9m
3,937
``/load`` with no parameters to clear chat context
{ "login": "renauddetry", "id": 720662, "node_id": "MDQ6VXNlcjcyMDY2Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/720662?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renauddetry", "html_url": "https://github.com/renauddetry", "followers_url": "https://api.github.com/user...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-04-26T09:38:42
2024-05-01T21:44:37
2024-05-01T21:44:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be fantastic to have a command that clears a chat's context. At the moment, getting a fresh context can be done with - ``/bye``, then starting the client again. Few keystrokes, but long wait time for the model to be reloaded. - ``/load <model>``, where ``<model>`` is the name of the most recently loaded...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3937/timeline
null
completed
false
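The request above was addressed by the `/clear` command in ollama's interactive REPL, which resets the session context without reloading the model:

```bash
ollama run llama3.2
# >>> /clear    # drops the accumulated context; the model stays loaded
# >>> /?        # lists the other available slash commands
```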
https://api.github.com/repos/ollama/ollama/issues/8320
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8320/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8320/comments
https://api.github.com/repos/ollama/ollama/issues/8320/events
https://github.com/ollama/ollama/issues/8320
2,770,967,611
I_kwDOJ0Z1Ps6lKZw7
8,320
yi-coder: Suffix not supported
{ "login": "pyscripter", "id": 1311616, "node_id": "MDQ6VXNlcjEzMTE2MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1311616?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pyscripter", "html_url": "https://github.com/pyscripter", "followers_url": "https://api.github.com/users...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-06T16:05:52
2025-01-07T04:55:56
2025-01-06T18:56:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The yi-coder [documentation](https://ollama.com/library/yi-coder) provides the following code completion example: ```shell curl http://localhost:11434/api/generate -d '{ "model": "yi-coder", "prompt": "def compute_gcd(a, b):", "suffix": " return result", "options": { "tem...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8320/timeline
null
completed
false
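`suffix` only takes effect when a model's template defines a fill-in-the-middle block, which yi-coder's evidently did not at the time. The same request shape works against a model whose template supports infill, e.g. codellama's `code` tags:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b-code",
  "prompt": "def compute_gcd(a, b):",
  "suffix": "    return result",
  "stream": false,
  "options": { "temperature": 0 }
}'
```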
https://api.github.com/repos/ollama/ollama/issues/7896
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7896/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7896/comments
https://api.github.com/repos/ollama/ollama/issues/7896/events
https://github.com/ollama/ollama/issues/7896
2,707,731,203
I_kwDOJ0Z1Ps6hZLMD
7,896
Installing bolt.new and qwen2.5-coder:7b locally (error cudaMalloc failed: out of memory)
{ "login": "LieLust", "id": 34171795, "node_id": "MDQ6VXNlcjM0MTcxNzk1", "avatar_url": "https://avatars.githubusercontent.com/u/34171795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LieLust", "html_url": "https://github.com/LieLust", "followers_url": "https://api.github.com/users/LieLus...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-11-30T17:41:35
2025-01-13T01:31:04
2025-01-13T01:31:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ### Title: Issue with installing **bolt.new** and **qwen2.5-coder:7b** locally (error `cudaMalloc failed: out of memory`) #### Description: I am trying to install **bolt.new** and **qwen2.5-coder:7b** locally, but I get the following error: `{"error":"llama runner process has terminated: ...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7896/timeline
null
completed
false
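When `cudaMalloc` fails on a small GPU, the standard levers are offloading fewer layers and shrinking the context window; both are ordinary request options. A sketch, with the numbers as starting points to tune rather than recommendations:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "Write a hello world in Python.",
  "stream": false,
  "options": { "num_gpu": 20, "num_ctx": 2048 }
}'
```

`num_gpu` caps how many layers go to VRAM (the rest run on CPU), and `num_ctx` bounds the KV cache, which is often the hidden VRAM cost.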
https://api.github.com/repos/ollama/ollama/issues/8666
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8666/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8666/comments
https://api.github.com/repos/ollama/ollama/issues/8666/events
https://github.com/ollama/ollama/issues/8666
2,818,549,002
I_kwDOJ0Z1Ps6n_6UK
8,666
Termux build error
{ "login": "NeKosmico", "id": 165345955, "node_id": "U_kgDOCdr6ow", "avatar_url": "https://avatars.githubusercontent.com/u/165345955?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NeKosmico", "html_url": "https://github.com/NeKosmico", "followers_url": "https://api.github.com/users/NeKosm...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2025-01-29T15:30:45
2025-01-29T16:10:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I wanted to run Ollama in the Termux (Android) application, and everything was going well... until the following happened at this step: ```bash ~/ollama $ go build . # github.com/ollama/ollama/discover gpu_info_cudart.c:61:13: warning: comparison of different enumeration types ('cudartReturn_t' (ak...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8666/timeline
null
null
false
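The quoted line is a compiler warning rather than an error; it comes from cgo building the GPU-probe stubs, which are irrelevant on Android where only the CPU path matters (whatever actually failed presumably appears later in the truncated log). A hedged sketch of a CPU-only Termux build; package names come from the Termux repos and the model tag is just a small example:

```bash
pkg install golang git cmake clang
git clone https://github.com/ollama/ollama.git && cd ollama
go build -o ollama .
./ollama serve &
./ollama run llama3.2:1b
```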
https://api.github.com/repos/ollama/ollama/issues/8651
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8651/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8651/comments
https://api.github.com/repos/ollama/ollama/issues/8651/events
https://github.com/ollama/ollama/issues/8651
2,817,464,249
I_kwDOJ0Z1Ps6n7xe5
8,651
Intel ARC 770 memory is not supported
{ "login": "yiteei", "id": 77902908, "node_id": "MDQ6VXNlcjc3OTAyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/77902908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yiteei", "html_url": "https://github.com/yiteei", "followers_url": "https://api.github.com/users/yiteei/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-29T07:42:05
2025-01-29T23:28:53
2025-01-29T23:28:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ![Image](https://github.com/user-attachments/assets/9c4323e9-966c-435e-bd5c-f9f749322d94) Windows 11 24H2 Intel ARC 770 Intel I5-12600K ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version 0.5.7
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8651/timeline
null
completed
false