url
stringlengths
51
54
repository_url
stringclasses
1 value
labels_url
stringlengths
65
68
comments_url
stringlengths
60
63
events_url
stringlengths
58
61
html_url
stringlengths
39
44
id
int64
1.78B
2.82B
node_id
stringlengths
18
19
number
int64
1
8.69k
title
stringlengths
1
382
user
dict
labels
listlengths
0
5
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
2
milestone
null
comments
int64
0
323
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
author_association
stringclasses
4 values
sub_issues_summary
dict
active_lock_reason
null
draft
bool
2 classes
pull_request
dict
body
stringlengths
2
118k
closed_by
dict
reactions
dict
timeline_url
stringlengths
60
63
performed_via_github_app
null
state_reason
stringclasses
4 values
is_pull_request
bool
2 classes
https://api.github.com/repos/ollama/ollama/issues/3186
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3186/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3186/comments
https://api.github.com/repos/ollama/ollama/issues/3186/events
https://github.com/ollama/ollama/issues/3186
2,190,222,208
I_kwDOJ0Z1Ps6CjCOA
3,186
Support alternate symlink path for ARM Mac
{ "login": "vassilmladenov", "id": 5396637, "node_id": "MDQ6VXNlcjUzOTY2Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5396637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vassilmladenov", "html_url": "https://github.com/vassilmladenov", "followers_url": "https://api.gith...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
0
2024-03-16T19:54:29
2024-03-18T09:10:56
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? If you install Ollama with a `brew install --cask ollama` on ARM, it creates a symlink at `/opt/homebrew/bin/ollama`, but the app still wants to run its install script to put a symlink in `/usr/local/bin/ollama` I think because of this line: https://github.com/ollama/ollama/blob/main/...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3186/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3186/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7158
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7158/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7158/comments
https://api.github.com/repos/ollama/ollama/issues/7158/events
https://github.com/ollama/ollama/pull/7158
2,577,269,982
PR_kwDOJ0Z1Ps5-JVDz
7,158
runner.go: Handle truncation of tokens for stop sequences
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-10-10T01:03:55
2024-10-10T03:39:05
2024-10-10T03:39:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7158", "html_url": "https://github.com/ollama/ollama/pull/7158", "diff_url": "https://github.com/ollama/ollama/pull/7158.diff", "patch_url": "https://github.com/ollama/ollama/pull/7158.patch", "merged_at": "2024-10-10T03:39:04" }
When a single token contains both text to be return and a stop sequence, this causes an out of bounds error when we update the cache to match our text. This is because we currently assume that the removing the stop sequence will consume at least one token. This also inverts the logic to deal with positive numbers, r...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7158/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2779
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2779/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2779/comments
https://api.github.com/repos/ollama/ollama/issues/2779/events
https://github.com/ollama/ollama/issues/2779
2,156,552,252
I_kwDOJ0Z1Ps6AimA8
2,779
Feature request: Additional Console Outputs for more efficient logging and debugging
{ "login": "LumiWasTaken", "id": 49376128, "node_id": "MDQ6VXNlcjQ5Mzc2MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LumiWasTaken", "html_url": "https://github.com/LumiWasTaken", "followers_url": "https://api.github.c...
[ { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw", "url": "https://api.github.com/repos/ollama/ollama/labels/memory", "name": "memory", "color": "5017EA", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-27T13:08:48
2024-07-25T10:15:16
2024-07-25T10:15:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Heya, i have the common issue that for example when using LLAVA 34b on a small-ish GPU with CPU offloading it sometimes gets stuck. I can't really trace the issue anywhere, is it the BLAST Batch Processing, is it a OOM error, what is it? ``` key clip.vision.image_grid_pinpoints not found in file key clip.vision.m...
{ "login": "LumiWasTaken", "id": 49376128, "node_id": "MDQ6VXNlcjQ5Mzc2MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LumiWasTaken", "html_url": "https://github.com/LumiWasTaken", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2779/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/500
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/500/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/500/comments
https://api.github.com/repos/ollama/ollama/issues/500/events
https://github.com/ollama/ollama/pull/500
1,888,511,084
PR_kwDOJ0Z1Ps5Z7A9e
500
use cmake toolchain to configure build
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
2
2023-09-09T00:36:39
2023-09-11T16:39:42
2023-09-11T16:39:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/500", "html_url": "https://github.com/ollama/ollama/pull/500", "diff_url": "https://github.com/ollama/ollama/pull/500.diff", "patch_url": "https://github.com/ollama/ollama/pull/500.patch", "merged_at": null }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/500/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5921
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5921/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5921/comments
https://api.github.com/repos/ollama/ollama/issues/5921/events
https://github.com/ollama/ollama/issues/5921
2,428,198,576
I_kwDOJ0Z1Ps6Qu16w
5,921
failed installation script on ubuntu 24
{ "login": "vikyw89", "id": 112059651, "node_id": "U_kgDOBq3lAw", "avatar_url": "https://avatars.githubusercontent.com/u/112059651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikyw89", "html_url": "https://github.com/vikyw89", "followers_url": "https://api.github.com/users/vikyw89/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-07-24T18:15:14
2024-07-26T16:49:01
2024-07-26T16:49:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? when running installation script this error occured: ```$ curl -fsSL https://ollama.com/install.sh | sh >>> Downloading ollama... ######################################################################## 100.0%######################################################################### 100.0% ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5921/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2329
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2329/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2329/comments
https://api.github.com/repos/ollama/ollama/issues/2329/events
https://github.com/ollama/ollama/pull/2329
2,115,134,062
PR_kwDOJ0Z1Ps5l17_y
2,329
docs: add tenere to terminal clients
{ "login": "pythops", "id": 57548585, "node_id": "MDQ6VXNlcjU3NTQ4NTg1", "avatar_url": "https://avatars.githubusercontent.com/u/57548585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pythops", "html_url": "https://github.com/pythops", "followers_url": "https://api.github.com/users/pythop...
[]
closed
false
null
[]
null
2
2024-02-02T15:02:06
2024-02-20T04:13:03
2024-02-20T04:13:03
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2329", "html_url": "https://github.com/ollama/ollama/pull/2329", "diff_url": "https://github.com/ollama/ollama/pull/2329.diff", "patch_url": "https://github.com/ollama/ollama/pull/2329.patch", "merged_at": "2024-02-20T04:13:03" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2329/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2535/comments
https://api.github.com/repos/ollama/ollama/issues/2535/events
https://github.com/ollama/ollama/issues/2535
2,137,859,838
I_kwDOJ0Z1Ps5_bSb-
2,535
how to set up an ollama model storage directory
{ "login": "bangundwir", "id": 17474376, "node_id": "MDQ6VXNlcjE3NDc0Mzc2", "avatar_url": "https://avatars.githubusercontent.com/u/17474376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bangundwir", "html_url": "https://github.com/bangundwir", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
4
2024-02-16T04:40:48
2024-02-18T21:58:49
2024-02-18T06:14:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
make it so that you can move the model storage directory on windows ollama
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2535/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2535/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4345
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4345/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4345/comments
https://api.github.com/repos/ollama/ollama/issues/4345/events
https://github.com/ollama/ollama/issues/4345
2,290,694,794
I_kwDOJ0Z1Ps6IiTqK
4,345
Feature Request: Support asynchronous pull API endpoint
{ "login": "moracca", "id": 7213746, "node_id": "MDQ6VXNlcjcyMTM3NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/7213746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moracca", "html_url": "https://github.com/moracca", "followers_url": "https://api.github.com/users/moracca/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
0
2024-05-11T05:58:40
2024-11-06T17:38:04
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be helpful if we could instruct ollama to download a model without having to wait for the completion, since the model can be quite large in some cases. Ideally subsequent requests to pull the same model would avoid doing anything (maybe return the current status message?). Eventually once it finishes downloa...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4345/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1341
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1341/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1341/comments
https://api.github.com/repos/ollama/ollama/issues/1341/events
https://github.com/ollama/ollama/issues/1341
2,020,342,004
I_kwDOJ0Z1Ps54a_j0
1,341
MultiGPU: not splitting model to multiple GPUs - CUDA out of memory
{ "login": "chymian", "id": 1899961, "node_id": "MDQ6VXNlcjE4OTk5NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1899961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chymian", "html_url": "https://github.com/chymian", "followers_url": "https://api.github.com/users/chymian/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
10
2023-12-01T08:12:03
2024-05-09T22:25:10
2024-05-09T22:25:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
trying to load a model (deepseek-coder) to 2 GPUs fails with OOM-error. __the setup:__ Linux: ubu 22.04 HW: i5-7400 (AVX, AVX2), 32GB GPU: 4 x 3070 8GB ollama: 0.1.12, running in docker nvidia-smi from within the container shows 2 x 3070. Because of the big contect-size, I want to load the model on 2 GPUs, b...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1341/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/1341/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1865
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1865/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1865/comments
https://api.github.com/repos/ollama/ollama/issues/1865/events
https://github.com/ollama/ollama/issues/1865
2,072,299,939
I_kwDOJ0Z1Ps57hMmj
1,865
Add GPU support for CUDA Compute Capability 5.0 and 5.2 cards
{ "login": "Subie1", "id": 133152722, "node_id": "U_kgDOB--_0g", "avatar_url": "https://avatars.githubusercontent.com/u/133152722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Subie1", "html_url": "https://github.com/Subie1", "followers_url": "https://api.github.com/users/Subie1/follower...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
12
2024-01-09T12:39:57
2024-12-10T19:30:15
2024-01-27T18:28:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The `ollama serve` command runs as normally with the detection of my GPU: ``` 2024/01/09 14:37:45 gpu.go:34: Detecting GPU type ama 2024/01/09 14:37:45 gpu.go:53: Nvidia GPU detected ggml_init_cublas: found 1 CUDA devices: Device 0: Quadro M1000M, compute capability 5.0 ``` Lines which lead me to belie...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1865/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8362
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8362/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8362/comments
https://api.github.com/repos/ollama/ollama/issues/8362/events
https://github.com/ollama/ollama/issues/8362
2,777,448,800
I_kwDOJ0Z1Ps6ljIFg
8,362
please add model:QVQ-Preview 72B!
{ "login": "twythebest", "id": 89891289, "node_id": "MDQ6VXNlcjg5ODkxMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twythebest", "html_url": "https://github.com/twythebest", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2025-01-09T10:37:06
2025-01-09T10:37:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
please add model:QVQ-Preview 72B!
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8362/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1987
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1987/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1987/comments
https://api.github.com/repos/ollama/ollama/issues/1987/events
https://github.com/ollama/ollama/pull/1987
2,080,723,968
PR_kwDOJ0Z1Ps5kBdw2
1,987
Let gpu.go and gen_linux.sh also find CUDA on Arch Linux
{ "login": "xyproto", "id": 52813, "node_id": "MDQ6VXNlcjUyODEz", "avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyproto", "html_url": "https://github.com/xyproto", "followers_url": "https://api.github.com/users/xyproto/follower...
[]
closed
false
null
[]
null
0
2024-01-14T13:13:16
2024-01-19T00:01:04
2024-01-18T21:32:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1987", "html_url": "https://github.com/ollama/ollama/pull/1987", "diff_url": "https://github.com/ollama/ollama/pull/1987.diff", "patch_url": "https://github.com/ollama/ollama/pull/1987.patch", "merged_at": "2024-01-18T21:32:10" }
* Let gpu.go and gen_linux.sh find CUDA on Arch Linux. * These changes were needed to let the [ollama-cuda](https://archlinux.org/packages/extra/x86_64/ollama-cuda/) package on Arch Linux find CUDA when building. * Also, use `find` instead of `ls` in `gen_linux.sh`.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1987/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5408
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5408/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5408/comments
https://api.github.com/repos/ollama/ollama/issues/5408/events
https://github.com/ollama/ollama/pull/5408
2,384,163,596
PR_kwDOJ0Z1Ps50Foy8
5,408
cmd: create context
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
2
2024-07-01T15:39:40
2024-11-22T00:53:50
2024-11-22T00:53:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5408", "html_url": "https://github.com/ollama/ollama/pull/5408", "diff_url": "https://github.com/ollama/ollama/pull/5408.diff", "patch_url": "https://github.com/ollama/ollama/pull/5408.patch", "merged_at": null }
restrict create file references to a directory context, default the parent directory of the Modelfile but configurable with `-C/--context ` this allows follow up changes like #4240 without exposing more information than is requested Note: this is a breaking CLI change since arbitrary file paths will no longer be ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5408/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1879
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1879/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1879/comments
https://api.github.com/repos/ollama/ollama/issues/1879/events
https://github.com/ollama/ollama/issues/1879
2,073,277,911
I_kwDOJ0Z1Ps57k7XX
1,879
Jetson Orin NX 16gb not seeing much CUDA usage with Ubuntu 22 and Jetpack 6 even after applying documented LD path work around
{ "login": "carolynhudson", "id": 59717105, "node_id": "MDQ6VXNlcjU5NzE3MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/59717105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carolynhudson", "html_url": "https://github.com/carolynhudson", "followers_url": "https://api.githu...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-01-09T22:32:01
2024-01-11T02:02:03
2024-01-10T23:21:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I recently rebuilt my Orin NX and chose the newest release OS and Jetpack edition as I wanted a clean slate to try ollama in. I saw no difference in the performance before or after following the given workaround. When I close the service instance and intentionally opened a new terminal window to run ollama serve in th...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1879/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6196
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6196/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6196/comments
https://api.github.com/repos/ollama/ollama/issues/6196/events
https://github.com/ollama/ollama/issues/6196
2,450,533,084
I_kwDOJ0Z1Ps6SECrc
6,196
llm decode error: 500 Internal Server Error - detokenize doesn't handle unicode characters from server.cpp properly on windows
{ "login": "iBog", "id": 168304, "node_id": "MDQ6VXNlcjE2ODMwNA==", "avatar_url": "https://avatars.githubusercontent.com/u/168304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iBog", "html_url": "https://github.com/iBog", "followers_url": "https://api.github.com/users/iBog/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-08-06T10:31:14
2024-10-22T19:07:53
2024-10-22T19:07:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Happened few times when same chat log (history) was used after local model was switched. For example chat was started with "llava:7b-v1.6" when switched to "llama3.1:latest" without clear context array (not sure with exact same llm's pair) LOG: ``` time=2024-08-05T14:54:58.212+03:00 level=...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6196/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3413
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3413/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3413/comments
https://api.github.com/repos/ollama/ollama/issues/3413/events
https://github.com/ollama/ollama/issues/3413
2,216,345,527
I_kwDOJ0Z1Ps6EGr-3
3,413
Template cannot work
{ "login": "LiuChaoXD", "id": 39954067, "node_id": "MDQ6VXNlcjM5OTU0MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/39954067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LiuChaoXD", "html_url": "https://github.com/LiuChaoXD", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-03-30T09:05:43
2024-05-16T23:38:41
2024-05-16T23:38:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I create the modelfile follow the document. ``` FROM Path/to/mixtral/gguf PARAMETER temperature 0.9 PARAMETER num_ctx 32000 PARAMETER stop "[INST]" PARAMETER stop "[/INST]" TEMPLATE """ {{ if .First }}<s>{{ if .System }}[INST]{{ .System }}[/INST]{{ end }}</s>{{ end }}[INST] {{ .Pr...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3413/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5724
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5724/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5724/comments
https://api.github.com/repos/ollama/ollama/issues/5724/events
https://github.com/ollama/ollama/issues/5724
2,411,246,192
I_kwDOJ0Z1Ps6PuLJw
5,724
Avoid blocking requests to already loaded models while loading another model
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-07-16T14:07:00
2024-07-16T20:41:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have noticed that when GPU VRAM gets near-full, but ollama has decided to load 2 models into VRAM, incoming requests to one model simply stall until the other model pops out of memory. This is most noticeable with an embedding model plus a larger model that takes up most of my 16 GB of VRAM. W...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5724/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3301
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3301/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3301/comments
https://api.github.com/repos/ollama/ollama/issues/3301/events
https://github.com/ollama/ollama/issues/3301
2,203,340,007
I_kwDOJ0Z1Ps6DVEzn
3,301
Question: GPU not fully utilized when not all layers are offloaded
{ "login": "TomTom101", "id": 872712, "node_id": "MDQ6VXNlcjg3MjcxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/872712?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomTom101", "html_url": "https://github.com/TomTom101", "followers_url": "https://api.github.com/users/TomT...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
13
2024-03-22T21:13:57
2024-06-01T21:27:54
2024-06-01T21:27:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am running Mixtral 8x7B Q4 on a RTX 3090 with 24GB VRAM. 23/33 layers are offloaded to the GPU: ``` llm_load_tensors: offloading 23 repeating layers to GPU llm_load_tensors: offloaded 23/33 layers to GPU llm_load_tensors: CPU buffer size = 25215.87 MiB llm_load_tensors: CUDA0 buffer size = 17999.66 M...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3301/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5467
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5467/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5467/comments
https://api.github.com/repos/ollama/ollama/issues/5467/events
https://github.com/ollama/ollama/pull/5467
2,389,385,467
PR_kwDOJ0Z1Ps50XhQh
5,467
Fix corner cases on tmp cleaner on mac
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-03T20:10:59
2024-07-03T20:39:39
2024-07-03T20:39:36
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5467", "html_url": "https://github.com/ollama/ollama/pull/5467", "diff_url": "https://github.com/ollama/ollama/pull/5467.diff", "patch_url": "https://github.com/ollama/ollama/pull/5467.patch", "merged_at": "2024-07-03T20:39:36" }
When ollama is running a long time, tmp cleaners can remove the runners. This tightens up a few corner cases on arm macs where we failed with "server cpu not listed in available servers map[]"
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5467/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2412
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2412/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2412/comments
https://api.github.com/repos/ollama/ollama/issues/2412/events
https://github.com/ollama/ollama/pull/2412
2,125,558,627
PR_kwDOJ0Z1Ps5mZazd
2,412
Added `/screenshot` command for multimodal model chats
{ "login": "ac-99", "id": 47637771, "node_id": "MDQ6VXNlcjQ3NjM3Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/47637771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ac-99", "html_url": "https://github.com/ac-99", "followers_url": "https://api.github.com/users/ac-99/follow...
[]
closed
false
null
[]
null
2
2024-02-08T16:10:51
2024-05-08T00:20:55
2024-05-08T00:20:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2412", "html_url": "https://github.com/ollama/ollama/pull/2412", "diff_url": "https://github.com/ollama/ollama/pull/2412.diff", "patch_url": "https://github.com/ollama/ollama/pull/2412.patch", "merged_at": null }
Added ability to feed current screen directly to multimodal models with a `/screenshot` command. This enables a more dynamic experience for users who can more quickly and easily get contextual responses from their multimodal assistants. **Example use cases** 1. Research assistant -- allows the multimodal LM t...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2412/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7748
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7748/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7748/comments
https://api.github.com/repos/ollama/ollama/issues/7748/events
https://github.com/ollama/ollama/issues/7748
2,673,803,581
I_kwDOJ0Z1Ps6fXwE9
7,748
ggml.c:4044: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
{ "login": "pavelruzicka", "id": 23432593, "node_id": "MDQ6VXNlcjIzNDMyNTkz", "avatar_url": "https://avatars.githubusercontent.com/u/23432593?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pavelruzicka", "html_url": "https://github.com/pavelruzicka", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-11-19T22:54:26
2025-01-27T12:52:24
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? On certain API requests, the server throws a segmentation fault error and the API responds with a HTTP 500. So far, I have encountered this twice in thousands of requests. Unfortunately I do not have the particular prompts that resulted in this logged but I do not expect this to be directly repr...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7748/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3398
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3398/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3398/comments
https://api.github.com/repos/ollama/ollama/issues/3398/events
https://github.com/ollama/ollama/pull/3398
2,214,291,344
PR_kwDOJ0Z1Ps5rHIGA
3,398
CI automation for tagging latest images
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-03-28T22:49:53
2024-10-29T08:23:41
2024-03-28T23:25:54
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3398", "html_url": "https://github.com/ollama/ollama/pull/3398", "diff_url": "https://github.com/ollama/ollama/pull/3398.diff", "patch_url": "https://github.com/ollama/ollama/pull/3398.patch", "merged_at": "2024-03-28T23:25:54" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3398/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/105
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/105/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/105/comments
https://api.github.com/repos/ollama/ollama/issues/105/events
https://github.com/ollama/ollama/pull/105
1,810,818,984
PR_kwDOJ0Z1Ps5V1Qu1
105
attempt two for skipping files in the file walk
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-18T22:36:18
2023-07-18T22:49:30
2023-07-18T22:37:01
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/105", "html_url": "https://github.com/ollama/ollama/pull/105", "diff_url": "https://github.com/ollama/ollama/pull/105.diff", "patch_url": "https://github.com/ollama/ollama/pull/105.patch", "merged_at": "2023-07-18T22:37:01" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/105/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2673
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2673/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2673/comments
https://api.github.com/repos/ollama/ollama/issues/2673/events
https://github.com/ollama/ollama/issues/2673
2,148,705,158
I_kwDOJ0Z1Ps6AEqOG
2,673
Stop tokens appear in the model output.
{ "login": "olafgeibig", "id": 295644, "node_id": "MDQ6VXNlcjI5NTY0NA==", "avatar_url": "https://avatars.githubusercontent.com/u/295644?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olafgeibig", "html_url": "https://github.com/olafgeibig", "followers_url": "https://api.github.com/users/o...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-02-22T10:16:31
2024-05-17T22:48:56
2024-05-17T22:48:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I created my own Ollama model of https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF Here is my modelfile: ``` FROM ./nous-hermes-2-mistral-7b-dpo.Q5_K_M.gguf PARAMETER num_ctx 8192 TEMPLATE """<|im_start|>system {{ .System }}<|im_end|> <|im_start|>user {{ .Prompt }}<|im_end|> <|im_start|>as...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2673/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/2673/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2253
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2253/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2253/comments
https://api.github.com/repos/ollama/ollama/issues/2253/events
https://github.com/ollama/ollama/issues/2253
2,105,385,698
I_kwDOJ0Z1Ps59faLi
2,253
Invalid file magic dolphin-2.7-mixtral gguf
{ "login": "fschiro", "id": 75554993, "node_id": "MDQ6VXNlcjc1NTU0OTkz", "avatar_url": "https://avatars.githubusercontent.com/u/75554993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fschiro", "html_url": "https://github.com/fschiro", "followers_url": "https://api.github.com/users/fschir...
[]
closed
false
null
[]
null
3
2024-01-29T12:28:50
2024-03-11T18:40:03
2024-03-11T18:40:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I'm having trouble creating dolphin-2.7-mixtral from a GGUF. Is the model supported? ```bash ollama --version ollama version is 0.1.22 cat Modelfile FROM ./dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf ls config.json dolphin-2.7-mixtral-8x7b.Q2_K.gguf dolphin-2.7-mixtral-8x7b.Q3_K_M.gguf dolphin-2.7-mi...
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2253/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1763
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1763/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1763/comments
https://api.github.com/repos/ollama/ollama/issues/1763/events
https://github.com/ollama/ollama/issues/1763
2,063,000,734
I_kwDOJ0Z1Ps569uSe
1,763
Resuming to pull a model is not working via API
{ "login": "DennisKo", "id": 9072277, "node_id": "MDQ6VXNlcjkwNzIyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9072277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DennisKo", "html_url": "https://github.com/DennisKo", "followers_url": "https://api.github.com/users/Denni...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-01-02T21:55:26
2024-01-06T21:19:46
2024-01-06T21:19:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If I start to pull a model via `/api/pull` and then abort the request at let's say 2% and re-request it, it will not resume and start from 0%. If I do it via `ollama pull model` it correctly resumes.... Did some more testing: Start via `/api/pull`, go to 2%, abort -> run `ollama pull model`, no resume... Start vi...
{ "login": "DennisKo", "id": 9072277, "node_id": "MDQ6VXNlcjkwNzIyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9072277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DennisKo", "html_url": "https://github.com/DennisKo", "followers_url": "https://api.github.com/users/Denni...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1763/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1763/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4822
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4822/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4822/comments
https://api.github.com/repos/ollama/ollama/issues/4822/events
https://github.com/ollama/ollama/pull/4822
2,334,528,078
PR_kwDOJ0Z1Ps5xexip
4,822
API PS Documentation
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-06-04T23:10:54
2024-06-05T18:06:54
2024-06-05T18:06:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4822", "html_url": "https://github.com/ollama/ollama/pull/4822", "diff_url": "https://github.com/ollama/ollama/pull/4822.diff", "patch_url": "https://github.com/ollama/ollama/pull/4822.patch", "merged_at": "2024-06-05T18:06:53" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4822/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4822/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1184
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1184/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1184/comments
https://api.github.com/repos/ollama/ollama/issues/1184/events
https://github.com/ollama/ollama/pull/1184
2,000,018,323
PR_kwDOJ0Z1Ps5fzGb0
1,184
adjust download/upload parts
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-11-17T22:20:35
2024-05-09T22:17:51
2023-11-20T19:19:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1184", "html_url": "https://github.com/ollama/ollama/pull/1184", "diff_url": "https://github.com/ollama/ollama/pull/1184.diff", "patch_url": "https://github.com/ollama/ollama/pull/1184.patch", "merged_at": null }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1184/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1893
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1893/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1893/comments
https://api.github.com/repos/ollama/ollama/issues/1893/events
https://github.com/ollama/ollama/issues/1893
2,074,149,210
I_kwDOJ0Z1Ps57oQFa
1,893
response_json['eval_count'] doesn't exists - llms/ollama.py
{ "login": "mongolu", "id": 5344119, "node_id": "MDQ6VXNlcjUzNDQxMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5344119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mongolu", "html_url": "https://github.com/mongolu", "followers_url": "https://api.github.com/users/mongolu/...
[]
closed
false
null
[]
null
3
2024-01-10T11:17:49
2024-04-08T10:11:23
2024-01-10T11:19:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
after some time this error pops out. i think it's related with same situation for `response_json['prompt_eval_count']` Logs: ``` 'created_at': '2024-01-10T08:52:17.111694849Z', 'done': True, 'eval_duration': 516371613757000, 'load_duration': 260310, 'model': 'MixtralOrochi8x7B:latest', 'response': '', ...
{ "login": "mongolu", "id": 5344119, "node_id": "MDQ6VXNlcjUzNDQxMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5344119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mongolu", "html_url": "https://github.com/mongolu", "followers_url": "https://api.github.com/users/mongolu/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1893/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3699
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3699/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3699/comments
https://api.github.com/repos/ollama/ollama/issues/3699/events
https://github.com/ollama/ollama/pull/3699
2,248,263,611
PR_kwDOJ0Z1Ps5s6_QE
3,699
Ollama.md Documentation
{ "login": "jedt", "id": 173964, "node_id": "MDQ6VXNlcjE3Mzk2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/173964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jedt", "html_url": "https://github.com/jedt", "followers_url": "https://api.github.com/users/jedt/followers", ...
[]
closed
false
null
[]
null
1
2024-04-17T13:10:07
2024-04-17T13:14:39
2024-04-17T13:13:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3699", "html_url": "https://github.com/ollama/ollama/pull/3699", "diff_url": "https://github.com/ollama/ollama/pull/3699.diff", "patch_url": "https://github.com/ollama/ollama/pull/3699.patch", "merged_at": null }
A guide on setting up a fine-tuned Unsloth FastLanguageModel from a Google Colab notebook to: 1. HF hub 2. GGUF 3. local Ollama Preview link: https://github.com/ollama/ollama/blob/66f7b5bf9e63e1e98c98e8f487427e19195791e0/docs/ollama.md
{ "login": "jedt", "id": 173964, "node_id": "MDQ6VXNlcjE3Mzk2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/173964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jedt", "html_url": "https://github.com/jedt", "followers_url": "https://api.github.com/users/jedt/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3699/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2735/comments
https://api.github.com/repos/ollama/ollama/issues/2735/events
https://github.com/ollama/ollama/issues/2735
2,152,441,834
I_kwDOJ0Z1Ps6AS6fq
2,735
Build fails on MacOS
{ "login": "jrp2014", "id": 8142876, "node_id": "MDQ6VXNlcjgxNDI4NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8142876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jrp2014", "html_url": "https://github.com/jrp2014", "followers_url": "https://api.github.com/users/jrp2014/...
[]
closed
false
null
[]
null
3
2024-02-24T18:48:45
2024-03-04T16:00:47
2024-02-25T05:06:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Following the instructions in the Developer docs, out of the box I get: ``` (ollama) ➜ AI git clone https://github.com/ollama/ollama.git Cloning into 'ollama'... remote: Enumerating objects: 10778, done. remote: Counting objects: 100% (2489/2489), done. remote: Compressing objects: 100% (633/633), done. remote:...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2735/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/922
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/922/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/922/comments
https://api.github.com/repos/ollama/ollama/issues/922/events
https://github.com/ollama/ollama/pull/922
1,964,487,476
PR_kwDOJ0Z1Ps5d6xe-
922
add bracketed paste mode
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-10-26T22:53:45
2023-10-26T22:57:01
2023-10-26T22:57:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/922", "html_url": "https://github.com/ollama/ollama/pull/922", "diff_url": "https://github.com/ollama/ollama/pull/922.diff", "patch_url": "https://github.com/ollama/ollama/pull/922.patch", "merged_at": "2023-10-26T22:57:00" }
This change allows you to cut/paste into the REPL without have to add the """ around a block of text. I've tested it out with: * Terminal.app * iTerm2 * Warp
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/922/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7471
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7471/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7471/comments
https://api.github.com/repos/ollama/ollama/issues/7471/events
https://github.com/ollama/ollama/issues/7471
2,630,365,630
I_kwDOJ0Z1Ps6cyDG-
7,471
Cannot generate id_ed25519 - read-only file system
{ "login": "duhow", "id": 1145001, "node_id": "MDQ6VXNlcjExNDUwMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1145001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/duhow", "html_url": "https://github.com/duhow", "followers_url": "https://api.github.com/users/duhow/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-11-02T10:12:14
2024-11-17T14:16:02
2024-11-17T14:16:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Running the service `systemctl start ollama` cannot run due to **immutable system** , using https://github.com/ublue-os/bazzite . User `ollama` with default `$HOME` in `/usr/share/ollama` cannot write there. Performed normal setup with `sudo` rather than user-based. ```sh curl -fsSL https...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7471/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7667
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7667/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7667/comments
https://api.github.com/repos/ollama/ollama/issues/7667/events
https://github.com/ollama/ollama/pull/7667
2,658,819,958
PR_kwDOJ0Z1Ps6B6-pg
7,667
Support Multiple LoRa Adapters, Closes #7627
{ "login": "ItzCrazyKns", "id": 95534749, "node_id": "U_kgDOBbG-nQ", "avatar_url": "https://avatars.githubusercontent.com/u/95534749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ItzCrazyKns", "html_url": "https://github.com/ItzCrazyKns", "followers_url": "https://api.github.com/users/It...
[]
closed
false
null
[]
null
2
2024-11-14T13:23:40
2024-11-27T19:00:41
2024-11-27T19:00:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7667", "html_url": "https://github.com/ollama/ollama/pull/7667", "diff_url": "https://github.com/ollama/ollama/pull/7667.diff", "patch_url": "https://github.com/ollama/ollama/pull/7667.patch", "merged_at": "2024-11-27T19:00:04" }
Hi, so I've updated the Llama server by allowing it to handle multiple LoRa adapters. Previously, the server supported only one LoRa adapter, limiting users who needed to apply multiple adapters for advanced fine-tuning. Changes Made: - Command-Line Parsing: - Updated to accept multiple `--lora` flags. - In...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7667/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5680
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5680/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5680/comments
https://api.github.com/repos/ollama/ollama/issues/5680/events
https://github.com/ollama/ollama/issues/5680
2,407,155,838
I_kwDOJ0Z1Ps6Pekh-
5,680
Extremely slow on Mac M1 chip
{ "login": "lulunac27a", "id": 100660343, "node_id": "U_kgDOBf_0dw", "avatar_url": "https://avatars.githubusercontent.com/u/100660343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lulunac27a", "html_url": "https://github.com/lulunac27a", "followers_url": "https://api.github.com/users/lul...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
7
2024-07-13T20:33:58
2024-09-26T13:43:22
2024-07-23T20:55:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried chatting using Llama from Meta AI, when the answer is generating, my computer is so slow and sometimes freezes (like my mouse not moving when I move the trackpad). It takes few minutes to completely generate an answer from a question. I use Apple M1 chip with 8GB of RAM memory. ### OS ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5680/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2393
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2393/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2393/comments
https://api.github.com/repos/ollama/ollama/issues/2393/events
https://github.com/ollama/ollama/issues/2393
2,123,591,544
I_kwDOJ0Z1Ps5-k294
2,393
Inquiry on Optimal CPU and GPU Configurations for LLaMA 2(70B)
{ "login": "gautam-fairpe", "id": 127822235, "node_id": "U_kgDOB55pmw", "avatar_url": "https://avatars.githubusercontent.com/u/127822235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gautam-fairpe", "html_url": "https://github.com/gautam-fairpe", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-07T18:04:12
2024-05-07T00:10:37
2024-05-07T00:10:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am currently exploring the capabilities of LLaMA 2 for various NLP tasks and am in the process of setting up the necessary hardware environment to ensure optimal performance. Given the complexity and resource-intensive nature of LLaMA 2(70B), I am seeking advice on the most suitable CPU and GPU configurations tha...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2393/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7957
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7957/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7957/comments
https://api.github.com/repos/ollama/ollama/issues/7957/events
https://github.com/ollama/ollama/pull/7957
2,721,423,176
PR_kwDOJ0Z1Ps6EPPqm
7,957
merge llama/ggml into ml/backend/ggml
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-12-05T21:09:11
2025-01-10T19:30:25
2025-01-10T19:30:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7957", "html_url": "https://github.com/ollama/ollama/pull/7957", "diff_url": "https://github.com/ollama/ollama/pull/7957.diff", "patch_url": "https://github.com/ollama/ollama/pull/7957.patch", "merged_at": "2025-01-10T19:30:23" }
Branched from #7954 and #7875
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7957/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6945
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6945/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6945/comments
https://api.github.com/repos/ollama/ollama/issues/6945/events
https://github.com/ollama/ollama/pull/6945
2,546,654,764
PR_kwDOJ0Z1Ps58lhm8
6,945
Update README.md - Library - Haverscript
{ "login": "andygill", "id": 20696, "node_id": "MDQ6VXNlcjIwNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/20696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andygill", "html_url": "https://github.com/andygill", "followers_url": "https://api.github.com/users/andygill/foll...
[]
closed
false
null
[]
null
0
2024-09-25T00:46:34
2024-11-21T08:11:40
2024-11-21T08:11:39
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6945", "html_url": "https://github.com/ollama/ollama/pull/6945", "diff_url": "https://github.com/ollama/ollama/pull/6945.diff", "patch_url": "https://github.com/ollama/ollama/pull/6945.patch", "merged_at": "2024-11-21T08:11:39" }
This PR adds a link to Haverscript. Haverscript uses classical functional programming techniques to provide a composable interface for interacting with ollama-hosted LLMs.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6945/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5005
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5005/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5005/comments
https://api.github.com/repos/ollama/ollama/issues/5005/events
https://github.com/ollama/ollama/issues/5005
2,349,538,106
I_kwDOJ0Z1Ps6MCxs6
5,005
ollama creat -f Modelfile doesn't process utf-8 encoding correctly
{ "login": "MGdesigner", "id": 4480740, "node_id": "MDQ6VXNlcjQ0ODA3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4480740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MGdesigner", "html_url": "https://github.com/MGdesigner", "followers_url": "https://api.github.com/users...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
8
2024-06-12T19:31:22
2024-06-14T07:20:13
2024-06-14T07:20:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Today I upgrade Ollama version to 0.1.43 from official site. After creating new model, I found my system prompt(written with CJK kanjis ) in modelfile didn't work. Then I check it out by > ollama show mymodel:latest --modelfile Then I found that modelfile of the model is not enc...
{ "login": "MGdesigner", "id": 4480740, "node_id": "MDQ6VXNlcjQ0ODA3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4480740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MGdesigner", "html_url": "https://github.com/MGdesigner", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5005/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6310
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6310/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6310/comments
https://api.github.com/repos/ollama/ollama/issues/6310/events
https://github.com/ollama/ollama/issues/6310
2,459,597,443
I_kwDOJ0Z1Ps6SmnqD
6,310
llama3.1 8b template seems to be different from that in huggingface
{ "login": "fzyzcjy", "id": 5236035, "node_id": "MDQ6VXNlcjUyMzYwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fzyzcjy", "html_url": "https://github.com/fzyzcjy", "followers_url": "https://api.github.com/users/fzyzcjy/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
4
2024-08-11T13:35:21
2024-12-25T22:27:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi thanks for the tool! When reading https://ollama.com/library/llama3.1:8b-instruct-q4_K_M/blobs/11ce4ee3e170, it seems different from https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/blob/main/tokenizer_config.json#L2053. For example, it does not mention `Cutting Knowledge Date: De...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6310/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
https://api.github.com/repos/ollama/ollama/issues/6310/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/3002
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3002/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3002/comments
https://api.github.com/repos/ollama/ollama/issues/3002/events
https://github.com/ollama/ollama/issues/3002
2,176,088,229
I_kwDOJ0Z1Ps6BtHil
3,002
Disable Chat History/Logging Option
{ "login": "trymeouteh", "id": 31172274, "node_id": "MDQ6VXNlcjMxMTcyMjc0", "avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trymeouteh", "html_url": "https://github.com/trymeouteh", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-03-08T13:53:27
2024-05-18T18:51:58
2024-05-18T18:51:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add a setting to disable chat history/logging option and consider to have this disabled by default. This can increase privacy from preventing others to see what they asked the AI in the past. Would especially be useful feature for Users and users management https://github.com/ollama/ollama/issues/2863
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3002/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3002/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7042
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7042/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7042/comments
https://api.github.com/repos/ollama/ollama/issues/7042/events
https://github.com/ollama/ollama/pull/7042
2,556,047,910
PR_kwDOJ0Z1Ps59Fdbn
7,042
Updated few typos in this code file build_remote.py
{ "login": "vignesh1507", "id": 143084478, "node_id": "U_kgDOCIdLvg", "avatar_url": "https://avatars.githubusercontent.com/u/143084478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vignesh1507", "html_url": "https://github.com/vignesh1507", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-09-30T09:14:11
2024-11-21T19:25:40
2024-11-21T19:25:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7042", "html_url": "https://github.com/ollama/ollama/pull/7042", "diff_url": "https://github.com/ollama/ollama/pull/7042.diff", "patch_url": "https://github.com/ollama/ollama/pull/7042.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7042/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7042/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5872
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5872/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5872/comments
https://api.github.com/repos/ollama/ollama/issues/5872/events
https://github.com/ollama/ollama/pull/5872
2,424,996,900
PR_kwDOJ0Z1Ps52NGsP
5,872
[Ascend ] add ascend npu support
{ "login": "zhongTao99", "id": 56594937, "node_id": "MDQ6VXNlcjU2NTk0OTM3", "avatar_url": "https://avatars.githubusercontent.com/u/56594937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongTao99", "html_url": "https://github.com/zhongTao99", "followers_url": "https://api.github.com/use...
[]
open
false
null
[]
null
47
2024-07-23T11:44:09
2025-01-26T09:28:23
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5872", "html_url": "https://github.com/ollama/ollama/pull/5872", "diff_url": "https://github.com/ollama/ollama/pull/5872.diff", "patch_url": "https://github.com/ollama/ollama/pull/5872.patch", "merged_at": null }
It's a draft for ascend npu support, It can get gpu info for npu, and need to be optimization fix: https://github.com/ollama/ollama/issues/5315 The **pre-builded ollama** that support Huawei Atlas A2 series as the backend can be obtained from the following: **docker image:** `docker pull leopony/ollama:lates...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5872/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5872/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1268
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1268/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1268/comments
https://api.github.com/repos/ollama/ollama/issues/1268/events
https://github.com/ollama/ollama/pull/1268
2,010,122,948
PR_kwDOJ0Z1Ps5gVHQf
1,268
complete gguf upgrade
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
2
2023-11-24T18:59:51
2023-12-15T19:39:09
2023-12-15T19:39:08
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1268", "html_url": "https://github.com/ollama/ollama/pull/1268", "diff_url": "https://github.com/ollama/ollama/pull/1268.diff", "patch_url": "https://github.com/ollama/ollama/pull/1268.patch", "merged_at": null }
- remove ggml runner - automatically pull gguf models when ggml detected - tell users to update to gguf in the case automatic pull fails On running a ggml model, a gguf model will be automatically pulled before running: ``` ollama run orca-mini This model is no longer compatible with Ollama. Pulling a new versi...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1268/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1278
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1278/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1278/comments
https://api.github.com/repos/ollama/ollama/issues/1278/events
https://github.com/ollama/ollama/issues/1278
2,011,124,354
I_kwDOJ0Z1Ps5331KC
1,278
Install clobbers /etc/systemd/system/ollama.service file destroying any custom configurations like specifying IP or PORT being served or preventing cors errors
{ "login": "Dougie777", "id": 77511128, "node_id": "MDQ6VXNlcjc3NTExMTI4", "avatar_url": "https://avatars.githubusercontent.com/u/77511128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dougie777", "html_url": "https://github.com/Dougie777", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
4
2023-11-26T17:19:20
2024-01-20T00:10:10
2024-01-20T00:10:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Upgrading to the latest version clobbers my /etc/systemd/system/ollama.service file. If the file exists it should not be overwritten. Or the distro should only include a sample file like so /etc/systemd/system/ollama.service.sample To Reproduce 1 Install ollama as a service using docs. 2 Customize /etc/systemd/s...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1278/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5524
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5524/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5524/comments
https://api.github.com/repos/ollama/ollama/issues/5524/events
https://github.com/ollama/ollama/pull/5524
2,393,830,627
PR_kwDOJ0Z1Ps50mhRC
5,524
allow converting adapters from npz
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
1
2024-07-07T01:30:58
2024-08-12T21:34:38
2024-08-12T21:34:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5524", "html_url": "https://github.com/ollama/ollama/pull/5524", "diff_url": "https://github.com/ollama/ollama/pull/5524.diff", "patch_url": "https://github.com/ollama/ollama/pull/5524.patch", "merged_at": null }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5524/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1574
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1574/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1574/comments
https://api.github.com/repos/ollama/ollama/issues/1574/events
https://github.com/ollama/ollama/issues/1574
2,045,344,153
I_kwDOJ0Z1Ps556XmZ
1,574
Sending several requests to the server in quick succession appears to cause some responses to fail
{ "login": "charstorm", "id": 126527238, "node_id": "U_kgDOB4qnBg", "avatar_url": "https://avatars.githubusercontent.com/u/126527238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/charstorm", "html_url": "https://github.com/charstorm", "followers_url": "https://api.github.com/users/charst...
[]
closed
false
null
[]
null
3
2023-12-17T19:25:20
2023-12-17T20:15:23
2023-12-17T20:15:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, First, I want to thank everyone working on this project. I appreciate your efforts. I was testing ollama server and I noticed that it sometimes gave empty responses. I found out that it happens when a request is made right after the previous one. Adding a sleep seems to solve the issue. Here is some code to d...
{ "login": "charstorm", "id": 126527238, "node_id": "U_kgDOB4qnBg", "avatar_url": "https://avatars.githubusercontent.com/u/126527238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/charstorm", "html_url": "https://github.com/charstorm", "followers_url": "https://api.github.com/users/charst...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1574/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5618
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5618/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5618/comments
https://api.github.com/repos/ollama/ollama/issues/5618/events
https://github.com/ollama/ollama/pull/5618
2,401,860,057
PR_kwDOJ0Z1Ps51BqXR
5,618
OpenAI: add suffix to docs
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-07-10T22:41:08
2024-07-16T23:53:07
2024-07-16T23:53:07
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5618", "html_url": "https://github.com/ollama/ollama/pull/5618", "diff_url": "https://github.com/ollama/ollama/pull/5618.diff", "patch_url": "https://github.com/ollama/ollama/pull/5618.patch", "merged_at": null }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5618/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1514
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1514/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1514/comments
https://api.github.com/repos/ollama/ollama/issues/1514/events
https://github.com/ollama/ollama/issues/1514
2,040,816,473
I_kwDOJ0Z1Ps55pGNZ
1,514
The code below appears to ignore CUDA_VISIBLE_DEVICES in its calculation, i.e. any GPU you won't use, will still be counted as VRAM.
{ "login": "phalexo", "id": 4603365, "node_id": "MDQ6VXNlcjQ2MDMzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phalexo", "html_url": "https://github.com/phalexo", "followers_url": "https://api.github.com/users/phalexo/...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2023-12-14T03:00:05
2024-04-23T15:31:40
2024-04-23T15:31:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
```go func CheckVRAM() (int64, error) { cmd := exec.Command("nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits") var stdout bytes.Buffer cmd.Stdout = &stdout err := cmd.Run() if err != nil { return 0, errNvidiaSMI } ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1514/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/1514/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6405
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6405/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6405/comments
https://api.github.com/repos/ollama/ollama/issues/6405/events
https://github.com/ollama/ollama/issues/6405
2,472,007,294
I_kwDOJ0Z1Ps6TV9Z-
6,405
Implement layer-by-layer paging from CPU RAM into GPU for large models.
{ "login": "Speedway1", "id": 100301611, "node_id": "U_kgDOBfp7Kw", "avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Speedway1", "html_url": "https://github.com/Speedway1", "followers_url": "https://api.github.com/users/Speedw...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
11
2024-08-18T14:49:10
2024-08-18T23:22:52
2024-08-18T19:54:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
While the GPU makers want us to believe that the main crunch point is not enough GPU power, the real issue with self-hosted LLMs is lack of memory. Especially when we're inferencing at large context windows (which is where the magic starts to happen). At the moment Ollama loads all the model's layers and does a very...
{ "login": "Speedway1", "id": 100301611, "node_id": "U_kgDOBfp7Kw", "avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Speedway1", "html_url": "https://github.com/Speedway1", "followers_url": "https://api.github.com/users/Speedw...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6405/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2013
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2013/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2013/comments
https://api.github.com/repos/ollama/ollama/issues/2013/events
https://github.com/ollama/ollama/pull/2013
2,083,245,093
PR_kwDOJ0Z1Ps5kJ6Tg
2,013
Add support for min_p sampling (original by @Robitx)
{ "login": "nathanpbell", "id": 3697, "node_id": "MDQ6VXNlcjM2OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathanpbell", "html_url": "https://github.com/nathanpbell", "followers_url": "https://api.github.com/users/nathan...
[]
closed
false
null
[]
null
4
2024-01-16T08:04:30
2024-05-22T10:48:24
2024-01-16T09:00:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2013", "html_url": "https://github.com/ollama/ollama/pull/2013", "diff_url": "https://github.com/ollama/ollama/pull/2013.diff", "patch_url": "https://github.com/ollama/ollama/pull/2013.patch", "merged_at": null }
This is a updated copy of @Robitx's pull request to add support for min_p sampling that was implemented in llama.cpp. It differs from @Robitx's pull request in only in that it resolves the merge conflict that occurred after he submitted his original pull request. Feel free to ignore this and pull in his instead (if ...
{ "login": "nathanpbell", "id": 3697, "node_id": "MDQ6VXNlcjM2OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathanpbell", "html_url": "https://github.com/nathanpbell", "followers_url": "https://api.github.com/users/nathan...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2013/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/41
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/41/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/41/comments
https://api.github.com/repos/ollama/ollama/issues/41/events
https://github.com/ollama/ollama/pull/41
1,791,993,700
PR_kwDOJ0Z1Ps5U1YcJ
41
tcp socket
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-07-06T17:56:25
2023-07-06T18:15:50
2023-07-06T18:15:32
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/41", "html_url": "https://github.com/ollama/ollama/pull/41", "diff_url": "https://github.com/ollama/ollama/pull/41.diff", "patch_url": "https://github.com/ollama/ollama/pull/41.patch", "merged_at": "2023-07-06T18:15:32" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/41/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/41/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1562
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1562/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1562/comments
https://api.github.com/repos/ollama/ollama/issues/1562/events
https://github.com/ollama/ollama/issues/1562
2,044,697,361
I_kwDOJ0Z1Ps5535sR
1,562
Inquiries Regarding Ollama Tool Usage
{ "login": "ewijaya", "id": 9668738, "node_id": "MDQ6VXNlcjk2Njg3Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/9668738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ewijaya", "html_url": "https://github.com/ewijaya", "followers_url": "https://api.github.com/users/ewijaya/...
[]
closed
false
null
[]
null
1
2023-12-16T10:24:55
2023-12-19T17:53:05
2023-12-19T17:53:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, Thanks for Ollama tool, it's been a fantastic resource! I have a couple of inquiries I hope you could assist me with: 1. I recently executed the following command: ``` ollama create dolphin.mistral -f Modelfile.dolphin.mistral ``` The contents of my `Modelfile.dolphin.mistral` are as ...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1562/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/917
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/917/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/917/comments
https://api.github.com/repos/ollama/ollama/issues/917/events
https://github.com/ollama/ollama/pull/917
1,964,079,452
PR_kwDOJ0Z1Ps5d5Xnq
917
fix docker build annotations
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-10-26T18:04:41
2023-10-26T19:00:34
2023-10-26T19:00:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/917", "html_url": "https://github.com/ollama/ollama/pull/917", "diff_url": "https://github.com/ollama/ollama/pull/917.diff", "patch_url": "https://github.com/ollama/ollama/pull/917.patch", "merged_at": "2023-10-26T19:00:34" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/917/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7070
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7070/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7070/comments
https://api.github.com/repos/ollama/ollama/issues/7070/events
https://github.com/ollama/ollama/issues/7070
2,560,290,013
I_kwDOJ0Z1Ps6Ymuzd
7,070
Warning: Could not connect to a running Ollama instance (Mac OS - Apple Silicon M2 Pro)
{ "login": "sohamnandi77", "id": 56152437, "node_id": "MDQ6VXNlcjU2MTUyNDM3", "avatar_url": "https://avatars.githubusercontent.com/u/56152437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohamnandi77", "html_url": "https://github.com/sohamnandi77", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
4
2024-10-01T22:05:41
2025-01-30T05:06:42
2024-11-05T22:53:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After successfully installing Ollama on my machine, I am encountering the following warning messages when trying to run the software: **Copy code** Warning: could not connect to a running Ollama instance Warning: client version is 0.3.12 **Steps to Reproduce:** Install Ollama on macOS...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7070/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3565
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3565/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3565/comments
https://api.github.com/repos/ollama/ollama/issues/3565/events
https://github.com/ollama/ollama/pull/3565
2,234,466,351
PR_kwDOJ0Z1Ps5sL8Pu
3,565
fix: rope
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2024-04-09T23:18:35
2024-04-24T16:14:45
2024-04-09T23:36:55
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3565", "html_url": "https://github.com/ollama/ollama/pull/3565", "diff_url": "https://github.com/ollama/ollama/pull/3565.diff", "patch_url": "https://github.com/ollama/ollama/pull/3565.patch", "merged_at": "2024-04-09T23:36:55" }
Some models set RopeFrequencyBase and RopeFrequencyScale. Removing these fields makes those models unusable
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3565/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4972
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4972/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4972/comments
https://api.github.com/repos/ollama/ollama/issues/4972/events
https://github.com/ollama/ollama/pull/4972
2,345,709,903
PR_kwDOJ0Z1Ps5yEs9O
4,972
fix: "Skip searching for network devices"
{ "login": "jayson-cloude", "id": 62731682, "node_id": "MDQ6VXNlcjYyNzMxNjgy", "avatar_url": "https://avatars.githubusercontent.com/u/62731682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jayson-cloude", "html_url": "https://github.com/jayson-cloude", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
0
2024-06-11T08:12:25
2024-06-15T00:04:41
2024-06-15T00:04:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4972", "html_url": "https://github.com/ollama/ollama/pull/4972", "diff_url": "https://github.com/ollama/ollama/pull/4972.diff", "patch_url": "https://github.com/ollama/ollama/pull/4972.patch", "merged_at": "2024-06-15T00:04:41" }
On an Ubuntu 24.04 computer with vmware installed, the sudo lshw command will get stuck. "Network interfaces" is always displayed
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4972/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5847
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5847/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5847/comments
https://api.github.com/repos/ollama/ollama/issues/5847/events
https://github.com/ollama/ollama/pull/5847
2,422,400,869
PR_kwDOJ0Z1Ps52ENrz
5,847
Reduce docker image size
{ "login": "yeahdongcn", "id": 2831050, "node_id": "MDQ6VXNlcjI4MzEwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeahdongcn", "html_url": "https://github.com/yeahdongcn", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2024-07-22T09:31:17
2024-09-03T16:25:32
2024-09-03T16:25:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5847", "html_url": "https://github.com/ollama/ollama/pull/5847", "diff_url": "https://github.com/ollama/ollama/pull/5847.diff", "patch_url": "https://github.com/ollama/ollama/pull/5847.patch", "merged_at": "2024-09-03T16:25:31" }
The docker image size is approximately reduced by 20MB after cleaning the apt caches.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5847/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5847/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5788
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5788/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5788/comments
https://api.github.com/repos/ollama/ollama/issues/5788/events
https://github.com/ollama/ollama/issues/5788
2,418,228,328
I_kwDOJ0Z1Ps6QIzxo
5,788
Support LoRA GGUF Adapters
{ "login": "suncloudsmoon", "id": 34616349, "node_id": "MDQ6VXNlcjM0NjE2MzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/34616349?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suncloudsmoon", "html_url": "https://github.com/suncloudsmoon", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-19T07:11:05
2024-09-19T21:15:52
2024-09-12T22:20:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Recently, [llama.cpp added support for LoRA GGUF adapters](https://github.com/ggerganov/llama.cpp/pull/8332), replacing the old GGML format. I would love to see this feature extended to Ollama if it's possible. Currently, Ollama only supports GGML adapters as shown in [```modelfile.md```](https://github.com/ollama/olla...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5788/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5788/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6102
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6102/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6102/comments
https://api.github.com/repos/ollama/ollama/issues/6102/events
https://github.com/ollama/ollama/pull/6102
2,440,578,534
PR_kwDOJ0Z1Ps53Bf3W
6,102
cmd: quantize progress
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[]
closed
false
null
[]
null
1
2024-07-31T17:49:55
2024-11-21T09:51:09
2024-11-21T09:51:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6102", "html_url": "https://github.com/ollama/ollama/pull/6102", "diff_url": "https://github.com/ollama/ollama/pull/6102.diff", "patch_url": "https://github.com/ollama/ollama/pull/6102.patch", "merged_at": null }
new PR because the old one was again stuck rebasing
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6102/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8381
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8381/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8381/comments
https://api.github.com/repos/ollama/ollama/issues/8381/events
https://github.com/ollama/ollama/pull/8381
2,781,408,916
PR_kwDOJ0Z1Ps6HZi2n
8,381
Explicit mention `ollama serve` will start a server, friendly for new users
{ "login": "deephbz", "id": 13776377, "node_id": "MDQ6VXNlcjEzNzc2Mzc3", "avatar_url": "https://avatars.githubusercontent.com/u/13776377?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deephbz", "html_url": "https://github.com/deephbz", "followers_url": "https://api.github.com/users/deephb...
[]
open
false
null
[]
null
0
2025-01-10T23:29:20
2025-01-10T23:29:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8381", "html_url": "https://github.com/ollama/ollama/pull/8381", "diff_url": "https://github.com/ollama/ollama/pull/8381.diff", "patch_url": "https://github.com/ollama/ollama/pull/8381.patch", "merged_at": null }
Different local LLM framework run differently. Some run as a stand-alone single while others run as server to serve API requests. New users are coming to Ollama and we could make it clear in this quick start tutorial.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8381/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3696
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3696/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3696/comments
https://api.github.com/repos/ollama/ollama/issues/3696/events
https://github.com/ollama/ollama/pull/3696
2,247,896,624
PR_kwDOJ0Z1Ps5s5u3L
3,696
supported openbmb/minicpm-2b-dpo
{ "login": "hadoop2xu", "id": 48076281, "node_id": "MDQ6VXNlcjQ4MDc2Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/48076281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hadoop2xu", "html_url": "https://github.com/hadoop2xu", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-04-17T10:02:01
2024-05-09T18:09:16
2024-05-09T18:09:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3696", "html_url": "https://github.com/ollama/ollama/pull/3696", "diff_url": "https://github.com/ollama/ollama/pull/3696.diff", "patch_url": "https://github.com/ollama/ollama/pull/3696.patch", "merged_at": null }
支持 openbmb/minicpm-2b-dpo 使用方法:ollama run modelbest/minicpm-2b-dpo 模型地址: https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3696/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3696/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7361
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7361/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7361/comments
https://api.github.com/repos/ollama/ollama/issues/7361/events
https://github.com/ollama/ollama/pull/7361
2,614,860,053
PR_kwDOJ0Z1Ps5_8UJt
7,361
Fix incremental build file deps
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-10-25T18:36:03
2024-10-25T18:50:48
2024-10-25T18:50:45
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7361", "html_url": "https://github.com/ollama/ollama/pull/7361", "diff_url": "https://github.com/ollama/ollama/pull/7361.diff", "patch_url": "https://github.com/ollama/ollama/pull/7361.patch", "merged_at": "2024-10-25T18:50:45" }
The common src/hdr defs should be in the common definitions, not gpu specific.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7361/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4199
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4199/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4199/comments
https://api.github.com/repos/ollama/ollama/issues/4199/events
https://github.com/ollama/ollama/issues/4199
2,280,846,843
I_kwDOJ0Z1Ps6H8vX7
4,199
support llama 3 Moe
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2024-05-06T13:08:32
2024-05-06T23:36:48
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
please support QuantFactory/Meta-Llama-3-120B-Instruct-GGUF raincandy-u/Llama-3-Aplite-Instruct-4x8B-GGUF-MoE
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4199/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/279
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/279/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/279/comments
https://api.github.com/repos/ollama/ollama/issues/279/events
https://github.com/ollama/ollama/issues/279
1,836,703,477
I_kwDOJ0Z1Ps5ted71
279
Files and folders in .ollama aren't getting cleaned up
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2023-08-04T12:59:21
2023-10-23T16:29:53
2023-10-23T16:29:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I created a sentiments modelfile and the blobs and manifests folders were populated. Then i deleted that model, and the files were removed but the folders under manifests were not. Then I noticed that sentiments uses orca and doesn't specify a SYSTEM instruction, so inherits it from orca. So updated the modelfile to...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/279/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1888
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1888/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1888/comments
https://api.github.com/repos/ollama/ollama/issues/1888/events
https://github.com/ollama/ollama/issues/1888
2,073,926,657
I_kwDOJ0Z1Ps57nZwB
1,888
nvmlInit_v2 unable to detect Nvidia GPU in WSL
{ "login": "taweili", "id": 6722, "node_id": "MDQ6VXNlcjY3MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/6722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taweili", "html_url": "https://github.com/taweili", "followers_url": "https://api.github.com/users/taweili/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-01-10T09:16:33
2024-01-10T23:21:58
2024-01-10T23:21:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ollama has switched to using [NVML](https://developer.nvidia.com/nvidia-management-library-nvml) to detect the Nvidia environment. However, this method failed on WSL. Here is a short C code to validate the behavior. The `nvmlReturn_t` returns 9 [NVML_ERROR_DRIVER_NOT_LOADED = 9](https://docs.nvidia.com/deploy/nvml-...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1888/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8602
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8602/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8602/comments
https://api.github.com/repos/ollama/ollama/issues/8602/events
https://github.com/ollama/ollama/issues/8602
2,812,269,390
I_kwDOJ0Z1Ps6nn9NO
8,602
Deepseek-R1 671B - Segmentation Fault Bug
{ "login": "Notbici", "id": 196611455, "node_id": "U_kgDOC7gNfw", "avatar_url": "https://avatars.githubusercontent.com/u/196611455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Notbici", "html_url": "https://github.com/Notbici", "followers_url": "https://api.github.com/users/Notbici/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2025-01-27T07:27:36
2025-01-28T11:09:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I've been using the Deepseek-R1 671B model from Ollama on my 8x H100 machine and keep running into a segmentation fault, I've noticed that the frequency of the segfault happens the larger the context becomes. I'm using the latest Ollama release. Hardware Specs: - 8x H100 - 80GB SXM - Xeon ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8602/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7339
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7339/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7339/comments
https://api.github.com/repos/ollama/ollama/issues/7339/events
https://github.com/ollama/ollama/issues/7339
2,610,352,358
I_kwDOJ0Z1Ps6bltDm
7,339
Error: an unknown error was encountered while running the model
{ "login": "ipsmile", "id": 28075439, "node_id": "MDQ6VXNlcjI4MDc1NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/28075439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ipsmile", "html_url": "https://github.com/ipsmile", "followers_url": "https://api.github.com/users/ipsmil...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
4
2024-10-24T04:05:02
2024-10-31T18:20:08
2024-10-31T18:20:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? $ ollama run deepseek-coder-v2 pulling manifest pulling 5ff0abeeac1d... 100% ▕████████████████▏ 8.9 GB pulling 22091531faf0... 100% ▕████████████████▏ 705 B pulling 4bb71764481f... 100% ▕████████████████▏ 13 KB pu...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7339/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6874/comments
https://api.github.com/repos/ollama/ollama/issues/6874/events
https://github.com/ollama/ollama/issues/6874
2,535,612,824
I_kwDOJ0Z1Ps6XImGY
6,874
Unable to pull models behind the proxy on windows
{ "login": "WeiguangHan", "id": 109776541, "node_id": "U_kgDOBosOnQ", "avatar_url": "https://avatars.githubusercontent.com/u/109776541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WeiguangHan", "html_url": "https://github.com/WeiguangHan", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXU...
closed
false
null
[]
null
3
2024-09-19T08:11:57
2024-09-20T23:34:26
2024-09-20T23:34:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
``` PS C:\Users\Administrator> set HTTPS_PROXY=http://child-prc.intel.com:913 PS C:\Users\Administrator> set https_proxy=http://child-prc.intel.com:913 PS C:\Users\Administrator> ollama run qwen2.5:7b pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen2.5/manifests/7b": d...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6874/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/6874/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6963
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6963/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6963/comments
https://api.github.com/repos/ollama/ollama/issues/6963/events
https://github.com/ollama/ollama/pull/6963
2,548,755,009
PR_kwDOJ0Z1Ps58stLp
6,963
llama3.2 vision support
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
10
2024-09-25T18:57:57
2024-10-22T14:04:35
2024-10-18T23:12:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6963", "html_url": "https://github.com/ollama/ollama/pull/6963", "diff_url": "https://github.com/ollama/ollama/pull/6963.diff", "patch_url": "https://github.com/ollama/ollama/pull/6963.patch", "merged_at": "2024-10-18T23:12:35" }
Image processing routines for being able to run llama3.2. This will need to be refactored at some point to support other multimodal models as well. EDIT: This now includes all of the code for getting vision support to work, and not just the image processing routines. It's still not 100% though, but good enough to...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6963/reactions", "total_count": 120, "+1": 65, "-1": 0, "laugh": 0, "hooray": 27, "confused": 0, "heart": 3, "rocket": 25, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6963/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/443
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/443/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/443/comments
https://api.github.com/repos/ollama/ollama/issues/443/events
https://github.com/ollama/ollama/pull/443
1,874,156,921
PR_kwDOJ0Z1Ps5ZKnUl
443
windows: fix filepath bugs
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-30T18:16:07
2023-08-31T21:19:11
2023-08-31T21:19:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/443", "html_url": "https://github.com/ollama/ollama/pull/443", "diff_url": "https://github.com/ollama/ollama/pull/443.diff", "patch_url": "https://github.com/ollama/ollama/pull/443.patch", "merged_at": "2023-08-31T21:19:10" }
List and Delete has the same issue where the path was constructed using Linux/macOS path separators which does not work in Windows. This PR fixes and simplifies the code. Fix `filenameWithPath` which also assumes a Linux/macOS path separator when looking for `~`. Use `filenameWithPath` to resolve adapter filepath
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/443/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7510
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7510/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7510/comments
https://api.github.com/repos/ollama/ollama/issues/7510/events
https://github.com/ollama/ollama/issues/7510
2,635,471,331
I_kwDOJ0Z1Ps6dFhnj
7,510
Add support for function call (response back) (message.role=tool)
{ "login": "RogerBarreto", "id": 19890735, "node_id": "MDQ6VXNlcjE5ODkwNzM1", "avatar_url": "https://avatars.githubusercontent.com/u/19890735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RogerBarreto", "html_url": "https://github.com/RogerBarreto", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
2
2024-11-05T13:29:06
2024-12-06T17:43:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
# Add support for function call (response back) 1. Currently there's no support for sending back the function call result to the model using the `role=tool` messages. 2. Using the native API (not openai), function tool calls don't have an identifier associated `tool_call_id`, this is present in the `openai` API, an...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7510/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7510/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4724
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4724/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4724/comments
https://api.github.com/repos/ollama/ollama/issues/4724/events
https://github.com/ollama/ollama/issues/4724
2,325,983,681
I_kwDOJ0Z1Ps6Ko7HB
4,724
empty response
{ "login": "themw123", "id": 80266862, "node_id": "MDQ6VXNlcjgwMjY2ODYy", "avatar_url": "https://avatars.githubusercontent.com/u/80266862?v=4", "gravatar_id": "", "url": "https://api.github.com/users/themw123", "html_url": "https://github.com/themw123", "followers_url": "https://api.github.com/users/the...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-05-30T15:37:14
2025-01-25T17:25:51
2024-09-12T23:19:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am getting an empty response string with llama3:8b. Different models like mistral-instruct are working fine. Setup: - windows 11 - newest ollama version - llama3:8b(latest) When context gets to high (approximately after exchanging 20 question/answer pairs) by appending the history wi...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4724/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4724/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8683/comments
https://api.github.com/repos/ollama/ollama/issues/8683/events
https://github.com/ollama/ollama/issues/8683
2,819,701,999
I_kwDOJ0Z1Ps6oETzv
8,683
Support release build without AVX
{ "login": "yoonsio", "id": 24367477, "node_id": "MDQ6VXNlcjI0MzY3NDc3", "avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoonsio", "html_url": "https://github.com/yoonsio", "followers_url": "https://api.github.com/users/yoonsi...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
0
2025-01-30T01:34:51
2025-01-30T02:13:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Release image fails to detect the GPU when running on a CPU that does not support AVX. Please add a non-AVX release build to the release pipeline. ``` msg="Dynamic LLM libraries" runners="[cpu_avx cpu cpu_avx2]" ``` Custom image can be built by overriding `CUSTOM_CPU_FLAGS`. #### Example: ``` docker build --platform li...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8683/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8683/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7431
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7431/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7431/comments
https://api.github.com/repos/ollama/ollama/issues/7431/events
https://github.com/ollama/ollama/pull/7431
2,625,485,479
PR_kwDOJ0Z1Ps6AdJ8D
7,431
Add Perfect Memory AI to community integrations
{ "login": "DariusKocar", "id": 60488234, "node_id": "MDQ6VXNlcjYwNDg4MjM0", "avatar_url": "https://avatars.githubusercontent.com/u/60488234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DariusKocar", "html_url": "https://github.com/DariusKocar", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-10-30T22:22:53
2024-11-17T23:19:26
2024-11-17T23:19:26
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7431", "html_url": "https://github.com/ollama/ollama/pull/7431", "diff_url": "https://github.com/ollama/ollama/pull/7431.diff", "patch_url": "https://github.com/ollama/ollama/pull/7431.patch", "merged_at": "2024-11-17T23:19:26" }
I added Perfect Memory AI to community integrations. Perfect Memory uses Ollama as an AI provider for offline inference. https://www.perfectmemory.ai/support/ai-assistant/ollama-setup
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7431/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2195
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2195/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2195/comments
https://api.github.com/repos/ollama/ollama/issues/2195/events
https://github.com/ollama/ollama/pull/2195
2,101,357,601
PR_kwDOJ0Z1Ps5lHcg9
2,195
Ignore AMD integrated GPUs
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
6
2024-01-26T00:02:11
2024-07-02T04:08:16
2024-01-26T17:30:24
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2195", "html_url": "https://github.com/ollama/ollama/pull/2195", "diff_url": "https://github.com/ollama/ollama/pull/2195.diff", "patch_url": "https://github.com/ollama/ollama/pull/2195.patch", "merged_at": "2024-01-26T17:30:24" }
Fixes #2054 Integrated GPUs (APUs) from AMD may be reported by ROCm, but we can't run on them with our current llama.cpp configuration. These iGPUs report 512M of memory, so I've coded the check to ignore any ROCm reported GPU that has less than 1G of memory. If we detect only an integrated GPU, this will fallbac...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2195/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2195/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6753
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6753/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6753/comments
https://api.github.com/repos/ollama/ollama/issues/6753/events
https://github.com/ollama/ollama/issues/6753
2,519,594,795
I_kwDOJ0Z1Ps6WLfcr
6,753
`image_url` support for vision models
{ "login": "madroidmaq", "id": 6247142, "node_id": "MDQ6VXNlcjYyNDcxNDI=", "avatar_url": "https://avatars.githubusercontent.com/u/6247142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madroidmaq", "html_url": "https://github.com/madroidmaq", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
3
2024-09-11T12:20:03
2024-11-25T21:18:50
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? curl: ```py curl http://localhost:11434/v1/chat/completions \ -H "Content-Type: application/json" \ -H "Authorization: Bearer OPENAI_API_KEY" \ -d '{ "model": "minicpm-v:8b-2.6-fp16", "messages": [ { "role": "user", "content": [ { ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6753/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6753/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6682
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6682/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6682/comments
https://api.github.com/repos/ollama/ollama/issues/6682/events
https://github.com/ollama/ollama/pull/6682
2,511,323,740
PR_kwDOJ0Z1Ps56ttgy
6,682
Remove go server debug logging
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-09-06T23:47:54
2024-09-07T00:05:14
2024-09-07T00:05:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6682", "html_url": "https://github.com/ollama/ollama/pull/6682", "diff_url": "https://github.com/ollama/ollama/pull/6682.diff", "patch_url": "https://github.com/ollama/ollama/pull/6682.patch", "merged_at": "2024-09-07T00:05:13" }
null
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6682/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5319
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5319/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5319/comments
https://api.github.com/repos/ollama/ollama/issues/5319/events
https://github.com/ollama/ollama/issues/5319
2,377,608,378
I_kwDOJ0Z1Ps6Nt2y6
5,319
Fine-tuned model responding incorrectly to my prompts
{ "login": "giannisak", "id": 154079765, "node_id": "U_kgDOCS8SFQ", "avatar_url": "https://avatars.githubusercontent.com/u/154079765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/giannisak", "html_url": "https://github.com/giannisak", "followers_url": "https://api.github.com/users/gianni...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
3
2024-06-27T09:14:30
2024-09-16T18:51:00
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm having an issue with my fine-tuned model. It doesn't respond to my prompts correctly and instead generates unrelated outputs. It seems like the model is making up its own user input, then replying to this instead of my actual input. ## Example: ### My Input: `Hi! Who are you?` ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5319/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5319/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4891
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4891/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4891/comments
https://api.github.com/repos/ollama/ollama/issues/4891/events
https://github.com/ollama/ollama/issues/4891
2,339,461,478
I_kwDOJ0Z1Ps6LcVlm
4,891
Under NVIDIA's latest driver: version 555.99, any model will only run on the CPU.
{ "login": "despairTK", "id": 111871110, "node_id": "U_kgDOBqsEhg", "avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/despairTK", "html_url": "https://github.com/despairTK", "followers_url": "https://api.github.com/users/despai...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-06-07T02:29:55
2024-06-07T03:18:13
2024-06-07T03:18:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I updated to the latest NVIDIA driver: version 555.99, any model would only run on the CPU and the GPU would not work at all. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.41
{ "login": "despairTK", "id": 111871110, "node_id": "U_kgDOBqsEhg", "avatar_url": "https://avatars.githubusercontent.com/u/111871110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/despairTK", "html_url": "https://github.com/despairTK", "followers_url": "https://api.github.com/users/despai...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4891/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1467
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1467/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1467/comments
https://api.github.com/repos/ollama/ollama/issues/1467/events
https://github.com/ollama/ollama/issues/1467
2,036,064,737
I_kwDOJ0Z1Ps55W-Hh
1,467
REST API : /api/chat endpoint not working
{ "login": "slovanos", "id": 48527469, "node_id": "MDQ6VXNlcjQ4NTI3NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/48527469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slovanos", "html_url": "https://github.com/slovanos", "followers_url": "https://api.github.com/users/slo...
[]
closed
false
null
[]
null
4
2023-12-11T16:32:30
2024-03-30T19:19:37
2023-12-11T16:58:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Refering to the the examples of the main page: ## Generate a response: Works perfectly ``` curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt":"Why is the sky blue?" }' ``` ## Chat with a model: Not Working ### Response is "404 page not found" ``` curl http://localhost:11434/ap...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1467/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3932
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3932/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3932/comments
https://api.github.com/repos/ollama/ollama/issues/3932/events
https://github.com/ollama/ollama/issues/3932
2,264,970,719
I_kwDOJ0Z1Ps6HALXf
3,932
ERROR: NO SUCH HOST
{ "login": "Jinish2170", "id": 121560356, "node_id": "U_kgDOBz7dJA", "avatar_url": "https://avatars.githubusercontent.com/u/121560356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jinish2170", "html_url": "https://github.com/Jinish2170", "followers_url": "https://api.github.com/users/Jin...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-26T05:05:29
2024-05-30T06:26:50
2024-05-01T21:24:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? pulling manifest Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/97/970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(M...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3932/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4076
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4076/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4076/comments
https://api.github.com/repos/ollama/ollama/issues/4076/events
https://github.com/ollama/ollama/issues/4076
2,273,380,884
I_kwDOJ0Z1Ps6HgQoU
4,076
MoonDream:Latest Not Working
{ "login": "rb81", "id": 48117105, "node_id": "MDQ6VXNlcjQ4MTE3MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rb81", "html_url": "https://github.com/rb81", "followers_url": "https://api.github.com/users/rb81/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-01T11:48:03
2024-10-30T15:19:06
2024-05-01T18:20:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When running moondream:latest, the following error message is received: ``` Error: llama runner process no longer running: -1 ``` Tried running the model from CLI using `ollama serve` as well as the desktop application. Tried using the model form CLI as well as Open-WebUI. Same result ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4076/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7183
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7183/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7183/comments
https://api.github.com/repos/ollama/ollama/issues/7183/events
https://github.com/ollama/ollama/issues/7183
2,582,817,318
I_kwDOJ0Z1Ps6Z8qom
7,183
Failed to update all the models downloaded locally
{ "login": "qzc438", "id": 61488260, "node_id": "MDQ6VXNlcjYxNDg4MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qzc438", "html_url": "https://github.com/qzc438", "followers_url": "https://api.github.com/users/qzc438/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-10-12T10:51:34
2024-10-13T04:57:23
2024-10-13T04:57:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I use this code in the latest version of ollama: `ollama list | cut -f 1 | tail -n +2 | xargs -n 1 ollama pull` There is an error message: pulling manifest Error: pull model manifest: file does not exist ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama v...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7183/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4781
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4781/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4781/comments
https://api.github.com/repos/ollama/ollama/issues/4781/events
https://github.com/ollama/ollama/issues/4781
2,329,591,935
I_kwDOJ0Z1Ps6K2sB_
4,781
ollama not show my model.
{ "login": "tuantupharma", "id": 35091001, "node_id": "MDQ6VXNlcjM1MDkxMDAx", "avatar_url": "https://avatars.githubusercontent.com/u/35091001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuantupharma", "html_url": "https://github.com/tuantupharma", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
2
2024-06-02T10:53:12
2024-07-11T15:23:26
2024-07-11T15:23:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama working well with chatbox but when i install open-webui at first it working well but after few day ollama forget all of my model. the model file still in my ssd but ollama not detect them!, i pull model again but one again ollama not detect them in 2 days ### OS Windows ### GPU Nvidia...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4781/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2442
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2442/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2442/comments
https://api.github.com/repos/ollama/ollama/issues/2442/events
https://github.com/ollama/ollama/issues/2442
2,128,334,996
I_kwDOJ0Z1Ps5-29CU
2,442
Error: unable to initialize llm library Radeon card detected, but permissions not set up properly. Either run ollama as root, or add you user account to the render group.
{ "login": "pladaria", "id": 579417, "node_id": "MDQ6VXNlcjU3OTQxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/579417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pladaria", "html_url": "https://github.com/pladaria", "followers_url": "https://api.github.com/users/pladari...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-02-10T11:29:38
2024-03-12T02:08:44
2024-03-11T23:31:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm unable to run ollama. My setup: * OS: Linux * CPU+GPU: AMD Ryzen 3 2200G with Radeon Vega Graphics * GPU: nVidia Tesla P40 - 24GB RAM ``` $ ollama serve time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go:863 msg="total blobs: 0" time=2024-02-10T12:21:38.851+01:00 level=INFO source=images.go...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2442/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4410
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4410/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4410/comments
https://api.github.com/repos/ollama/ollama/issues/4410/events
https://github.com/ollama/ollama/issues/4410
2,293,775,494
I_kwDOJ0Z1Ps6IuDyG
4,410
Inconsistent punctuation in `ollama serve -h`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[ { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api....
null
0
2024-05-13T20:22:52
2024-05-13T22:30:47
2024-05-13T22:30:47
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` Environment Variables: OLLAMA_HOST The host:port to bind to (default "127.0.0.1:11434") OLLAMA_ORIGINS A comma separated list of allowed origins. OLLAMA_MODELS The path to the models directory (default is "~/.ollama/models") OLLAMA_KEEP_ALIVE The d...
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4410/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4415
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4415/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4415/comments
https://api.github.com/repos/ollama/ollama/issues/4415/events
https://github.com/ollama/ollama/pull/4415
2,294,112,351
PR_kwDOJ0Z1Ps5vVETA
4,415
update the FAQ to be more clear about windows env variables
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-05-14T01:00:41
2024-05-14T01:01:14
2024-05-14T01:01:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4415", "html_url": "https://github.com/ollama/ollama/pull/4415", "diff_url": "https://github.com/ollama/ollama/pull/4415.diff", "patch_url": "https://github.com/ollama/ollama/pull/4415.patch", "merged_at": "2024-05-14T01:01:13" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4415/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4576
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4576/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4576/comments
https://api.github.com/repos/ollama/ollama/issues/4576/events
https://github.com/ollama/ollama/issues/4576
2,310,708,658
I_kwDOJ0Z1Ps6Jup2y
4,576
Tried Agentic chucking using Ollama but got error
{ "login": "arunkumarm-git", "id": 170125746, "node_id": "U_kgDOCiPpsg", "avatar_url": "https://avatars.githubusercontent.com/u/170125746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunkumarm-git", "html_url": "https://github.com/arunkumarm-git", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-22T14:30:20
2024-05-22T14:30:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Code: from langchain_community.llms import Ollama from langchain.chains import create_extraction_chain_pydantic from langchain_core.pydantic_v1 import BaseModel from typing import Optional, List llm = Ollama(model='llama3') from langchain import hub prompt = hub.pull("wfh/proposal-index...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4576/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1984
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1984/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1984/comments
https://api.github.com/repos/ollama/ollama/issues/1984/events
https://github.com/ollama/ollama/pull/1984
2,080,568,798
PR_kwDOJ0Z1Ps5kA--8
1,984
req
{ "login": "leotamminen", "id": 122639748, "node_id": "U_kgDOB09VhA", "avatar_url": "https://avatars.githubusercontent.com/u/122639748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leotamminen", "html_url": "https://github.com/leotamminen", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-01-14T03:50:13
2024-01-14T03:50:32
2024-01-14T03:50:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1984", "html_url": "https://github.com/ollama/ollama/pull/1984", "diff_url": "https://github.com/ollama/ollama/pull/1984.diff", "patch_url": "https://github.com/ollama/ollama/pull/1984.patch", "merged_at": null }
null
{ "login": "leotamminen", "id": 122639748, "node_id": "U_kgDOB09VhA", "avatar_url": "https://avatars.githubusercontent.com/u/122639748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leotamminen", "html_url": "https://github.com/leotamminen", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1984/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1305
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1305/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1305/comments
https://api.github.com/repos/ollama/ollama/issues/1305/events
https://github.com/ollama/ollama/issues/1305
2,014,737,519
I_kwDOJ0Z1Ps54FnRv
1,305
Flatpak package for Linux
{ "login": "rugk", "id": 11966684, "node_id": "MDQ6VXNlcjExOTY2Njg0", "avatar_url": "https://avatars.githubusercontent.com/u/11966684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rugk", "html_url": "https://github.com/rugk", "followers_url": "https://api.github.com/users/rugk/followers"...
[]
closed
false
null
[]
null
2
2023-11-28T15:40:53
2023-12-05T20:18:58
2023-11-28T21:55:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be nice if you could publish this as a [flatpak](https://flatpak.org/) on [flathub](https://flathub.org/) e.g. Flatpaks are a new software distribution mechanism for Linux distros, can thus installed on any distro and are easy to update. They are easy to install _and_ update and work on all Linux distros. ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1305/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1305/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/469
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/469/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/469/comments
https://api.github.com/repos/ollama/ollama/issues/469/events
https://github.com/ollama/ollama/pull/469
1,882,647,556
PR_kwDOJ0Z1Ps5ZnDAK
469
metal: add missing barriers for mul-mat
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-09-05T20:08:44
2023-09-05T23:37:14
2023-09-05T23:37:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/469", "html_url": "https://github.com/ollama/ollama/pull/469", "diff_url": "https://github.com/ollama/ollama/pull/469.diff", "patch_url": "https://github.com/ollama/ollama/pull/469.patch", "merged_at": "2023-09-05T23:37:13" }
port https://github.com/ggerganov/llama.cpp/pull/2699 to fix null response on generate
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/469/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2682
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2682/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2682/comments
https://api.github.com/repos/ollama/ollama/issues/2682/events
https://github.com/ollama/ollama/issues/2682
2,149,357,434
I_kwDOJ0Z1Ps6AHJd6
2,682
Windows - Serve Mode - Need to Ctrl-C or Right Click the CMD prompt from time to time to keep things moving
{ "login": "Shawneau", "id": 51348013, "node_id": "MDQ6VXNlcjUxMzQ4MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/51348013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shawneau", "html_url": "https://github.com/Shawneau", "followers_url": "https://api.github.com/users/Sha...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-02-22T15:40:30
2024-03-12T00:14:53
2024-03-12T00:14:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running open web ui and every once and a while Ollama's cmd prompt in serve mode just stops doing anything, not a crash, it's still up, but I need to ctrl-c or right click in the window to get it moving again. Any idea why? <img width="537" alt="image" src="https://github.com/ollama/ollama/assets/51348013/9116654d...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2682/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7745
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7745/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7745/comments
https://api.github.com/repos/ollama/ollama/issues/7745/events
https://github.com/ollama/ollama/issues/7745
2,673,076,302
I_kwDOJ0Z1Ps6fU-hO
7,745
gpu VRAM usage didn't recover within timeout on llama3.2-vision:90b
{ "login": "ergosumdre", "id": 35677602, "node_id": "MDQ6VXNlcjM1Njc3NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/35677602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ergosumdre", "html_url": "https://github.com/ergosumdre", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-19T18:10:27
2024-11-19T22:29:40
2024-11-19T22:29:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm encountering a 'timed out waiting for llama runner to start' error when executing the following command: `ollama run llama3.2-vision:90b` I have 64GB of VRAM, and I’m able to run other models without any issues. However, this specific model doesn’t seem to work. Here are the server ...
{ "login": "ergosumdre", "id": 35677602, "node_id": "MDQ6VXNlcjM1Njc3NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/35677602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ergosumdre", "html_url": "https://github.com/ergosumdre", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7745/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8639
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8639/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8639/comments
https://api.github.com/repos/ollama/ollama/issues/8639/events
https://github.com/ollama/ollama/pull/8639
2,815,856,245
PR_kwDOJ0Z1Ps6JPIfp
8,639
Enable using rocm/dev-almalinux images for unified-builder-amd64
{ "login": "michaelburch", "id": 13478210, "node_id": "MDQ6VXNlcjEzNDc4MjEw", "avatar_url": "https://avatars.githubusercontent.com/u/13478210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelburch", "html_url": "https://github.com/michaelburch", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
0
2025-01-28T14:33:29
2025-01-28T15:32:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8639", "html_url": "https://github.com/ollama/ollama/pull/8639", "diff_url": "https://github.com/ollama/ollama/pull/8639.diff", "patch_url": "https://github.com/ollama/ollama/pull/8639.patch", "merged_at": null }
Adds build args and dependency updates to support using rocm/dev-almalinux images for unified-builder-amd64. ARG RHEL_VERSION=8 ARG RHEL_VARIANT=almalinux-${RHEL_VERSION} Can be used with the following to build docker images with the latest ROCM library: ARG ROCM_VERSION=6.3.1
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8639/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5125
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5125/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5125/comments
https://api.github.com/repos/ollama/ollama/issues/5125/events
https://github.com/ollama/ollama/pull/5125
2,360,951,586
PR_kwDOJ0Z1Ps5y4v6j
5,125
Bump latest fedora cuda repo to 39
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-06-19T00:15:15
2024-06-20T18:27:27
2024-06-20T18:27:24
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5125", "html_url": "https://github.com/ollama/ollama/pull/5125", "diff_url": "https://github.com/ollama/ollama/pull/5125.diff", "patch_url": "https://github.com/ollama/ollama/pull/5125.patch", "merged_at": "2024-06-20T18:27:24" }
Fixes #5062 Fedora39 is now the latest. https://developer.download.nvidia.com/compute/cuda/repos/
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5125/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7853
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7853/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7853/comments
https://api.github.com/repos/ollama/ollama/issues/7853/events
https://github.com/ollama/ollama/issues/7853
2,697,243,627
I_kwDOJ0Z1Ps6gxKvr
7,853
embeding api issue
{ "login": "sycbbyes", "id": 15940789, "node_id": "MDQ6VXNlcjE1OTQwNzg5", "avatar_url": "https://avatars.githubusercontent.com/u/15940789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sycbbyes", "html_url": "https://github.com/sycbbyes", "followers_url": "https://api.github.com/users/syc...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-11-27T06:03:41
2024-12-05T07:44:33
2024-12-05T07:44:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? when call ollama 、api/embed, there is python error with message: ollama-webui | 2024-11-27T04:25:57.120363066Z ValueError: [TypeError("'coroutine' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')] ollama-webui | 2024-11-27T04:26:04.202107805Z INFO: ...
{ "login": "sycbbyes", "id": 15940789, "node_id": "MDQ6VXNlcjE1OTQwNzg5", "avatar_url": "https://avatars.githubusercontent.com/u/15940789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sycbbyes", "html_url": "https://github.com/sycbbyes", "followers_url": "https://api.github.com/users/syc...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7853/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7716
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7716/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7716/comments
https://api.github.com/repos/ollama/ollama/issues/7716/events
https://github.com/ollama/ollama/issues/7716
2,666,898,598
I_kwDOJ0Z1Ps6e9aSm
7,716
Feature suggestions and development compilation environment issues
{ "login": "mingyue0094", "id": 63558866, "node_id": "MDQ6VXNlcjYzNTU4ODY2", "avatar_url": "https://avatars.githubusercontent.com/u/63558866?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mingyue0094", "html_url": "https://github.com/mingyue0094", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
3
2024-11-18T02:43:59
2024-11-20T19:43:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Wish: 1. set env avx=0 will automatically try to use Nvidia gpu 2. On this repository page, press `.` to enter a complete development environment to modify code, compile, download files, and run tests. Configuring this development environment is so complicated and difficult. Good luck to you ------- set env ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7716/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3543
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3543/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3543/comments
https://api.github.com/repos/ollama/ollama/issues/3543/events
https://github.com/ollama/ollama/issues/3543
2,232,381,369
I_kwDOJ0Z1Ps6FD2-5
3,543
Conversion Script
{ "login": "scefali", "id": 8533851, "node_id": "MDQ6VXNlcjg1MzM4NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8533851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scefali", "html_url": "https://github.com/scefali", "followers_url": "https://api.github.com/users/scefali/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-04-09T00:22:18
2024-04-24T18:39:07
2024-04-24T18:39:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am trying to run the conversion script as shown in the example for conversion to gguf. ### What did you expect to see? ``` python llm/llama.cpp/convert.py ./model --outtype f16 --outfile converted.bin Loading model file model/model-00001-of-00002.safetensors Traceback...
{ "login": "scefali", "id": 8533851, "node_id": "MDQ6VXNlcjg1MzM4NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8533851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scefali", "html_url": "https://github.com/scefali", "followers_url": "https://api.github.com/users/scefali/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3543/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3464
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3464/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3464/comments
https://api.github.com/repos/ollama/ollama/issues/3464/events
https://github.com/ollama/ollama/pull/3464
2,221,593,829
PR_kwDOJ0Z1Ps5rfusN
3,464
Fix numgpu opt miscomparison
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-04-02T22:58:56
2024-04-03T03:10:20
2024-04-03T03:10:17
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3464", "html_url": "https://github.com/ollama/ollama/pull/3464", "diff_url": "https://github.com/ollama/ollama/pull/3464.diff", "patch_url": "https://github.com/ollama/ollama/pull/3464.patch", "merged_at": "2024-04-03T03:10:17" }
opts are now a pointer which means we incorrectly reloaded the model when the actual layers loaded didn't match the input request
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3464/timeline
null
null
true