Dataset schema (column: dtype, observed range):

url: string, lengths 51–54
repository_url: string, 1 value
labels_url: string, lengths 65–68
comments_url: string, lengths 60–63
events_url: string, lengths 58–61
html_url: string, lengths 39–44
id: int64, 1.78B–2.82B
node_id: string, lengths 18–19
number: int64, 1–8.69k
title: string, lengths 1–382
user: dict
labels: list, lengths 0–5
state: string, 2 values
locked: bool, 1 class
assignee: dict
assignees: list, lengths 0–2
milestone: null
comments: int64, 0–323
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string, 4 values
sub_issues_summary: dict
active_lock_reason: null
draft: bool, 2 classes
pull_request: dict
body: string, lengths 2–118k
closed_by: dict
reactions: dict
timeline_url: string, lengths 60–63
performed_via_github_app: null
state_reason: string, 4 values
is_pull_request: bool, 2 classes
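
Assuming this dump comes from a Hugging Face dataset of ollama/ollama GitHub issues, a minimal sketch of loading it and inspecting the columns above with the `datasets` library; the dataset ID below is a hypothetical placeholder, not a real repository:

```python
# Minimal sketch: load a GitHub-issues dataset with the Hugging Face
# `datasets` library and inspect the schema listed above.
# "your-org/ollama-github-issues" is a placeholder, not a real dataset ID.
from datasets import load_dataset

ds = load_dataset("your-org/ollama-github-issues", split="train")

print(ds.features)   # column names and dtypes, matching the schema above
row = ds[0]          # one record: a dict keyed by the columns above
print(row["html_url"], row["state"], row["is_pull_request"])
```
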
url: https://api.github.com/repos/ollama/ollama/issues/6604
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6604/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6604/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6604/events
html_url: https://github.com/ollama/ollama/issues/6604
id: 2,502,317,787
node_id: I_kwDOJ0Z1Ps6VJlbb
number: 6,604
title: Report better error message on old drivers (show detected version and minimum requirement)
user: { "login": "my106", "id": 77132705, "node_id": "MDQ6VXNlcjc3MTMyNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/77132705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/my106", "html_url": "https://github.com/my106", "followers_url": "https://api.github.com/users/my106/follow...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6430601766, "node_id": ...
state: open
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 4
created_at: 2024-09-03T09:02:14
updated_at: 2024-09-05T15:45:01
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? [log.txt](https://github.com/user-attachments/files/16846086/log.txt) This is a log ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.9
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6604/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
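
Each record pairs `state` with the three timestamp columns; as a worked example, a minimal sketch (plain Python, values copied from the record above) of turning them into an issue age or time-to-close:

```python
# Sketch: compute how long a record like the one above has been open.
# Timestamps are ISO-8601 strings; closed_at is None while state is "open".
from datetime import datetime, timezone

record = {
    "state": "open",
    "created_at": "2024-09-03T09:02:14",
    "closed_at": None,
}

created = datetime.fromisoformat(record["created_at"]).replace(tzinfo=timezone.utc)
if record["state"] == "closed" and record["closed_at"]:
    end = datetime.fromisoformat(record["closed_at"]).replace(tzinfo=timezone.utc)
else:
    end = datetime.now(timezone.utc)  # still open: measure age instead
print(f"open for {end - created}")
```
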
url: https://api.github.com/repos/ollama/ollama/issues/2509
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/2509/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/2509/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/2509/events
html_url: https://github.com/ollama/ollama/pull/2509
id: 2,135,649,983
node_id: PR_kwDOJ0Z1Ps5m7eiB
number: 2,509
title: handle race condition while setting raw mode in windows
user: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-02-15T05:02:22
updated_at: 2024-02-15T05:28:35
closed_at: 2024-02-15T05:28:35
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/2509", "html_url": "https://github.com/ollama/ollama/pull/2509", "diff_url": "https://github.com/ollama/ollama/pull/2509.diff", "patch_url": "https://github.com/ollama/ollama/pull/2509.patch", "merged_at": "2024-02-15T05:28:35" }
body: This change handles a race condition in the go routine which handles reading in runes. On Windows "raw mode" (i.e. turning off echo/line/processed input) gets turned off too late which would cause `ReadRune()` to wait until the buffer was full (when it got a new line). This change goes into raw mode faster, but it stil...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/2509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/2509/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/6601
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6601/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6601/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6601/events
html_url: https://github.com/ollama/ollama/issues/6601
id: 2,502,017,756
node_id: I_kwDOJ0Z1Ps6VIcLc
number: 6,601
title: when i try to visit https://xxxxxxxx.com/api/chat,it is very slow
user: { "login": "lessuit", "id": 52142616, "node_id": "MDQ6VXNlcjUyMTQyNjE2", "avatar_url": "https://avatars.githubusercontent.com/u/52142616?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lessuit", "html_url": "https://github.com/lessuit", "followers_url": "https://api.github.com/users/lessui...
labels: [ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 3
created_at: 2024-09-03T06:23:13
updated_at: 2024-10-01T07:45:27
closed_at: 2024-09-04T01:09:49
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? there is docker logs: time=2024-09-03T06:16:48.144Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb gpu=GPU-1a66ac9e-e1b3-db2b-4e52-f54d2f3...
closed_by: { "login": "lessuit", "id": 52142616, "node_id": "MDQ6VXNlcjUyMTQyNjE2", "avatar_url": "https://avatars.githubusercontent.com/u/52142616?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lessuit", "html_url": "https://github.com/lessuit", "followers_url": "https://api.github.com/users/lessui...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6601/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/733
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/733/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/733/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/733/events
html_url: https://github.com/ollama/ollama/issues/733
id: 1,931,631,321
node_id: I_kwDOJ0Z1Ps5zIlrZ
number: 733
title: where is everything?
user: { "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 17
created_at: 2023-10-08T04:06:42
updated_at: 2024-12-07T17:22:09
closed_at: 2023-12-04T20:30:13
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: I don't use Docker so maybe there are obvious answers that I don't know. I've downloaded the install from the website and it put it in the /usr/local/bin directory. Not my first choice. For testing software I want to put it in a user directory. It ran find and pulled the mistrel models. Only thing is, I've already g...
closed_by: { "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/733/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/652
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/652/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/652/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/652/events
html_url: https://github.com/ollama/ollama/issues/652
id: 1,919,987,277
node_id: I_kwDOJ0Z1Ps5ycK5N
number: 652
title: Failed to build `Dockerfile`: `unknown flag -ldflags -w -s`
user: { "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 6
created_at: 2023-09-29T22:44:08
updated_at: 2023-11-10T05:07:53
closed_at: 2023-09-30T20:34:02
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: On an AWS EC2 `g4dn.2xlarge` instance (Ubuntu 22.04.2 LTS) with Ollama [a1b2d95](https://github.com/jmorganca/ollama/tree/a1b2d95f967df6b4f89a6b9ed67263711d59593c), from a fresh `git clone git@github.com:jmorganca/ollama.git`: ```none > sudo docker buildx build . --file Dockerfile => => transferring context: 6.93...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/652/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6686
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6686/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6686/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6686/events
html_url: https://github.com/ollama/ollama/issues/6686
id: 2,511,576,566
node_id: I_kwDOJ0Z1Ps6Vs532
number: 6,686
title: Model shows wrong date.
user: { "login": "ghaisasadvait", "id": 11556546, "node_id": "MDQ6VXNlcjExNTU2NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11556546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghaisasadvait", "html_url": "https://github.com/ghaisasadvait", "followers_url": "https://api.githu...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-09-07T10:05:39
updated_at: 2024-09-07T22:10:39
closed_at: 2024-09-07T22:10:39
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? ![image](https://github.com/user-attachments/assets/b1debc6f-31e7-4ceb-83bb-b0d32a97d274) I also tried using Open WEb UI and turned on the Duck DUck GO search functionality but somehow the model still returns the wrong date: ![image](https://github.com/user-attachments/assets/cb4b4d6b-37bc...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6686/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/4628
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4628/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4628/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4628/events
html_url: https://github.com/ollama/ollama/issues/4628
id: 2,316,694,931
node_id: I_kwDOJ0Z1Ps6KFfWT
number: 4,628
title: aya model : error when using the generate endpoint
user: { "login": "saurabhkumar", "id": 3962573, "node_id": "MDQ6VXNlcjM5NjI1NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/3962573?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saurabhkumar", "html_url": "https://github.com/saurabhkumar", "followers_url": "https://api.github.com...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-05-25T04:50:03
updated_at: 2024-05-27T06:21:57
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I am running aya model locally. When i just start the model with `ollama run aya` and interact in the terminal, it works fine. But when I try using it via POSTMAN on Windows 10 at (http://127.0.0.1:11434/api/generate) with the following data: ``` { "model": "aya", "prompt": "Are there ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4628/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/6309
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6309/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6309/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6309/events
html_url: https://github.com/ollama/ollama/pull/6309
id: 2,459,531,445
node_id: PR_kwDOJ0Z1Ps54B6ha
number: 6,309
title: Added a go example for mistral's native function calling
user: { "login": "Binozo", "id": 70137898, "node_id": "MDQ6VXNlcjcwMTM3ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/70137898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Binozo", "html_url": "https://github.com/Binozo", "followers_url": "https://api.github.com/users/Binozo/fo...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-08-11T10:34:22
updated_at: 2024-11-21T21:47:40
closed_at: 2024-11-21T21:47:40
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6309", "html_url": "https://github.com/ollama/ollama/pull/6309", "diff_url": "https://github.com/ollama/ollama/pull/6309.diff", "patch_url": "https://github.com/ollama/ollama/pull/6309.patch", "merged_at": null }
body: Hello there 🙋 I was playing around a bit with the awesome native function calling feature from the mistral model. I saw that an example for that was missing so I took a little inspiration from #5284 and built it myself ✌️
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6309/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/4967
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4967/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4967/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4967/events
html_url: https://github.com/ollama/ollama/issues/4967
id: 2,344,703,306
node_id: I_kwDOJ0Z1Ps6LwVVK
number: 4,967
title: API Silently Truncates Conversation
user: { "login": "flu0r1ne", "id": 76689481, "node_id": "MDQ6VXNlcjc2Njg5NDgx", "avatar_url": "https://avatars.githubusercontent.com/u/76689481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flu0r1ne", "html_url": "https://github.com/flu0r1ne", "followers_url": "https://api.github.com/users/flu...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 5
created_at: 2024-06-10T19:45:35
updated_at: 2024-09-19T12:32:44
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? ### Problem Description The chat API currently truncates conversations without warning when the context limit is exceeded. This behavior can cause significant problems in downstream applications. For instance, if a document is provided for summarization, silently removing part of the document...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4967/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4967/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8206
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8206/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8206/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8206/events
html_url: https://github.com/ollama/ollama/issues/8206
id: 2,754,349,650
node_id: I_kwDOJ0Z1Ps6kLApS
number: 8,206
title: MultiGPU ROCm
user: { "login": "Schwenn2002", "id": 56083040, "node_id": "MDQ6VXNlcjU2MDgzMDQw", "avatar_url": "https://avatars.githubusercontent.com/u/56083040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Schwenn2002", "html_url": "https://github.com/Schwenn2002", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
state: open
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 10
created_at: 2024-12-21T20:25:20
updated_at: 2025-01-07T21:34:28
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? System: CPU AMD Ryzen 9950X RAM 128 GB DDR5 GPU0 AMD Radeon PRO W7900 GPU1 AMD Radeon RX7900XTX ROCM: 6.3.1 Ubuntu 24.04 LTS (currently patched) ERROR: I start a large LLM (e.g. Llama-3.3-70B-Instruct-Q4_K_L) with open webui and a context window of 32678 and get the following error in ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8206/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8504
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8504/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8504/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8504/events
html_url: https://github.com/ollama/ollama/pull/8504
id: 2,799,630,108
node_id: PR_kwDOJ0Z1Ps6IX_Po
number: 8,504
title: add doc to describe setup of vm on proxmox for multiple P40 gpus
user: { "login": "fred-vaneijk", "id": 178751132, "node_id": "U_kgDOCqeGnA", "avatar_url": "https://avatars.githubusercontent.com/u/178751132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fred-vaneijk", "html_url": "https://github.com/fred-vaneijk", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2025-01-20T15:44:15
updated_at: 2025-01-20T15:44:15
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8504", "html_url": "https://github.com/ollama/ollama/pull/8504", "diff_url": "https://github.com/ollama/ollama/pull/8504.diff", "patch_url": "https://github.com/ollama/ollama/pull/8504.patch", "merged_at": null }
body: Fixes an issue where 1 of the compute units would go to 100% CPU use and the system would appear locked up
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8504/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/7728
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7728/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7728/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7728/events
html_url: https://github.com/ollama/ollama/pull/7728
id: 2,670,021,679
node_id: PR_kwDOJ0Z1Ps6CTiWc
number: 7,728
title: Improve crash reporting
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-11-18T21:43:38
updated_at: 2024-11-20T00:27:00
closed_at: 2024-11-20T00:26:58
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7728", "html_url": "https://github.com/ollama/ollama/pull/7728", "diff_url": "https://github.com/ollama/ollama/pull/7728.diff", "patch_url": "https://github.com/ollama/ollama/pull/7728.patch", "merged_at": "2024-11-20T00:26:58" }
body: Many model crashes are masked behind "An existing connection was forcibly closed by the remote host" This captures that common error message and wires in any detected errors from the log. This also adds the deepseek context shift error to the known errors we capture.
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7728/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/5798
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5798/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5798/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5798/events
html_url: https://github.com/ollama/ollama/issues/5798
id: 2,419,572,294
node_id: I_kwDOJ0Z1Ps6QN75G
number: 5,798
title: ollama save model to file and ollama load model from file
user: { "login": "cruzanstx", "id": 2927083, "node_id": "MDQ6VXNlcjI5MjcwODM=", "avatar_url": "https://avatars.githubusercontent.com/u/2927083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cruzanstx", "html_url": "https://github.com/cruzanstx", "followers_url": "https://api.github.com/users/cr...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-19T18:20:09
updated_at: 2024-07-26T21:14:40
closed_at: 2024-07-26T21:14:40
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: In docker you can save images and load them from tar.gz files. example: ```bash docker pull ollama/ollama:0.2.5 docker save ollama/ollama:0.2.5 | gzip > ollama_0.2.5.tar.gz docker load --input ollama_0.2.5.targ.gz ``` Could we have a similar loop of managing models example: ```bash ollama pull llama3:lates...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5798/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5798/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/8022
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8022/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8022/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8022/events
html_url: https://github.com/ollama/ollama/issues/8022
id: 2,729,116,225
node_id: I_kwDOJ0Z1Ps6iqwJB
number: 8,022
title: Error reported when importing a multimodal large model of type hugginface (llava-mistral-7b)
user: { "login": "lyp-liu", "id": 71242087, "node_id": "MDQ6VXNlcjcxMjQyMDg3", "avatar_url": "https://avatars.githubusercontent.com/u/71242087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lyp-liu", "html_url": "https://github.com/lyp-liu", "followers_url": "https://api.github.com/users/lyp-li...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-12-10T06:05:53
updated_at: 2024-12-29T20:08:57
closed_at: 2024-12-29T20:08:57
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? ollama create mytestsafe -f ./mytest.Modelfile transferring model data 100% converting model Error: unsupported architecture At present, it seems impossible to convert the huggingface type Llava-mistral convert to a gguf type model through Llama.cpp. i want to know that the type of llava...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8022/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/3497
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3497/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3497/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3497/events
html_url: https://github.com/ollama/ollama/issues/3497
id: 2,226,625,418
node_id: I_kwDOJ0Z1Ps6Et5uK
number: 3,497
title: Support AMD Firepro w7100 - gfx802 / gfx805
user: { "login": "ninp0", "id": 1008583, "node_id": "MDQ6VXNlcjEwMDg1ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/1008583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ninp0", "html_url": "https://github.com/ninp0", "followers_url": "https://api.github.com/users/ninp0/follower...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-04-04T22:28:50
updated_at: 2024-04-12T23:30:38
closed_at: 2024-04-12T23:30:38
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What are you trying to do? Start ollama in a manner that will leverage an AMD Firepro w7100 gpu ### How should we solve this? Current output when starting ollama via: ``` $ sudo systemctl status ollama ● ollama.service - Ollama Service Loaded: loaded (/etc/systemd/system/ollama.service; enabl...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3497/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/7358
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7358/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7358/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7358/events
html_url: https://github.com/ollama/ollama/pull/7358
id: 2,614,544,504
node_id: PR_kwDOJ0Z1Ps5_7SYk
number: 7,358
title: Fix unicode output on windows with redirect to file
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-10-25T16:15:37
updated_at: 2024-10-25T20:43:19
closed_at: 2024-10-25T20:43:16
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7358", "html_url": "https://github.com/ollama/ollama/pull/7358", "diff_url": "https://github.com/ollama/ollama/pull/7358.diff", "patch_url": "https://github.com/ollama/ollama/pull/7358.patch", "merged_at": "2024-10-25T20:43:16" }
body: If we're not writing out to a terminal, avoid setting the console mode on windows, which corrupts the output file. Fixes #3826
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7358/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/4167
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4167/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4167/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4167/events
html_url: https://github.com/ollama/ollama/issues/4167
id: 2,279,497,276
node_id: I_kwDOJ0Z1Ps6H3l48
number: 4,167
title: abnormal reply of Llama-3-ChatQA-1.5-8B-GGUF
user: { "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 7
created_at: 2024-05-05T12:05:37
updated_at: 2024-05-11T08:46:53
closed_at: 2024-05-11T08:46:53
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? import Llama-3-ChatQA-1.5-8B-GGUF to ollama, but reply is abnormal. I have tried many gguf version of this model from different username on HF. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1..32
closed_by: { "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4167/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4167/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/1886
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1886/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1886/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1886/events
html_url: https://github.com/ollama/ollama/pull/1886
id: 2,073,708,262
node_id: PR_kwDOJ0Z1Ps5jpjoL
number: 1,886
title: feat: load ~/.ollama/.env using godotenv
user: { "login": "sublimator", "id": 525211, "node_id": "MDQ6VXNlcjUyNTIxMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sublimator", "html_url": "https://github.com/sublimator", "followers_url": "https://api.github.com/users/s...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 2024-01-10T06:46:40
updated_at: 2024-01-22T23:51:54
closed_at: 2024-01-22T21:52:24
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/1886", "html_url": "https://github.com/ollama/ollama/pull/1886", "diff_url": "https://github.com/ollama/ollama/pull/1886.diff", "patch_url": "https://github.com/ollama/ollama/pull/1886.patch", "merged_at": null }
body: - More generic than https://github.com/jmorganca/ollama/pull/1846 - Slots in simply with the existing environment variable configuration - Can be used to set environment variables on MacOS for e.g. OLLAMA_ORIGINS without needing to fiddle around with plist/SIP
closed_by: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1886/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1886/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/7274
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7274/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7274/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7274/events
html_url: https://github.com/ollama/ollama/pull/7274
id: 2,599,914,986
node_id: PR_kwDOJ0Z1Ps5_NFdD
number: 7,274
title: Add Environment Variable For Row Split and No KV Offload
user: { "login": "heislera763", "id": 126129661, "node_id": "U_kgDOB4SV_Q", "avatar_url": "https://avatars.githubusercontent.com/u/126129661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/heislera763", "html_url": "https://github.com/heislera763", "followers_url": "https://api.github.com/users/...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-10-20T03:38:41
updated_at: 2024-11-26T18:29:07
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/7274", "html_url": "https://github.com/ollama/ollama/pull/7274", "diff_url": "https://github.com/ollama/ollama/pull/7274.diff", "patch_url": "https://github.com/ollama/ollama/pull/7274.patch", "merged_at": null }
body: This is https://github.com/ollama/ollama/pull/5527 (add "--split-mode row" parameter) but rebased and cleaned up. I've also added the "--no-kv-offload" parameter, which was discussed as a workaround to all KV cache being placed on the first GPU when using split rows. These parameters are activated with the new environm...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7274/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7274/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/8522
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8522/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8522/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8522/events
html_url: https://github.com/ollama/ollama/issues/8522
id: 2,802,904,225
node_id: I_kwDOJ0Z1Ps6nEOyh
number: 8,522
title: Ollama throws 'does not support generate' error on running embedding models on windows
user: { "login": "tanmaysharma2001", "id": 78191188, "node_id": "MDQ6VXNlcjc4MTkxMTg4", "avatar_url": "https://avatars.githubusercontent.com/u/78191188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmaysharma2001", "html_url": "https://github.com/tanmaysharma2001", "followers_url": "https://...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2025-01-21T22:03:27
updated_at: 2025-01-21T22:38:30
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Hi, as the title says, when using ollama cli, and trying to running any embedding models present on the website (in this case nomic-embed-text), they throw an error which is: ``` Error: "nomic-embed-text" does not support generate ``` to reproduce: 1. simply install ollama on windows through th...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8522/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/1568
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1568/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1568/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1568/events
html_url: https://github.com/ollama/ollama/issues/1568
id: 2,044,977,596
node_id: I_kwDOJ0Z1Ps554-G8
number: 1,568
title: ollama in Powershell using WSL2
user: { "login": "BananaAcid", "id": 1894723, "node_id": "MDQ6VXNlcjE4OTQ3MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1894723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BananaAcid", "html_url": "https://github.com/BananaAcid", "followers_url": "https://api.github.com/users...
labels: [ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2023-12-16T22:16:53
updated_at: 2023-12-19T17:42:00
closed_at: 2023-12-19T17:41:59
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Just an info for others trying to trigger ollama from powershell: Either use `wsl ollama run llama2` (prefix with wsl) - or - enable a `ollama` command in powershell: 1. `notepad $PROFILE` 2. add as last line: `function ollama() { $cmd = @("ollama") + $args ; &wsl.exe $cmd }` Note: setting `OLLAMA_M...
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1568/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1568/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/4660
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4660/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4660/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4660/events
html_url: https://github.com/ollama/ollama/issues/4660
id: 2,318,779,533
node_id: I_kwDOJ0Z1Ps6KNcSN
number: 4,660
title: Changing seed does not change response
user: { "login": "ccreutzi", "id": 89011131, "node_id": "MDQ6VXNlcjg5MDExMTMx", "avatar_url": "https://avatars.githubusercontent.com/u/89011131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccreutzi", "html_url": "https://github.com/ccreutzi", "followers_url": "https://api.github.com/users/ccr...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
milestone: null
comments: 1
created_at: 2024-05-27T10:07:48
updated_at: 2024-06-11T21:24:42
closed_at: 2024-06-11T21:24:42
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? According to [the documentation](https://github.com/ollama/ollama/blob/main/docs/api.md#request-reproducible-outputs), getting reproducible outputs requires setting the seed and setting temperature to 0. As far as I can tell, the part of these that works is setting the temperature to 0. But c...
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4660/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4660/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/4820
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4820/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4820/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4820/events
html_url: https://github.com/ollama/ollama/issues/4820
id: 2,334,271,800
node_id: I_kwDOJ0Z1Ps6LIik4
number: 4,820
title: Issue with Llama3 Model on Multiple AMD GPU
user: { "login": "rasodu", "id": 13222196, "node_id": "MDQ6VXNlcjEzMjIyMTk2", "avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rasodu", "html_url": "https://github.com/rasodu", "followers_url": "https://api.github.com/users/rasodu/fo...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 7
created_at: 2024-06-04T19:58:35
updated_at: 2024-07-28T18:31:30
closed_at: 2024-06-23T22:21:52
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? I'm experiencing an issue with running the llama3 model (specifically, version 70b-instruct-q6) on multiple AMD GPUs. While it works correctly on ollama/ollama:0.1.34-rocm, I've encountered a problem where it produces junk output when using ollama/ollama:0.1.35-rocm and ollama/ollama:0.1.41-rocm...
closed_by: { "login": "rasodu", "id": 13222196, "node_id": "MDQ6VXNlcjEzMjIyMTk2", "avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rasodu", "html_url": "https://github.com/rasodu", "followers_url": "https://api.github.com/users/rasodu/fo...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4820/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/4499
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4499/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4499/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4499/events
html_url: https://github.com/ollama/ollama/issues/4499
id: 2,302,621,905
node_id: I_kwDOJ0Z1Ps6JPzjR
number: 4,499
title: paligemma
user: { "login": "wwjCMP", "id": 32979859, "node_id": "MDQ6VXNlcjMyOTc5ODU5", "avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwjCMP", "html_url": "https://github.com/wwjCMP", "followers_url": "https://api.github.com/users/wwjCMP/fo...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 8
created_at: 2024-05-17T12:27:40
updated_at: 2024-12-19T10:19:58
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: https://huggingface.co/google/paligemma-3b-pt-224
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4499/reactions", "total_count": 48, "+1": 45, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4499/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
url: https://api.github.com/repos/ollama/ollama/issues/1171
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/1171/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/1171/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/1171/events
html_url: https://github.com/ollama/ollama/issues/1171
id: 1,998,760,855
node_id: I_kwDOJ0Z1Ps53IquX
number: 1,171
title: Update installed models
user: { "login": "Bodo-von-Greif", "id": 6941672, "node_id": "MDQ6VXNlcjY5NDE2NzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6941672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bodo-von-Greif", "html_url": "https://github.com/Bodo-von-Greif", "followers_url": "https://api.gith...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2023-11-17T10:19:08
updated_at: 2023-11-17T19:11:01
closed_at: 2023-11-17T19:11:01
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Hi all, i wrote a small bash script to update the installed models. Maybe its useful for some of you: ` #/bin/bash #Based on: ollama run codellama 'show me how to send the first colum named "name" of the list which is produced with ollama list with xargs to "ollama pull"' echo "Actual models" ollama list ...
closed_by: { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/1171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/1171/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
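
The (truncated) script in the body above loops `ollama list` into `ollama pull`; a minimal Python sketch of the same loop, assuming the `ollama` CLI is on PATH and that `ollama list` prints a header row followed by one model per line with the name in the first column:

```python
# Sketch: re-pull every locally installed model, mirroring the bash
# script quoted in the issue body above. Assumes `ollama` is on PATH.
import subprocess

out = subprocess.run(["ollama", "list"], capture_output=True, text=True, check=True)
# Skip the header line; take the first whitespace-separated field (the name).
names = [line.split()[0] for line in out.stdout.splitlines()[1:] if line.strip()]

for name in names:
    print(f"updating {name}")
    subprocess.run(["ollama", "pull", name], check=True)
```
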
url: https://api.github.com/repos/ollama/ollama/issues/8101
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8101/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8101/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8101/events
html_url: https://github.com/ollama/ollama/pull/8101
id: 2,740,135,352
node_id: PR_kwDOJ0Z1Ps6FPOGv
number: 8,101
title: llama: vendor commit ba1cb19c
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-12-14T20:40:05
updated_at: 2024-12-14T22:55:54
closed_at: 2024-12-14T22:55:51
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8101", "html_url": "https://github.com/ollama/ollama/pull/8101", "diff_url": "https://github.com/ollama/ollama/pull/8101.diff", "patch_url": "https://github.com/ollama/ollama/pull/8101.patch", "merged_at": "2024-12-14T22:55:51" }
body: null
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8101/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
url: https://api.github.com/repos/ollama/ollama/issues/8478
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8478/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8478/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8478/events
html_url: https://github.com/ollama/ollama/issues/8478
id: 2,796,624,096
node_id: I_kwDOJ0Z1Ps6msRjg
number: 8,478
title: Display Minimum System Requirements
user: { "login": "Siddhesh-Agarwal", "id": 68057995, "node_id": "MDQ6VXNlcjY4MDU3OTk1", "avatar_url": "https://avatars.githubusercontent.com/u/68057995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Siddhesh-Agarwal", "html_url": "https://github.com/Siddhesh-Agarwal", "followers_url": "https://...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 6
created_at: 2025-01-18T03:38:17
updated_at: 2025-01-20T09:51:21
closed_at: 2025-01-20T09:51:21
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: It would be great to have the minimum system requirements like disk space, and RAM for each model. This could be done both on the website and in the CLI like a small notice. The minimum RAM required by the model is: minimum_RAM = num_of_parameters * bytes_per_parameter I see that most models use the `Q4_K_M` or ...
closed_by: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8478/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8478/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
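
The sizing rule quoted in that body, `minimum_RAM = num_of_parameters * bytes_per_parameter`, is easy to work through; a sketch where the bytes-per-parameter figures are assumptions (roughly 0.56 bytes per weight for Q4_K_M-style 4-bit quantization, 2 bytes for fp16) and KV cache and runtime overhead are deliberately not counted:

```python
# Sketch of the sizing rule quoted in the issue above:
#   minimum_RAM = num_of_parameters * bytes_per_parameter
# The bytes-per-parameter values are assumptions: Q4_K_M-style quants
# average roughly 4.5 bits (~0.56 bytes) per weight; fp16 is 2 bytes.
def min_ram_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1e9

print(f"8B @ Q4_K_M ~ {min_ram_gb(8e9, 0.56):.1f} GB")  # weights only, ~4.5 GB
print(f"8B @ fp16   ~ {min_ram_gb(8e9, 2.0):.1f} GB")   # weights only, ~16 GB
# Actual usage is higher: KV cache and runtime overhead are excluded here.
```
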
url: https://api.github.com/repos/ollama/ollama/issues/4491
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4491/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4491/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4491/events
html_url: https://github.com/ollama/ollama/issues/4491
id: 2,301,965,398
node_id: I_kwDOJ0Z1Ps6JNTRW
number: 4,491
title: Pulling using API - Session timeout (5 minutes)
user: { "login": "pelletier197", "id": 24528884, "node_id": "MDQ6VXNlcjI0NTI4ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/24528884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pelletier197", "html_url": "https://github.com/pelletier197", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-05-17T06:55:50
updated_at: 2024-07-25T22:40:57
closed_at: 2024-07-25T22:40:57
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? When using the REST API to pull models, the `PULL` request seems to timeout for large models (llama3). This is linked to [this issue](https://github.com/ollama/ollama-js/issues/72). Is there any way to override the default session timeout when pulling models ? I noticed that the `Generat...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4491/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4491/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
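
Ollama's `/api/pull` endpoint streams progress as JSON lines, so a client that consumes the stream keeps data flowing instead of waiting on one long idle response; a sketch using `requests`, where the timeout values are assumptions and the request shape follows the documented Ollama API:

```python
# Sketch: pull a model via Ollama's streaming /api/pull endpoint.
# Progress arrives as one JSON object per line, so the connection is
# rarely idle; the (connect, read) timeout values here are assumptions.
import json
import requests

resp = requests.post(
    "http://127.0.0.1:11434/api/pull",
    json={"model": "llama3"},
    stream=True,
    timeout=(5, 60),
)
for line in resp.iter_lines():
    if line:
        status = json.loads(line)
        print(status.get("status"), status.get("completed"), status.get("total"))
```
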
url: https://api.github.com/repos/ollama/ollama/issues/7208
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7208/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7208/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7208/events
html_url: https://github.com/ollama/ollama/issues/7208
id: 2,587,622,492
node_id: I_kwDOJ0Z1Ps6aO_xc
number: 7,208
title: insufficient VRAM to load any model layers
user: { "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-10-15T04:45:47
updated_at: 2024-10-16T04:37:13
closed_at: 2024-10-16T04:37:13
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? "I want to know why the model prompts 'GPU has too little memory to allocate any layers.' I have four GPU cards, with available memory of 23.3 GiB, 23.3 GiB, 16.8 GiB, and 9.7 GiB respectively." ``` 10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.994+08:00 level=DEBUG sou...
closed_by: { "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7208/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
https://api.github.com/repos/ollama/ollama/issues/7688
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7688/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7688/comments
https://api.github.com/repos/ollama/ollama/issues/7688/events
https://github.com/ollama/ollama/issues/7688
2,662,591,083
I_kwDOJ0Z1Ps6es-pr
7,688
Resume model downloading after internet disconnect
{ "login": "mosquet", "id": 136934740, "node_id": "U_kgDOCCl1VA", "avatar_url": "https://avatars.githubusercontent.com/u/136934740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mosquet", "html_url": "https://github.com/mosquet", "followers_url": "https://api.github.com/users/mosquet/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw...
closed
false
null
[]
null
2
2024-11-15T17:01:02
2024-11-17T12:02:03
2024-11-17T12:02:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## Description When the internet connection is lost during a model download, the process starts again from 0% instead of resuming from the interrupted point. ## Current Behavior - Download fails on connection loss - When the connection is restored, the download restarts from 0% - All previous download p...
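The CLI normally picks up partially downloaded blobs when `ollama pull` is re-run; for an API client, one stopgap is a retry wrapper around the streamed pull. A sketch under the same default-endpoint assumption as above (whether partial blobs are reused is version-dependent and worth verifying):

```python
# Sketch: retry a streamed pull after transient network failures,
# letting the server reuse whatever blob data it already has on disk.
import json
import time
import requests

def pull_with_retries(model: str, attempts: int = 5) -> None:
    for attempt in range(1, attempts + 1):
        try:
            with requests.post(
                "http://localhost:11434/api/pull",
                json={"name": model, "stream": True},
                stream=True, timeout=(10, 120),
            ) as resp:
                resp.raise_for_status()
                for line in resp.iter_lines():
                    if line:
                        print(json.loads(line).get("status"))
            return  # stream ended cleanly
        except requests.RequestException as err:
            print(f"attempt {attempt} failed: {err}")
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"pull of {model} failed after {attempts} attempts")

pull_with_retries("llama3")
```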
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7688/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7776
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7776/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7776/comments
https://api.github.com/repos/ollama/ollama/issues/7776/events
https://github.com/ollama/ollama/issues/7776
2,678,294,755
I_kwDOJ0Z1Ps6fo4jj
7,776
streaming for tools support
{ "login": "ZHOUxiaohe1987", "id": 59469405, "node_id": "MDQ6VXNlcjU5NDY5NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/59469405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZHOUxiaohe1987", "html_url": "https://github.com/ZHOUxiaohe1987", "followers_url": "https://api.gi...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-11-21T07:01:10
2024-11-21T09:55:09
2024-11-21T09:55:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I tried to use tools with streaming via LangChain, but it does not work.
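At the time of this report, tool calls were only returned on non-streamed chat requests. A minimal sketch against the documented `/api/chat` tool format, bypassing LangChain; `get_weather` is a made-up function used purely for illustration:

```python
# Sketch: request a tool call with streaming disabled. The tool schema
# follows the documented /api/chat format; "get_weather" is hypothetical.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post("http://localhost:11434/api/chat", json={
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "stream": False,  # tool calls were not emitted on streamed responses
})
resp.raise_for_status()
print(resp.json()["message"].get("tool_calls"))
```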
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7776/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6622
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6622/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6622/comments
https://api.github.com/repos/ollama/ollama/issues/6622/events
https://github.com/ollama/ollama/issues/6622
2,504,121,981
I_kwDOJ0Z1Ps6VQd59
6,622
[Bug] open-webui integration error when the ui docker container listens on 11434
{ "login": "zydmtaichi", "id": 93961601, "node_id": "U_kgDOBZm9gQ", "avatar_url": "https://avatars.githubusercontent.com/u/93961601?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zydmtaichi", "html_url": "https://github.com/zydmtaichi", "followers_url": "https://api.github.com/users/zydmt...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-04T01:47:32
2024-09-05T00:34:41
2024-09-05T00:34:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I start open-webui via the command below first, and then the ollama service fails to start with `ollama serve`; the output says the port is already in use. I get the same error if I change the launch order (first ollama, then the open-webui container). Please check and improve the integration of ollama and open-...
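Both services fighting over 11434 is consistent with the error; a quick probe shows which one got there first, and the OLLAMA_HOST environment variable can move the server elsewhere. A small stdlib sketch:

```python
# Sketch: check whether something already listens on the default port
# before launching `ollama serve` or the open-webui container.
import socket

def port_in_use(host: str = "127.0.0.1", port: int = 11434) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0  # 0 means a listener answered

if port_in_use():
    print("11434 is taken; free it or set OLLAMA_HOST=127.0.0.1:11435")
```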
{ "login": "zydmtaichi", "id": 93961601, "node_id": "U_kgDOBZm9gQ", "avatar_url": "https://avatars.githubusercontent.com/u/93961601?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zydmtaichi", "html_url": "https://github.com/zydmtaichi", "followers_url": "https://api.github.com/users/zydmt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6622/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2165
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2165/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2165/comments
https://api.github.com/repos/ollama/ollama/issues/2165/events
https://github.com/ollama/ollama/issues/2165
2,097,159,829
I_kwDOJ0Z1Ps59AB6V
2,165
ROCm v5 crash - free(): invalid pointer
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
26
2024-01-23T23:39:26
2024-03-12T18:26:23
2024-03-12T18:26:23
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
``` loading library /tmp/ollama800487147/rocm_v5/libext_server.so 2024/01/23 19:26:51 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama800487147/rocm_v5/libext_server.so 2024/01/23 19:26:51 dyn_ext_server.go:145: INFO Initializing llama server free(): invalid pointer Aborted (core dumped) ``` ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2165/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4756
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4756/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4756/comments
https://api.github.com/repos/ollama/ollama/issues/4756/events
https://github.com/ollama/ollama/pull/4756
2,328,405,482
PR_kwDOJ0Z1Ps5xKEpa
4,756
refactor convert
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
2
2024-05-31T18:46:52
2024-08-01T21:16:33
2024-08-01T21:16:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4756", "html_url": "https://github.com/ollama/ollama/pull/4756", "diff_url": "https://github.com/ollama/ollama/pull/4756.diff", "patch_url": "https://github.com/ollama/ollama/pull/4756.patch", "merged_at": "2024-08-01T21:16:31" }
the goal is to build a single, well defined interface to convert a model as well as interfaces for input formats (e.g. safetensors, pytorch), model architectures (e.g. llama, gemma), and model tokenizers this change makes some significant changes to the conversion process: 1. implement a single function call for ...
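The shape of such an interface can be sketched abstractly; this is an illustration of the described design (one conversion entry point over pluggable formats and architectures), not the PR's actual Go code:

```python
# Illustrative sketch of the described design, not the PR's actual code:
# one convert() entry point over pluggable input formats & architectures.
from abc import ABC, abstractmethod

class ModelFormat(ABC):
    """An input format such as safetensors or pytorch."""
    @abstractmethod
    def load_tensors(self, path: str) -> dict: ...

class Architecture(ABC):
    """A model architecture such as llama or gemma."""
    @abstractmethod
    def rename_tensor(self, name: str) -> str: ...

def convert(path: str, fmt: ModelFormat, arch: Architecture) -> dict:
    """Single, well-defined conversion call: load, rename, return tensors."""
    return {arch.rename_tensor(k): v for k, v in fmt.load_tensors(path).items()}
```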
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4756/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3335
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3335/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3335/comments
https://api.github.com/repos/ollama/ollama/issues/3335/events
https://github.com/ollama/ollama/issues/3335
2,205,087,423
I_kwDOJ0Z1Ps6Dbva_
3,335
Error: pull model manifest: ollama.ai certificate is expired
{ "login": "hheydaroff", "id": 29415152, "node_id": "MDQ6VXNlcjI5NDE1MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/29415152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hheydaroff", "html_url": "https://github.com/hheydaroff", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-03-25T07:32:51
2024-03-25T11:00:16
2024-03-25T08:31:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I try to pull a model from the registry, it gives me the following error: `pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2-uncensored/manifests/latest": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: "ollama.ai" certificate is expired`...
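A quick way to tell a genuinely expired server certificate from a stale local clock or an intercepting proxy is to dump the certificate the registry actually presents. A stdlib-only sketch (validation is deliberately disabled because the point is to inspect, not trust):

```python
# Sketch: fetch and print the TLS certificate presented by the registry,
# so its notAfter date can be inspected (e.g. with `openssl x509 -text`).
import socket
import ssl

host = "registry.ollama.ai"
ctx = ssl.create_default_context()
ctx.check_hostname = False       # inspection only:
ctx.verify_mode = ssl.CERT_NONE  # skip validation for this probe
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)
print(ssl.DER_cert_to_PEM_cert(der))
```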
{ "login": "hheydaroff", "id": 29415152, "node_id": "MDQ6VXNlcjI5NDE1MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/29415152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hheydaroff", "html_url": "https://github.com/hheydaroff", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3335/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3335/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/2525
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2525/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2525/comments
https://api.github.com/repos/ollama/ollama/issues/2525/events
https://github.com/ollama/ollama/issues/2525
2,137,485,748
I_kwDOJ0Z1Ps5_Z3G0
2,525
ollama version 1.25: problem with emojis
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-02-15T21:32:20
2024-02-21T15:51:22
2024-02-21T15:51:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Apparently adding "my friend" to the end of a prompt, causes mistral to return emojies that end up never stopping. ``` ollama run mistral >>> hello my friend Hello! How can I help you today? Is there a specific question or topic you'd like to discuss? I'm here to answer any questions you may have to the best of ...
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2525/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2525/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7724
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7724/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7724/comments
https://api.github.com/repos/ollama/ollama/issues/7724/events
https://github.com/ollama/ollama/pull/7724
2,668,186,705
PR_kwDOJ0Z1Ps6CO2-e
7,724
Update README.md
{ "login": "zeitlings", "id": 25689591, "node_id": "MDQ6VXNlcjI1Njg5NTkx", "avatar_url": "https://avatars.githubusercontent.com/u/25689591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeitlings", "html_url": "https://github.com/zeitlings", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-11-18T11:19:04
2024-11-19T03:33:23
2024-11-19T03:33:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7724", "html_url": "https://github.com/ollama/ollama/pull/7724", "diff_url": "https://github.com/ollama/ollama/pull/7724.diff", "patch_url": "https://github.com/ollama/ollama/pull/7724.patch", "merged_at": "2024-11-19T03:33:23" }
Add [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) to Extensions & Plugins. - Manage local models - Perform local inference
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7724/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3452
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3452/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3452/comments
https://api.github.com/repos/ollama/ollama/issues/3452/events
https://github.com/ollama/ollama/issues/3452
2,220,042,787
I_kwDOJ0Z1Ps6EUyoj
3,452
Pulling manifest fails with error "read: connection reset by peer"
{ "login": "chopeen", "id": 183731, "node_id": "MDQ6VXNlcjE4MzczMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/183731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chopeen", "html_url": "https://github.com/chopeen", "followers_url": "https://api.github.com/users/chopeen/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-04-02T09:43:19
2024-07-01T19:54:50
2024-04-09T10:54:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After upgrading from `0.1.27` to `0.1.30`, I can no longer pull models when connected to a corporate network: ``` $ ollama pull codellama pulling manifest Error: pull model manifest: Get "https://ollama.com/token?nonce=VV5GsyYSIqo_4gO3ILCHrA&scope=repository%!A(MISSING)library%!F(MISS...
{ "login": "chopeen", "id": 183731, "node_id": "MDQ6VXNlcjE4MzczMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/183731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chopeen", "html_url": "https://github.com/chopeen", "followers_url": "https://api.github.com/users/chopeen/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3452/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3946
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3946/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3946/comments
https://api.github.com/repos/ollama/ollama/issues/3946/events
https://github.com/ollama/ollama/issues/3946
2,265,926,090
I_kwDOJ0Z1Ps6HD0nK
3,946
An existing connection was forcibly closed by the remote host.
{ "login": "icreatewithout", "id": 34464412, "node_id": "MDQ6VXNlcjM0NDY0NDEy", "avatar_url": "https://avatars.githubusercontent.com/u/34464412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/icreatewithout", "html_url": "https://github.com/icreatewithout", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-04-26T14:36:09
2024-05-01T21:07:51
2024-05-01T21:07:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? pulling manifest Error: pull model manifest: Get "https://ollama.com/token?nonce=2AycvwJ7brPYMZDkoil6gg&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1714142386": read tcp 192.168.3.57:4603->34.120.132.20:443: wsarecv: An existing connection was for...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3946/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/171
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/171/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/171/comments
https://api.github.com/repos/ollama/ollama/issues/171/events
https://github.com/ollama/ollama/pull/171
1,816,538,312
PR_kwDOJ0Z1Ps5WIw9f
171
fix extended tag names
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-22T02:08:46
2023-07-22T03:27:25
2023-07-22T03:27:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/171", "html_url": "https://github.com/ollama/ollama/pull/171", "diff_url": "https://github.com/ollama/ollama/pull/171.diff", "patch_url": "https://github.com/ollama/ollama/pull/171.patch", "merged_at": "2023-07-22T03:27:25" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/171/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2841
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2841/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2841/comments
https://api.github.com/repos/ollama/ollama/issues/2841/events
https://github.com/ollama/ollama/issues/2841
2,161,897,987
I_kwDOJ0Z1Ps6A2_ID
2,841
Add/Remove Model Repos + Self Host Your Own Model Repo + Pull Models From Other Repos
{ "login": "trymeouteh", "id": 31172274, "node_id": "MDQ6VXNlcjMxMTcyMjc0", "avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trymeouteh", "html_url": "https://github.com/trymeouteh", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
3
2024-02-29T18:52:03
2024-04-25T08:18:36
2024-03-01T01:09:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
1. The ability to manage model repos in Ollama, similar to how F-Droid (an Android app store) lets you add and remove repos so you can get apps from other sources. 2. Self-host your own repo. Allow anyone to self-host their own repo. - Whether this means simply setting up a git repo (Github, Gitlab, Git...
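Model names can already carry a registry hostname, and the `registry.ollama.ai/v2/...` manifest layout shows up in error strings elsewhere in this dump. A sketch probing a self-hosted mirror of that layout, where `models.example.com` is a placeholder hostname:

```python
# Sketch: fetch a model manifest from a self-hosted registry that mirrors
# the registry.ollama.ai /v2 layout. "models.example.com" is a placeholder.
import requests

registry = "models.example.com"
name, tag = "library/llama3", "latest"
url = f"https://{registry}/v2/{name}/manifests/{tag}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
for layer in resp.json().get("layers", []):
    print(layer.get("mediaType"), layer.get("digest"), layer.get("size"))
```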
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2841/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2841/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8258
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8258/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8258/comments
https://api.github.com/repos/ollama/ollama/issues/8258/events
https://github.com/ollama/ollama/issues/8258
2,761,061,859
I_kwDOJ0Z1Ps6kknXj
8,258
Error: an error was encountered while running the model: unexpected EOF
{ "login": "Hyccccccc", "id": 60806532, "node_id": "MDQ6VXNlcjYwODA2NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/60806532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hyccccccc", "html_url": "https://github.com/Hyccccccc", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
18
2024-12-27T16:22:40
2024-12-31T03:26:28
2024-12-31T03:26:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I use Ollama to pull llama3.1:8b. When I run llama3.1:8b, the following error occurs: > ollama run llama3.1 > \>\>\> hello > Hello! HowError: an error was encountered while running the model: unexpected EOF (I use the Ubuntu 20.04 image, and since I don’t have permission, _systemctl_...
{ "login": "Hyccccccc", "id": 60806532, "node_id": "MDQ6VXNlcjYwODA2NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/60806532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hyccccccc", "html_url": "https://github.com/Hyccccccc", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8258/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/8258/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6810
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6810/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6810/comments
https://api.github.com/repos/ollama/ollama/issues/6810/events
https://github.com/ollama/ollama/pull/6810
2,526,735,469
PR_kwDOJ0Z1Ps57iMlK
6,810
Create docker-image.yml
{ "login": "liufriendd", "id": 128777784, "node_id": "U_kgDOB6z-OA", "avatar_url": "https://avatars.githubusercontent.com/u/128777784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liufriendd", "html_url": "https://github.com/liufriendd", "followers_url": "https://api.github.com/users/liu...
[]
closed
false
null
[]
null
1
2024-09-15T04:39:02
2024-09-16T20:42:14
2024-09-16T20:42:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6810", "html_url": "https://github.com/ollama/ollama/pull/6810", "diff_url": "https://github.com/ollama/ollama/pull/6810.diff", "patch_url": "https://github.com/ollama/ollama/pull/6810.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6810/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2645
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2645/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2645/comments
https://api.github.com/repos/ollama/ollama/issues/2645/events
https://github.com/ollama/ollama/issues/2645
2,147,315,464
I_kwDOJ0Z1Ps5__W8I
2,645
Biomistral support planned?
{ "login": "DimIsaev", "id": 11172642, "node_id": "MDQ6VXNlcjExMTcyNjQy", "avatar_url": "https://avatars.githubusercontent.com/u/11172642?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DimIsaev", "html_url": "https://github.com/DimIsaev", "followers_url": "https://api.github.com/users/Dim...
[]
closed
false
null
[]
null
3
2024-02-21T17:30:42
2024-02-22T05:19:07
2024-02-22T00:55:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Biomistral support planned?
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2645/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7104
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7104/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7104/comments
https://api.github.com/repos/ollama/ollama/issues/7104/events
https://github.com/ollama/ollama/issues/7104
2,567,600,954
I_kwDOJ0Z1Ps6ZCns6
7,104
Optimizing GPU Usage for AI Models: Splitting Workloads Across Multiple GPUs Even if the Model Fits in One GPU
{ "login": "varyagnord", "id": 124573691, "node_id": "U_kgDOB2zX-w", "avatar_url": "https://avatars.githubusercontent.com/u/124573691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/varyagnord", "html_url": "https://github.com/varyagnord", "followers_url": "https://api.github.com/users/var...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
9
2024-10-05T02:17:29
2024-10-05T13:45:17
2024-10-05T13:44:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a question about how Ollama works and its options for working with AI models. If there are 2 GPUs in a PC, for example, two RTX3090s, and we launch a model that has a size of 20GB VRAM, it will be loaded into one card, preferably the fastest one. This means that processing 20GB of data will be handled by approxi...
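The scheduler does have an opt-in for this: per the project's FAQ, setting OLLAMA_SCHED_SPREAD spreads a model across all visible GPUs even when it would fit on one. A sketch of launching the server with it set:

```python
# Sketch: start `ollama serve` with OLLAMA_SCHED_SPREAD=1 so layers are
# spread across all GPUs rather than packed onto the fastest card.
import os
import subprocess

env = dict(os.environ, OLLAMA_SCHED_SPREAD="1")
proc = subprocess.Popen(["ollama", "serve"], env=env)
print("serving with pid", proc.pid)
```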
{ "login": "varyagnord", "id": 124573691, "node_id": "U_kgDOB2zX-w", "avatar_url": "https://avatars.githubusercontent.com/u/124573691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/varyagnord", "html_url": "https://github.com/varyagnord", "followers_url": "https://api.github.com/users/var...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7104/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6901
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6901/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6901/comments
https://api.github.com/repos/ollama/ollama/issues/6901/events
https://github.com/ollama/ollama/issues/6901
2,539,946,941
I_kwDOJ0Z1Ps6XZIO9
6,901
High CPU usage and slow token generation
{ "login": "maco6096", "id": 8744820, "node_id": "MDQ6VXNlcjg3NDQ4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8744820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maco6096", "html_url": "https://github.com/maco6096", "followers_url": "https://api.github.com/users/maco6...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
1
2024-09-21T03:30:13
2024-09-22T16:44:24
2024-09-22T16:44:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have 32 cores, 64G mem, and a CPU with AVX2; running qwen1.5-7B-chat.gguf, CPU load is 3000% and token generation is very slow. This is my cpu config: (base) [app@T-LSM-1 ~]$ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32...
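On a CPU-only box the usual lever is the documented num_thread option: llama.cpp tends to scale best at the physical core count, and oversubscribing can show up as exactly this kind of 3000% load with slow tokens. A sketch (the model name and the value 16 are assumptions for illustration):

```python
# Sketch: pin generation to the physical core count via the documented
# num_thread option; "qwen:7b" and 16 are assumed values.
import requests

resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen:7b",
    "prompt": "hello",
    "stream": False,
    "options": {"num_thread": 16},
})
resp.raise_for_status()
print(resp.json()["response"])
```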
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6901/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/386
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/386/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/386/comments
https://api.github.com/repos/ollama/ollama/issues/386/events
https://github.com/ollama/ollama/issues/386
1,857,783,219
I_kwDOJ0Z1Ps5uu4Wz
386
Is integration of llama2-chinese supported?
{ "login": "cypggs", "id": 3694954, "node_id": "MDQ6VXNlcjM2OTQ5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3694954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cypggs", "html_url": "https://github.com/cypggs", "followers_url": "https://api.github.com/users/cypggs/foll...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWR...
closed
false
null
[]
null
4
2023-08-19T15:57:07
2023-08-30T20:48:21
2023-08-30T20:48:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://chinese.llama.family/ https://github.com/FlagAlpha/Llama2-Chinese
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/386/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6645
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6645/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6645/comments
https://api.github.com/repos/ollama/ollama/issues/6645/events
https://github.com/ollama/ollama/pull/6645
2,506,521,525
PR_kwDOJ0Z1Ps56dW2q
6,645
Fix gemma2 2b conversion
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-09-05T00:20:31
2024-09-06T00:02:30
2024-09-06T00:02:28
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6645", "html_url": "https://github.com/ollama/ollama/pull/6645", "diff_url": "https://github.com/ollama/ollama/pull/6645.diff", "patch_url": "https://github.com/ollama/ollama/pull/6645.patch", "merged_at": "2024-09-06T00:02:28" }
Gemma2 added some tensors which were not getting named correctly, which caused a collision for the `ffn_norm` tensors. This change fixes the tensor names and adds a new unit test for converting gemma2 2b.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6645/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4710
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4710/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4710/comments
https://api.github.com/repos/ollama/ollama/issues/4710/events
https://github.com/ollama/ollama/issues/4710
2,324,189,841
I_kwDOJ0Z1Ps6KiFKR
4,710
s390x build ollama : running gcc failed
{ "login": "woale", "id": 660094, "node_id": "MDQ6VXNlcjY2MDA5NA==", "avatar_url": "https://avatars.githubusercontent.com/u/660094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woale", "html_url": "https://github.com/woale", "followers_url": "https://api.github.com/users/woale/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
9
2024-05-29T20:26:12
2025-01-27T21:13:44
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama build fails on undefined llama references ``` # github.com/ollama/ollama /usr/local/go/pkg/tool/linux_s390x/link: running gcc failed: exit status 1 /usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_free_model': /tmp/go-build/cgo-gcc-prolog:63:...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4710/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7936
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7936/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7936/comments
https://api.github.com/repos/ollama/ollama/issues/7936/events
https://github.com/ollama/ollama/pull/7936
2,719,052,661
PR_kwDOJ0Z1Ps6EHFHQ
7,936
ci: adjust windows compilers for lint/test
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-12-05T00:26:25
2024-12-05T00:34:39
2024-12-05T00:33:51
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7936", "html_url": "https://github.com/ollama/ollama/pull/7936", "diff_url": "https://github.com/ollama/ollama/pull/7936.diff", "patch_url": "https://github.com/ollama/ollama/pull/7936.patch", "merged_at": "2024-12-05T00:33:51" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7936/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6578
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6578/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6578/comments
https://api.github.com/repos/ollama/ollama/issues/6578/events
https://github.com/ollama/ollama/issues/6578
2,498,880,963
I_kwDOJ0Z1Ps6U8eXD
6,578
`/show info` panics on nil ModelInfo
{ "login": "vimalk78", "id": 3284044, "node_id": "MDQ6VXNlcjMyODQwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3284044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vimalk78", "html_url": "https://github.com/vimalk78", "followers_url": "https://api.github.com/users/vimal...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-08-31T14:41:52
2024-09-01T04:12:18
2024-09-01T04:12:18
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` ollama on main via v1.22.5 ❯ ./ollama run codellama:latest >>> /show info panic: interface conversion: interface {} is nil, not string goroutine 1 [running]: github....
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6578/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2143
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2143/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2143/comments
https://api.github.com/repos/ollama/ollama/issues/2143/events
https://github.com/ollama/ollama/pull/2143
2,094,695,037
PR_kwDOJ0Z1Ps5kw2Ga
2,143
Refine debug logging for llm
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-22T20:28:38
2024-01-22T21:19:19
2024-01-22T21:19:16
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2143", "html_url": "https://github.com/ollama/ollama/pull/2143", "diff_url": "https://github.com/ollama/ollama/pull/2143.diff", "patch_url": "https://github.com/ollama/ollama/pull/2143.patch", "merged_at": "2024-01-22T21:19:16" }
This wires up logging in llama.cpp to always go to stderr, and also turns up logging if OLLAMA_DEBUG is set. This solves a couple problems. We used to emit one line to llama.log in verbose/debug mode before shifting to stdout. Now all the logging from llama.cpp will go to stderr, and the verbosity can be controlle...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2143/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6023
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6023/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6023/comments
https://api.github.com/repos/ollama/ollama/issues/6023/events
https://github.com/ollama/ollama/issues/6023
2,433,814,241
I_kwDOJ0Z1Ps6REQ7h
6,023
Expose unavailable Llama-CPP flags
{ "login": "doomgrave", "id": 18002421, "node_id": "MDQ6VXNlcjE4MDAyNDIx", "avatar_url": "https://avatars.githubusercontent.com/u/18002421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/doomgrave", "html_url": "https://github.com/doomgrave", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-07-28T08:25:04
2024-09-04T01:56:41
2024-09-04T01:56:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please expose all the llama-cpp flags when we configure the model card. For example: offload_kqv, flash_attn, and logits_all can be needed in specific use cases!
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6023/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1971
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1971/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1971/comments
https://api.github.com/repos/ollama/ollama/issues/1971/events
https://github.com/ollama/ollama/pull/1971
2,079,836,002
PR_kwDOJ0Z1Ps5j-oGg
1,971
add max context length check
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-01-12T22:55:03
2024-01-12T23:10:26
2024-01-12T23:10:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1971", "html_url": "https://github.com/ollama/ollama/pull/1971", "diff_url": "https://github.com/ollama/ollama/pull/1971.diff", "patch_url": "https://github.com/ollama/ollama/pull/1971.patch", "merged_at": "2024-01-12T23:10:25" }
Setting a context length greater than what the model was trained for has adverse effects. To prevent this, if the user requests a larger context length, log it and set it to the model's max.
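The clamp itself is a one-liner; a sketch of the behavior described above, with illustrative names rather than the PR's actual identifiers:

```python
# Sketch of the clamping rule described above; identifier names are
# illustrative, not the PR's actual code.
import logging

def effective_num_ctx(requested: int, model_max: int) -> int:
    if requested > model_max:
        logging.warning(
            "requested context %d exceeds model maximum %d; clamping",
            requested, model_max,
        )
        return model_max
    return requested

print(effective_num_ctx(32768, 4096))  # -> 4096, with a warning logged
```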
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1971/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/698
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/698/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/698/comments
https://api.github.com/repos/ollama/ollama/issues/698/events
https://github.com/ollama/ollama/issues/698
1,926,731,522
I_kwDOJ0Z1Ps5y15cC
698
How to uninstall ollama ai on Linux
{ "login": "scalstairo", "id": 146988643, "node_id": "U_kgDOCMLeYw", "avatar_url": "https://avatars.githubusercontent.com/u/146988643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scalstairo", "html_url": "https://github.com/scalstairo", "followers_url": "https://api.github.com/users/sca...
[]
closed
false
null
[]
null
3
2023-10-04T18:08:45
2023-10-08T11:54:42
2023-10-04T18:19:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can ollama be uninstalled on Linux? I don't see an obvious entry in the package listings.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/698/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/698/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6980
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6980/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6980/comments
https://api.github.com/repos/ollama/ollama/issues/6980/events
https://github.com/ollama/ollama/issues/6980
2,550,753,458
I_kwDOJ0Z1Ps6YCWiy
6,980
Tools support is not working right
{ "login": "acastry", "id": 33638575, "node_id": "MDQ6VXNlcjMzNjM4NTc1", "avatar_url": "https://avatars.githubusercontent.com/u/33638575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/acastry", "html_url": "https://github.com/acastry", "followers_url": "https://api.github.com/users/acastr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
13
2024-09-26T14:22:14
2025-01-06T07:39:00
2025-01-06T07:39:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, tools support doesn't work as expected, I guess. When activated, it resolves the right function to be called, but at the same time it no longer returns a normal response for any phrase other than the tool-support example. ### **I am copying the last documentation example for tools support.** ...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6980/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/6980/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3418
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3418/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3418/comments
https://api.github.com/repos/ollama/ollama/issues/3418/events
https://github.com/ollama/ollama/pull/3418
2,216,581,518
PR_kwDOJ0Z1Ps5rOxWa
3,418
Request and model concurrency
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
44
2024-03-30T16:56:41
2024-05-30T01:21:31
2024-04-23T15:31:38
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3418", "html_url": "https://github.com/ollama/ollama/pull/3418", "diff_url": "https://github.com/ollama/ollama/pull/3418.diff", "patch_url": "https://github.com/ollama/ollama/pull/3418.patch", "merged_at": "2024-04-23T15:31:38" }
This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. This change is designed to be "opt in" initially, so the default behavior mimics the current sequential implementation (1 request at a time, and only a single model), but can be changed by settin...
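Per the project's FAQ, the knobs this change introduced are OLLAMA_NUM_PARALLEL (concurrent requests per model) and OLLAMA_MAX_LOADED_MODELS (models resident at once). With the server started accordingly, overlap is easy to observe from a client; a sketch:

```python
# Sketch: fire two generate requests at once to observe concurrency.
# Assumes the server was started with e.g. OLLAMA_NUM_PARALLEL=2.
from concurrent.futures import ThreadPoolExecutor
import requests

def generate(prompt: str) -> str:
    r = requests.post("http://localhost:11434/api/generate", json={
        "model": "llama3", "prompt": prompt, "stream": False,
    }, timeout=300)
    r.raise_for_status()
    return r.json()["response"]

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(generate, p) for p in ("Say hi.", "Count to 3.")]
    for f in futures:
        print(f.result()[:80])
```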
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3418/reactions", "total_count": 95, "+1": 37, "-1": 0, "laugh": 0, "hooray": 19, "confused": 0, "heart": 12, "rocket": 19, "eyes": 8 }
https://api.github.com/repos/ollama/ollama/issues/3418/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7322
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7322/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7322/comments
https://api.github.com/repos/ollama/ollama/issues/7322/events
https://github.com/ollama/ollama/pull/7322
2,606,063,044
PR_kwDOJ0Z1Ps5_fkpi
7,322
Refine default thread selection for NUMA systems
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
10
2024-10-22T17:31:27
2024-11-14T23:51:57
2024-10-30T22:05:46
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7322", "html_url": "https://github.com/ollama/ollama/pull/7322", "diff_url": "https://github.com/ollama/ollama/pull/7322.diff", "patch_url": "https://github.com/ollama/ollama/pull/7322.patch", "merged_at": "2024-10-30T22:05:46" }
Until we have full NUMA support, this adjusts the default thread selection algorithm to count up the number of performance cores across all sockets. Fixes #7287 Fixes #7359
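As a rough client-side analogue of that heuristic, physical cores summed across sockets can be counted with the third-party psutil package; note psutil does not distinguish performance from efficiency cores on hybrid parts, so this is only an approximation:

```python
# Sketch: count physical cores across all sockets, approximating the
# thread-selection heuristic described above. Requires `pip install psutil`.
import psutil

physical = psutil.cpu_count(logical=False)  # cores, not hyperthreads
logical = psutil.cpu_count(logical=True)
print(f"{physical} physical cores / {logical} logical CPUs")
```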
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7322/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5096
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5096/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5096/comments
https://api.github.com/repos/ollama/ollama/issues/5096/events
https://github.com/ollama/ollama/pull/5096
2,356,954,459
PR_kwDOJ0Z1Ps5yrEe_
5,096
Fix a build warning again
{ "login": "coolljt0725", "id": 8232360, "node_id": "MDQ6VXNlcjgyMzIzNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coolljt0725", "html_url": "https://github.com/coolljt0725", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
0
2024-06-17T10:10:51
2024-06-18T00:54:16
2024-06-17T18:47:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5096", "html_url": "https://github.com/ollama/ollama/pull/5096", "diff_url": "https://github.com/ollama/ollama/pull/5096.diff", "patch_url": "https://github.com/ollama/ollama/pull/5096.patch", "merged_at": "2024-06-17T18:47:48" }
With the latest main branch, there is a build warning ``` # github.com/ollama/ollama/gpu In file included from gpu_info_oneapi.h:4, from gpu_info_oneapi.c:3: gpu_info_oneapi.c: In function ‘oneapi_init’: gpu_info_oneapi.c:101:27: warning: format ‘%d’ expects argument of type ‘int’, but argument 3...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5096/timeline
null
null
true
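The truncated body above quotes a `-Wformat` diagnostic: `%d` expects an `int` but receives some other type. A minimal illustration of that warning class and one conventional fix (a width-matched conversion macro from `<inttypes.h>`) follows; the variable name and `uint32_t` type here are assumptions, since the real argument type in `gpu_info_oneapi.c` is cut off in the record.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t device_count = 3; /* assumed type for illustration */

    /* printf("found %d devices\n", device_count);
     * ...would reproduce the "format '%d' expects argument of type 'int'"
     * warning quoted in the PR body above. */

    /* Fix: use a specifier that matches the argument's width. */
    printf("found %" PRIu32 " devices\n", device_count);
    return 0;
}
```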
https://api.github.com/repos/ollama/ollama/issues/2492
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2492/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2492/comments
https://api.github.com/repos/ollama/ollama/issues/2492/events
https://github.com/ollama/ollama/issues/2492
2,134,240,649
I_kwDOJ0Z1Ps5_Ne2J
2,492
System prompt not honored until `ollama serve` is re-run
{ "login": "hyjwei", "id": 76876891, "node_id": "MDQ6VXNlcjc2ODc2ODkx", "avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyjwei", "html_url": "https://github.com/hyjwei", "followers_url": "https://api.github.com/users/hyjwei/fo...
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
2
2024-02-14T12:23:47
2024-02-16T19:43:08
2024-02-16T19:42:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
There are actually two issues regarding the system prompt in the current main branch, and I believe them to be related. # Issue 1: `SYSTEM` prompt in modelfile not honored If I run a model, then create a new one based on the same model, but with a new `SYSTEM` prompt, the new `SYSTEM` prompt is not honored. Killing the cu...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2492/timeline
null
completed
false
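As a concrete illustration of the reproduction described in the report above, here is a hypothetical minimal Modelfile (the base model, derived model name, and prompt text are all invented for the example). Per the report, rebuilding with a changed `SYSTEM` line did not take effect until `ollama serve` was restarted.

```
# Hypothetical Modelfile for reproducing the issue above.
# Build with: ollama create demo -f Modelfile
FROM llama2
SYSTEM You are a terse assistant; answer in one sentence.
```

Editing the `SYSTEM` line and re-running `ollama create demo -f Modelfile` is the step the reporter says is not honored while the original server process keeps running.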