Schema (one record per GitHub issue or pull request in ollama/ollama; ranges show min–max lengths or values):

  url                       string, length 51–54
  repository_url            string, 1 distinct value
  labels_url                string, length 65–68
  comments_url              string, length 60–63
  events_url                string, length 58–61
  html_url                  string, length 39–44
  id                        int64, 1.78B–2.82B
  node_id                   string, length 18–19
  number                    int64, 1–8.69k
  title                     string, length 1–382
  user                      dict
  labels                    list, length 0–5
  state                     string, 2 distinct values
  locked                    bool, 1 class
  assignee                  dict
  assignees                 list, length 0–2
  milestone                 null
  comments                  int64, 0–323
  created_at                timestamp[s]
  updated_at                timestamp[s]
  closed_at                 timestamp[s]
  author_association        string, 4 distinct values
  sub_issues_summary        dict
  active_lock_reason        null
  draft                     bool, 2 classes
  pull_request              dict
  body                      string, length 2–118k
  closed_by                 dict
  reactions                 dict
  timeline_url              string, length 60–63
  performed_via_github_app  null
  state_reason              string, 4 distinct values
  is_pull_request           bool, 2 classes
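The columns mirror the GitHub REST issues payload, with an apparently derived `is_pull_request` flag appended. A minimal sketch of loading and splitting the records with the Hugging Face `datasets` library; the dataset path below is a placeholder, since the hosting location is not stated here:

```python
# Minimal sketch: load the dump and split PRs from plain issues.
# "user/ollama-github-issues" is a hypothetical dataset path.
from datasets import load_dataset

ds = load_dataset("user/ollama-github-issues", split="train")

pulls = ds.filter(lambda row: row["is_pull_request"])
issues = ds.filter(lambda row: not row["is_pull_request"])
print(f"{len(pulls)} pull requests, {len(issues)} issues")

# Field names match the schema listed above.
row = issues[0]
print(row["number"], row["title"], row["state"], row["state_reason"])
```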
#1850 pull request (merged 2024-01-08T21:42:00): Offload layers to GPU based on new model size estimates
  html: https://github.com/ollama/ollama/pull/1850 | api: https://api.github.com/repos/ollama/ollama/issues/1850
  id: 2069658546 | node_id: PR_kwDOJ0Z1Ps5jbt9h
  author: jmorganca (MEMBER) | labels: none | assignees: none
  state: closed | comments: 2 | reactions: 4 (hooray ×4) | closed_by: jmorganca
  created: 2024-01-08T04:52:49 | updated: 2024-01-10T13:28:25 | closed: 2024-01-08T21:42:00
  body: This PR fixes a large number of crashes and "out of memory" errors related to VRAM allocation by using a more accurate estimate of how much memory is required to run a model with a given context size. Models such as `mixtral` will now run on lower-end hardware where they previously would not, even if defaulting to t...
#660 issue: Request: Docker image build having name/tag
  html: https://github.com/ollama/ollama/issues/660 | api: https://api.github.com/repos/ollama/ollama/issues/660
  id: 1920410789 | node_id: I_kwDOJ0Z1Ps5ydySl
  author: jamesbraza (CONTRIBUTOR) | labels: none | assignees: none
  state: closed (completed) | comments: 1 | reactions: none | closed_by: mchiang0610
  created: 2023-09-30T21:54:47 | updated: 2023-09-30T21:58:44 | closed: 2023-09-30T21:58:43
  body: Having just built the Docker image successfully 🥳:
    ```bash
    > sudo docker image ls
    REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
    <none>       <none>   acfffae34e3a   About a minute ago   824MB
    ```
    Running `docker image ls`, there is no info about t...
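For context on the record above: the `<none>:<none>` image comes from running `docker build` without `-t`. A minimal sketch of tagging at build time, wrapped in Python to keep the examples in one language; the tag name is illustrative:

```python
# Minimal sketch: pass -t at build time so `docker image ls` shows a
# repository and tag instead of <none>. "ollama/ollama:local" is illustrative.
import subprocess

subprocess.run(["docker", "build", "-t", "ollama/ollama:local", "."], check=True)
print(subprocess.run(["docker", "image", "ls", "ollama/ollama"],
                     capture_output=True, text=True).stdout)
```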
#1976 issue: Cloud storage support
  html: https://github.com/ollama/ollama/issues/1976 | api: https://api.github.com/repos/ollama/ollama/issues/1976
  id: 2080282584 | node_id: I_kwDOJ0Z1Ps57_pfY
  author: beliboba (NONE) | labels: none | assignees: none
  state: closed (completed) | comments: 6 | reactions: none | closed_by: beliboba
  created: 2024-01-13T12:50:34 | updated: 2024-01-18T17:40:47 | closed: 2024-01-18T17:40:47
  body: Is there any support for cloud storage for models? If not, will it ever be implemented?
#5418 issue: DeepSeek-Coder-V2 (Lite) spouts GGGs
  html: https://github.com/ollama/ollama/issues/5418 | api: https://api.github.com/repos/ollama/ollama/issues/5418
  id: 2384830405 | node_id: I_kwDOJ0Z1Ps6OJZ_F
  author: lorenzodimauro97 (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 9 | reactions: none | closed_by: jmorganca
  created: 2024-07-01T22:23:39 | updated: 2024-07-02T18:10:14 | closed: 2024-07-01T23:04:38
  body: ### What is the issue? Eventually, when using deepseek-coder-v2:16b-lite-instruct-q8_0 with Open Web UI (but also via other means, for example continuedev), the model will stop working and spout GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG regardless of the input: ![image](https://github.com/ollama/ollama/assets/50343905/b90c39ae-...
#8629 issue: Choose path to install on Windows
  html: https://github.com/ollama/ollama/issues/8629 | api: https://api.github.com/repos/ollama/ollama/issues/8629
  id: 2815526057 | node_id: I_kwDOJ0Z1Ps6n0YSp
  author: EvgeniGenchev (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 1 | reactions: none | closed_by: rick-github
  created: 2025-01-28T12:31:56 | updated: 2025-01-28T21:31:28 | closed: 2025-01-28T21:31:28
  body: The title is pretty self-explanatory. It would be nice to choose the folder where Ollama is installed on Windows instead of defaulting to C:\Users\...
#4316 pull request (merged 2024-05-10T17:02:35): Bump VRAM buffer back up
  html: https://github.com/ollama/ollama/pull/4316 | api: https://api.github.com/repos/ollama/ollama/issues/4316
  id: 2290020182 | node_id: PR_kwDOJ0Z1Ps5vHUAi
  author: dhiltgen (COLLABORATOR) | labels: none | assignees: none
  state: closed | comments: 0 | reactions: none | closed_by: dhiltgen
  created: 2024-05-10T16:16:16 | updated: 2024-05-10T17:02:38 | closed: 2024-05-10T17:02:35
  body: Under stress scenarios we're seeing OOMs, so this should help stabilize the allocations under heavy concurrency stress.
#524 pull request (merged 2023-09-18T19:16:33): subprocess improvements
  html: https://github.com/ollama/ollama/pull/524 | api: https://api.github.com/repos/ollama/ollama/issues/524
  id: 1895148267 | node_id: PR_kwDOJ0Z1Ps5aRLK_
  author: BruceMacD (CONTRIBUTOR) | labels: none | assignees: none
  state: closed | comments: 0 | reactions: none | closed_by: BruceMacD
  created: 2023-09-13T19:42:08 | updated: 2023-09-18T19:16:34 | closed: 2023-09-18T19:16:33
  body:
    - increase start-up timeout
    - when the runner fails to start, fail rather than timing out
    - try runners in order rather than choosing one runner
    - embed the metal runner in the metal dir rather than gpu
    - refactor logging and error messages
    Resolves #485
#6013 issue: Getting 404 page not found on chat completions endpoint with new version
  html: https://github.com/ollama/ollama/issues/6013 | api: https://api.github.com/repos/ollama/ollama/issues/6013
  id: 2433393234 | node_id: I_kwDOJ0Z1Ps6RCqJS
  author: ajasingh (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 1 | reactions: none | closed_by: ajasingh
  created: 2024-07-27T09:32:32 | updated: 2024-07-27T09:45:40 | closed: 2024-07-27T09:45:39
  body: ### What is the issue? I am successfully running the llama3.1 model locally in the command prompt, but when I try to access it via the API it keeps giving me 404 not found:
    curl --location --request GET 'http://localhost:11434/v1/chat/completions' \
      --header 'Content-Type: application/json' \
      --data '{ "model": "lla...
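The curl in the report above issues a GET against an endpoint that accepts POST, which would explain the 404. A minimal sketch of a working call to Ollama's OpenAI-compatible endpoint, assuming a default local server with llama3.1 pulled:

```python
# Minimal sketch: POST (not GET) to the OpenAI-compatible chat endpoint.
import json
import urllib.request

payload = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```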
#7476 issue: llama3.2 11b setup error
  html: https://github.com/ollama/ollama/issues/7476 | api: https://api.github.com/repos/ollama/ollama/issues/7476
  id: 2630922055 | node_id: I_kwDOJ0Z1Ps6c0K9H
  author: Teramime (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 2 | reactions: none | closed_by: dhiltgen
  created: 2024-11-03T02:44:32 | updated: 2024-11-03T21:31:05 | closed: 2024-11-03T21:31:05
  body: ### What is the issue? I'm trying to develop an agent that analyzes images using llama3.2 11b. Development environment: OS: Windows 11 Pro, CPU: Intel i9-14900K, RAM: 32G, VGA: RTX 4080 Super, release v0.3.14. When I install ollama and run `ollama run x/llama3.2-vision`, the installation goes well, but it terminat...
#5048 issue: Add 'free' command, to free the currently running model out of memory.
  html: https://github.com/ollama/ollama/issues/5048 | api: https://api.github.com/repos/ollama/ollama/issues/5048
  id: 2353904198 | node_id: I_kwDOJ0Z1Ps6MTbpG
  author: Dalibor-P (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 1 | reactions: 2 (+1 ×2) | closed_by: Dalibor-P
  created: 2024-06-14T18:44:52 | updated: 2024-09-29T09:19:12 | closed: 2024-09-29T09:19:12
  body: Add a new command, possibly `ollama free`, to immediately free the currently running model from memory instead of waiting the default five minutes, as an alternative to the `keep alive` parameter. Additionally, add the option to the context menu of the ollama taskbar icon, next to the `view logs` and `quit ollama` buttons.
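For context on the request above: the API already allows unloading a model immediately by sending a generate request with `keep_alive` set to 0. A minimal sketch, assuming a default local server ("llama3" is an illustrative model name); newer releases also expose this as `ollama stop <model>`:

```python
# Minimal sketch: keep_alive=0 asks the server to unload the model now,
# rather than after the default five-minute idle window.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama3", "keep_alive": 0}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req).read()
```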
#5661 issue: num_ctx parameter does not work on Linux
  html: https://github.com/ollama/ollama/issues/5661 | api: https://api.github.com/repos/ollama/ollama/issues/5661
  id: 2406643762 | node_id: I_kwDOJ0Z1Ps6Pcngy
  author: ronchengang (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 5 | reactions: 1 (+1 ×1) | closed_by: ronchengang
  created: 2024-07-13T02:37:41 | updated: 2024-10-16T06:10:02 | closed: 2024-10-16T05:55:33
  body: ### What is the issue? Although the log shows n_ctx = 102400, the input prompt still truncates to 2048. Ollama version: 0.2.1. OS: AWS Linux, instance type g5.xlarge, GPU: Nvidia A10 24G, version 12.x. Model: Qwen2-7B-Instruct, GGUF V3. Ollama server log:
    ```
    Device 0: NVIDIA A10G, compute c...
    ```
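For reference, `num_ctx` is normally supplied per request under `options`. A minimal sketch of that form, assuming a default local server; the model tag and context size are illustrative, not taken from the report:

```python
# Minimal sketch: request a larger context window via options.num_ctx.
import json
import urllib.request

payload = {
    "model": "qwen2:7b-instruct",
    "prompt": "Summarize: ...",
    "options": {"num_ctx": 8192},
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.load(urllib.request.urlopen(req))["response"])
```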
#4501 issue: Does Ollama currently plan to support multiple acceleration frameworks
  html: https://github.com/ollama/ollama/issues/4501 | api: https://api.github.com/repos/ollama/ollama/issues/4501
  id: 2303035854 | node_id: I_kwDOJ0Z1Ps6JRYnO
  author: glide-the (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 1 | reactions: 1 (heart ×1) | closed_by: glide-the
  created: 2024-05-17T15:31:51 | updated: 2024-07-09T05:17:34 | closed: 2024-07-09T05:17:33
  body: #### Requirements Does Ollama currently plan to support multiple acceleration frameworks? We understand that Ollama currently leverages llama.cpp for inference acceleration, which supports only the Llama architecture. GLM has made some modifications to the model. We are very keen on seeing the GLM ecosystem ...
#7231 pull request (open, not merged): fix: consider any status code as redirect
  html: https://github.com/ollama/ollama/pull/7231 | api: https://api.github.com/repos/ollama/ollama/issues/7231
  id: 2593186089 | node_id: PR_kwDOJ0Z1Ps5-5QU1
  author: XciD (NONE) | labels: none | assignees: none
  state: open | comments: 0 | reactions: 1 (rocket ×1)
  created: 2024-10-16T22:34:11 | updated: 2024-12-02T20:40:26
  body: When retrieving the URL for downloading a model, Ollama always assumes the model is hosted on a CDN. This PR resolves that:
    - if a 200 is returned on the same host, just return the current URL
    - consider any 3xx as a redirect URL
#7850 pull request (merged 2024-11-27T00:08:09): openai: remove unused error code
  html: https://github.com/ollama/ollama/pull/7850 | api: https://api.github.com/repos/ollama/ollama/issues/7850
  id: 2696476961 | node_id: PR_kwDOJ0Z1Ps6DQXCy
  author: BruceMacD (CONTRIBUTOR) | labels: none | assignees: none
  state: closed | comments: 0 | reactions: none | closed_by: BruceMacD
  created: 2024-11-26T23:38:39 | updated: 2024-11-27T00:08:11 | closed: 2024-11-27T00:08:10
  body: The writeError takes a code argument which is no longer used. Remove it for clarity.
#3987 issue: Increase the number of CPU usage for ollama_llama_se in linux
  html: https://github.com/ollama/ollama/issues/3987 | api: https://api.github.com/repos/ollama/ollama/issues/3987
  id: 2267308728 | node_id: I_kwDOJ0Z1Ps6HJGK4
  author: wwjCMP (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 0 | reactions: none | closed_by: wwjCMP
  created: 2024-04-28T02:53:48 | updated: 2024-04-28T04:05:52 | closed: 2024-04-28T04:05:29
  body: Can the number of CPU cores used by the 'ollama_llama_se' process be increased through settings on Linux? As shown in the figure, the CPU is running at full load, but the 'ollama_llama_se' process can only use about thirty cores in competition with other continuously running commands. Which of these continuously runnin...
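For context on the question above: the CPU thread count used by the runner is exposed as the `num_thread` option (also settable as `PARAMETER num_thread` in a Modelfile). A minimal sketch with illustrative values, assuming a default local server:

```python
# Minimal sketch: raise the CPU thread count per request via options.num_thread.
import json
import urllib.request

payload = {
    "model": "llama3",
    "prompt": "Hello",
    "options": {"num_thread": 32},
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.load(urllib.request.urlopen(req))["response"])
```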
#3683 issue: mixtral:22b OLLAMA 0.1.32 llama runner process no longer running: -1 cudaMalloc failed: out of memory
  html: https://github.com/ollama/ollama/issues/3683 | api: https://api.github.com/repos/ollama/ollama/issues/3683
  id: 2246915257 | node_id: I_kwDOJ0Z1Ps6F7TS5
  author: subhashdasyam (NONE) | labels: bug, +1 more (truncated) | assignees: none
  state: closed (completed) | comments: 9 | reactions: 1 (+1 ×1) | closed_by: jmorganca
  created: 2024-04-16T21:31:15 | updated: 2024-05-07T14:51:16 | closed: 2024-04-17T00:41:46
  body: ### What is the issue?
    ```
    Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
    Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
    Apr...
    ```
#5591 issue: Upgrading removes all models
  html: https://github.com/ollama/ollama/issues/5591 | api: https://api.github.com/repos/ollama/ollama/issues/5591
  id: 2400005016 | node_id: I_kwDOJ0Z1Ps6PDSuY
  author: loranger (NONE) | labels: bug | assignee: jmorganca
  state: closed (completed) | comments: 3 | reactions: none | closed_by: jmorganca
  created: 2024-07-10T07:43:19 | updated: 2024-12-02T07:32:15 | closed: 2024-11-17T18:51:29
  body: ### What is the issue? Hi, I just upgraded my ollama setup manually by running the install script again, as [specified](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama), but I also lost all my models; the ones pulled, the ones built, all are gone. I suppose that's not the desire...
#5271 issue: Low VRAM Utilization on RTX 3090 When Models are Split Across Multiple CUDA Devices (separate ollama serve)
  html: https://github.com/ollama/ollama/issues/5271 | api: https://api.github.com/repos/ollama/ollama/issues/5271
  id: 2371861512 | node_id: I_kwDOJ0Z1Ps6NX7wI
  author: chrisoutwright (NONE) | labels: bug, +1 more (truncated) | assignee: dhiltgen
  state: closed (completed) | comments: 4 | reactions: none | closed_by: dhiltgen
  created: 2024-06-25T07:04:56 | updated: 2024-08-01T22:38:04 | closed: 2024-08-01T22:38:04
  body: ### What is the issue?
    ## Environment
    - Ollama Version: 0.1.45
    - Operating System: Win10
    - GPU Type: NVIDIA RTX 3090, GTX 1080Ti
    ## Issue Description
    I am experiencing an issue with VRAM utilization in Ollama 0.1.45. When using the codestral example to split models across different CUDA devices ...
#3011 issue: Starcoder2 crashes latest ollama container
  html: https://github.com/ollama/ollama/issues/3011 | api: https://api.github.com/repos/ollama/ollama/issues/3011
  id: 2176669801 | node_id: I_kwDOJ0Z1Ps6BvVhp
  author: madelponte (NONE) | labels: none | assignees: none
  state: closed (completed) | comments: 3 | reactions: 1 (+1 ×1) | closed_by: jmorganca
  created: 2024-03-08T19:45:23 | updated: 2024-03-08T21:35:41 | closed: 2024-03-08T21:35:40
  body: Image ID: `76b4fbf17cef`. Command run: `ollama run starcoder2`. Tried with both docker and podman, and the same thing happens with either. Error:
    ```go
    time=2024-03-08T19:33:43.460Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
    time=2024-03-08T19:33:43.460Z level=INFO source=cpu_common.go:11 msg="CPU has AVX...
    ```
#7185 issue: [Feature Request] Command to browse the model library / search for a specific model from the ollama CLI.
  html: https://github.com/ollama/ollama/issues/7185 | api: https://api.github.com/repos/ollama/ollama/issues/7185
  id: 2583207847 | node_id: I_kwDOJ0Z1Ps6Z-J-n
  author: AFellowSpeedrunner (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 6 | reactions: none | closed_by: jmorganca
  created: 2024-10-12T15:50:00 | updated: 2024-10-13T12:22:57 | closed: 2024-10-13T04:56:04
  body: I recently had an idea. What if there was a command to search and browse the model library from the ollama CLI? I'm imagining something like `ollama search llama3` and `ollama browse`. For example, the search would return something like (rough example; GitHub formatting kinda broke it and I don't know how to fix it): MO...
#4931 issue: Release Note Issue
  html: https://github.com/ollama/ollama/issues/4931 | api: https://api.github.com/repos/ollama/ollama/issues/4931
  id: 2341634421 | node_id: I_kwDOJ0Z1Ps6LkoF1
  author: karaketir16 (NONE) | labels: none | assignees: none
  state: closed (completed) | comments: 1 | reactions: none | closed_by: jmorganca
  created: 2024-06-08T12:25:51 | updated: 2024-06-08T20:27:53 | closed: 2024-06-08T20:27:53
  body: In the release notes for [v0.1.34](https://github.com/ollama/ollama/releases/tag/v0.1.34), under the "What's Changed" section, the environment variable is incorrectly listed as `OLLAMA_MAX_LOADED`. The correct version is `OLLAMA_MAX_LOADED_MODELS`.
#2295 issue: multimodal processing doesn't work for one-shot CLI
  html: https://github.com/ollama/ollama/issues/2295 | api: https://api.github.com/repos/ollama/ollama/issues/2295
  id: 2111185400 | node_id: I_kwDOJ0Z1Ps591iH4
  author: dhiltgen (COLLABORATOR) | labels: feature request | assignee: jmorganca
  state: closed (completed) | comments: 1 | reactions: 1 (+1 ×1) | closed_by: jmorganca
  created: 2024-01-31T23:50:32 | updated: 2024-02-02T05:33:07 | closed: 2024-02-02T05:33:07
  body: This doesn't work:
    ```
    % ollama run llava "whats in this image ./image.jpg"
    I'm sorry, but as a text-based AI language model, I am not able to directly view or interpret images. However, if the image is related to the topic of data science or machine learning, it could potentially be something like a dataset, a vi...
    ```
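For context on the record above: the REST API handled images at the time even though the one-shot CLI form did not; images are sent base64-encoded in the `images` field of a generate request. A minimal sketch, assuming a local server with `llava` pulled and an `image.jpg` on disk:

```python
# Minimal sketch: send an image to a multimodal model over the REST API.
import base64
import json
import urllib.request

with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "llava",
    "prompt": "What is in this image?",
    "images": [image_b64],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.load(urllib.request.urlopen(req))["response"])
```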
#7685 issue: Streaming chat/completions behind a gateway with timeout
  html: https://github.com/ollama/ollama/issues/7685 | api: https://api.github.com/repos/ollama/ollama/issues/7685
  id: 2661647183 | node_id: I_kwDOJ0Z1Ps6epYNP
  author: Upabjojr (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 3 | reactions: none | closed_by: rick-github
  created: 2024-11-15T11:04:48 | updated: 2024-12-23T07:53:52 | closed: 2024-12-23T07:53:52
  body: I am using Ollama on a server behind a gateway that has a 30 second timeout on every forwarded HTTP request. If Ollama takes more than 30 seconds to respond to the HTTP request, the connection will be reset. So far, enabling streaming on chat/completions has been an efficient workaround, as streaming chunks of gener...
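A minimal sketch of the workaround the reporter describes: consume `/api/chat` as a stream so bytes start flowing well before a 30-second gateway timeout. Assumes a default local server; the model name is illustrative:

```python
# Minimal sketch: read the NDJSON stream chunk by chunk as it arrives.
import json
import urllib.request

payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Write a long story."}],
    "stream": True,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    for line in resp:  # one JSON object per line
        if not line.strip():
            continue
        chunk = json.loads(line)
        print(chunk["message"]["content"], end="", flush=True)
        if chunk.get("done"):
            break
```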
#5839 issue: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
  html: https://github.com/ollama/ollama/issues/5839 | api: https://api.github.com/repos/ollama/ollama/issues/5839
  id: 2421807379 | node_id: I_kwDOJ0Z1Ps6QWdkT
  author: CaptainDP (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 2 | reactions: none | closed_by: CaptainDP
  created: 2024-07-22T03:11:37 | updated: 2024-07-22T07:01:56 | closed: 2024-07-22T07:01:56
  body: ### What is the issue? Error message: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED, current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/common.cuh:826, cublasCreate_v2(&cublas_handles[device]). Model: qwen2-sft, used llama.cpp/convert_hf_to_gguf.py to transfer to...
#1758 issue: 💡 "ollama --verify" to validate a model
  html: https://github.com/ollama/ollama/issues/1758 | api: https://api.github.com/repos/ollama/ollama/issues/1758
  id: 2061960258 | node_id: I_kwDOJ0Z1Ps565wRC
  author: adriens (NONE) | labels: feature request | assignees: none
  state: closed (completed) | comments: 2 | reactions: 2 (+1 ×2) | closed_by: pdevine
  created: 2024-01-02T05:33:31 | updated: 2024-03-11T21:26:48 | closed: 2024-03-11T20:33:16
  body: # ❔ About Sometimes we may only need to validate that we can compile a model with `ollama`... without having to download the whole base model. **👉 In a few words, this would help tell, very fast and very easily, whether an `ollama` modelfile could be used. 👈** # 💡 Feature request Implement `ollama --verify...
#2483 pull request (open, not merged): update default registry domain
  html: https://github.com/ollama/ollama/pull/2483 | api: https://api.github.com/repos/ollama/ollama/issues/2483
  id: 2133357614 | node_id: PR_kwDOJ0Z1Ps5mzpiS
  author: mxyng (CONTRIBUTOR) | labels: none | assignees: none
  state: open | comments: 1 | reactions: none
  created: 2024-02-14T00:40:03 | updated: 2024-12-10T21:50:54
  body: Update the default registry domain from registry.ollama.ai to ollama.com. Migrate models by moving them to their new location; this is one-directional.
#6859 issue: Something got changed in the build process and I seem unable to force CUDA/CUBLAS use.
  html: https://github.com/ollama/ollama/issues/6859 | api: https://api.github.com/repos/ollama/ollama/issues/6859
  id: 2534379222 | node_id: I_kwDOJ0Z1Ps6XD47W
  author: phalexo (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 0 | reactions: none | closed_by: phalexo
  created: 2024-09-18T18:18:21 | updated: 2024-09-18T18:39:35 | closed: 2024-09-18T18:39:35
  body: ### What is the issue? How do I ensure that ollama is built with CUDA/CUBLAS support? I don't see anything in the README.md to that end. OS: Linux. GPU: Nvidia. CPU: Intel. Ollama version: just built from source just now.
#6234 issue: File Name with Empty Space Will Not be Recognize
  html: https://github.com/ollama/ollama/issues/6234 | api: https://api.github.com/repos/ollama/ollama/issues/6234
  id: 2453866619 | node_id: I_kwDOJ0Z1Ps6SQwh7
  author: Mo-enen (NONE) | labels: bug | assignees: none
  state: closed (completed) | comments: 6 | reactions: 1 (+1 ×1) | closed_by: Mo-enen
  created: 2024-08-07T16:19:09 | updated: 2024-08-08T07:40:05 | closed: 2024-08-08T07:40:05
  body: As the image below shows, the Chinese content is the correct response from the LLM, not an error message. ![error](https://github.com/user-attachments/assets/83276219-8c15-4b1d-89f2-0f8996889376) OS: Windows. Ollama version: 0.3.4.
#498 issue: SSL certificate error.
  html: https://github.com/ollama/ollama/issues/498 | api: https://api.github.com/repos/ollama/ollama/issues/498
  id: 1888084370 | node_id: I_kwDOJ0Z1Ps5wieGS
  author: ggozad (CONTRIBUTOR) | labels: none | assignees: none
  state: closed (completed) | comments: 0 | reactions: none | closed_by: ggozad
  created: 2023-09-08T17:42:09 | updated: 2023-09-08T20:25:35 | closed: 2023-09-08T20:25:35
  body: Hey! Just installed Ollama on my brand-new MacBook. When trying to pull a model, it seems there is a certificate error on the model registry:
    ```
    ollama pull llama2
    pulling manifest
    Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": tls: failed to verify certificate: ...
    ```
https://api.github.com/repos/ollama/ollama/issues/5385
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5385/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5385/comments
https://api.github.com/repos/ollama/ollama/issues/5385/events
https://github.com/ollama/ollama/issues/5385
2,381,908,177
I_kwDOJ0Z1Ps6N-QjR
5,385
Provide a single command for "serve + pull model", to be used in CI/CD
{ "login": "steren", "id": 360895, "node_id": "MDQ6VXNlcjM2MDg5NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/360895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/steren", "html_url": "https://github.com/steren", "followers_url": "https://api.github.com/users/steren/follow...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
5
2024-06-29T18:35:39
2024-07-29T18:17:33
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am building a container image on top of the official `ollama/ollama` image and I want to store in this image the model I intend to serve, so that I do not have to pull it after startup. The use case is to run Ollama in an autoscaled container environment. The issue is that today, Ollama requires `ollama serve` bef...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5385/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5385/timeline
null
null
false
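A sketch of the usual workaround for the request above: start the server in the background, pull, then stop it. The model name and the sleep are placeholders; a real build step would poll the API (e.g. /api/tags) until it answers:

```
#!/bin/sh
# suitable for a Dockerfile RUN step to bake a model into the image
ollama serve &
pid=$!
sleep 5              # crude readiness wait; poll the API in real use
ollama pull llama3   # example model
kill "$pid"
```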
https://api.github.com/repos/ollama/ollama/issues/5051
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5051/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5051/comments
https://api.github.com/repos/ollama/ollama/issues/5051/events
https://github.com/ollama/ollama/pull/5051
2,354,141,159
PR_kwDOJ0Z1Ps5yhhWO
5,051
add model capabilities
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
2
2024-06-14T21:29:14
2024-07-02T21:26:09
2024-07-02T21:26:07
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5051", "html_url": "https://github.com/ollama/ollama/pull/5051", "diff_url": "https://github.com/ollama/ollama/pull/5051.diff", "patch_url": "https://github.com/ollama/ollama/pull/5051.patch", "merged_at": "2024-07-02T21:26:07" }
Detect completion capability by looking at model KVs. With this change, ollama correctly detects that a model like [jina/jina-embeddings-v2-small-en](https://ollama.com/jina/jina-embeddings-v2-small-en) is an embedding model (as opposed to a text completion model).
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5051/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7226
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7226/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7226/comments
https://api.github.com/repos/ollama/ollama/issues/7226/events
https://github.com/ollama/ollama/issues/7226
2,592,038,509
I_kwDOJ0Z1Ps6af15t
7,226
Library tags not present in model information - RFC
{ "login": "elsatch", "id": 653433, "node_id": "MDQ6VXNlcjY1MzQzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elsatch", "html_url": "https://github.com/elsatch", "followers_url": "https://api.github.com/users/elsatch/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-10-16T14:15:34
2024-10-16T14:15:34
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Today, I was using a third-party program alongside Ollama (Msty). That program has support for text and vision models. As I was trying MiniCPM-V, a vision model, it was not detected by the program. I had assumed that the Vision tag present on the library would also be present in the model information. But that does not...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7226/timeline
null
null
false
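For context on the RFC above, the metadata in question comes from the show endpoint; a sketch of what a client like Msty would query today (model name as an example):

```
curl http://localhost:11434/api/show -d '{ "model": "minicpm-v" }'
```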
https://api.github.com/repos/ollama/ollama/issues/4398
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4398/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4398/comments
https://api.github.com/repos/ollama/ollama/issues/4398/events
https://github.com/ollama/ollama/issues/4398
2,292,426,426
I_kwDOJ0Z1Ps6Io6a6
4,398
KeyError: 'name' when using completions with tool use in mistral
{ "login": "r4881t", "id": 81687400, "node_id": "MDQ6VXNlcjgxNjg3NDAw", "avatar_url": "https://avatars.githubusercontent.com/u/81687400?v=4", "gravatar_id": "", "url": "https://api.github.com/users/r4881t", "html_url": "https://github.com/r4881t", "followers_url": "https://api.github.com/users/r4881t/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-05-13T10:28:57
2024-09-28T03:27:42
2024-05-15T15:20:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am running Ollama + Litellm with Autogen. When I try it, I keep getting the error below ``` Traceback (most recent call last): File "/Users/pranavprakash/workspace/litellm/venv/lib/python3.11/site-packages/litellm/llms/ollama_chat.py", line 422, in ollama_acompletion "function"...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4398/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6589
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6589/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6589/comments
https://api.github.com/repos/ollama/ollama/issues/6589/events
https://github.com/ollama/ollama/issues/6589
2,500,132,754
I_kwDOJ0Z1Ps6VBP-S
6,589
Can this be used with "LM Studio" to share models? If so, how can it be modified?
{ "login": "Willy-Shenn", "id": 79782696, "node_id": "MDQ6VXNlcjc5NzgyNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/79782696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Willy-Shenn", "html_url": "https://github.com/Willy-Shenn", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
2
2024-09-02T06:42:02
2024-09-02T21:58:24
2024-09-02T21:58:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am currently using two UI systems, but they cannot share models (possibly due to differences in how the models are identified and created). Even after modifying the environment variables, neither UI can use models from the same path. Is there anyone who can guide me on how to modify the two UIs so they can use model...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6589/timeline
null
completed
false
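One direction that works today for the question above, sketched under the assumption that the other UI keeps plain GGUF files on disk: import the file into ollama's own store via a Modelfile (the path is an example):

```
cat > Modelfile <<'EOF'
FROM /path/to/shared/model.gguf
EOF
ollama create shared-model -f Modelfile
```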
https://api.github.com/repos/ollama/ollama/issues/4448
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4448/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4448/comments
https://api.github.com/repos/ollama/ollama/issues/4448/events
https://github.com/ollama/ollama/issues/4448
2,297,242,533
I_kwDOJ0Z1Ps6I7SOl
4,448
Streaming Chat Completion via OpenAI API should support stream option to include Usage
{ "login": "odrobnik", "id": 333270, "node_id": "MDQ6VXNlcjMzMzI3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/333270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/odrobnik", "html_url": "https://github.com/odrobnik", "followers_url": "https://api.github.com/users/odrobni...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
2
2024-05-15T08:43:18
2024-09-03T15:39:56
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In streaming mode, the OpenAI chat completion has a new parameter to include usage information after the chunks. You just add `{ "include_usage": true }` to the request. Then the final chunks will look like this: ``` ... data: {"id":"chatcmpl-9P4UJf7DEdyXVro2VOMRMT9qKR0bC","object":"chat.completion.chunk","cre...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4448/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4448/timeline
null
null
false
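For reference, the OpenAI parameter the request above refers to is nested under `stream_options`; a hedged sketch of the desired call against ollama's OpenAI-compatible endpoint (model name as an example):

```
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "stream": true,
    "stream_options": { "include_usage": true },
    "messages": [{ "role": "user", "content": "Hello" }]
  }'
```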
https://api.github.com/repos/ollama/ollama/issues/4706
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4706/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4706/comments
https://api.github.com/repos/ollama/ollama/issues/4706/events
https://github.com/ollama/ollama/issues/4706
2,323,719,550
I_kwDOJ0Z1Ps6KgSV-
4,706
22B Codestral model
{ "login": "DuckyBlender", "id": 42645784, "node_id": "MDQ6VXNlcjQyNjQ1Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DuckyBlender", "html_url": "https://github.com/DuckyBlender", "followers_url": "https://api.github.c...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-05-29T16:19:24
2024-05-29T20:02:22
2024-05-29T20:02:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/mistralai/Codestral-22B-v0.1 https://mistral.ai/news/codestral/
{ "login": "DuckyBlender", "id": 42645784, "node_id": "MDQ6VXNlcjQyNjQ1Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/42645784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DuckyBlender", "html_url": "https://github.com/DuckyBlender", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4706/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4706/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1919
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1919/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1919/comments
https://api.github.com/repos/ollama/ollama/issues/1919/events
https://github.com/ollama/ollama/issues/1919
2,076,033,030
I_kwDOJ0Z1Ps57vcAG
1,919
create model: model converted to GGUF does not perform as expected
{ "login": "quanpinjie", "id": 2564119, "node_id": "MDQ6VXNlcjI1NjQxMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2564119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/quanpinjie", "html_url": "https://github.com/quanpinjie", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
2
2024-01-11T08:25:25
2024-03-12T22:35:26
2024-03-12T22:35:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I converted Baichuan2 to GGUF and created a model, but the performance is poor. Do I need to configure anything else? Modelfile: FROM ./baichuan2-ggml-model-f16.gguf ![image](https://github.com/jmorganca/ollama/assets/2564119/ea70b5b6-9729-4a93-b990-a4ce439e6921)
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1919/timeline
null
completed
false
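The usual answer to "do I need to configure anything else": a bare `FROM` imports only the weights, so a chat model typically also needs its prompt template and stop tokens. A hedged sketch; the template and stop token below are placeholders, not Baichuan2's real format:

```
cat > Modelfile <<'EOF'
FROM ./baichuan2-ggml-model-f16.gguf
TEMPLATE """{{ .System }}

{{ .Prompt }}"""
PARAMETER stop "</s>"
EOF
ollama create baichuan2 -f Modelfile
```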
https://api.github.com/repos/ollama/ollama/issues/4779
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4779/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4779/comments
https://api.github.com/repos/ollama/ollama/issues/4779/events
https://github.com/ollama/ollama/pull/4779
2,329,458,752
PR_kwDOJ0Z1Ps5xNjcq
4,779
update welcome prompt in windows to `llama3`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-06-02T04:00:45
2024-06-02T04:05:52
2024-06-02T04:05:51
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4779", "html_url": "https://github.com/ollama/ollama/pull/4779", "diff_url": "https://github.com/ollama/ollama/pull/4779.diff", "patch_url": "https://github.com/ollama/ollama/pull/4779.patch", "merged_at": "2024-06-02T04:05:51" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4779/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1073
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1073/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1073/comments
https://api.github.com/repos/ollama/ollama/issues/1073/events
https://github.com/ollama/ollama/issues/1073
1,987,359,804
I_kwDOJ0Z1Ps52dLQ8
1,073
More fine-grained download speed
{ "login": "Dialga", "id": 5157928, "node_id": "MDQ6VXNlcjUxNTc5Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5157928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dialga", "html_url": "https://github.com/Dialga", "followers_url": "https://api.github.com/users/Dialga/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2023-11-10T10:33:32
2024-01-17T23:52:24
2024-01-17T23:52:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Currently, when downloading large models, the progress shows `16/19 GB`; it would be more helpful to show floats, e.g. `16.22/19.3 GB`.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1073/timeline
null
completed
false
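A worked illustration of the requested formatting, converting byte counts to two-decimal gigabytes (the numbers are examples):

```
awk 'BEGIN { printf "%.2f/%.2f GB\n", 16220000000/1e9, 19300000000/1e9 }'
# prints: 16.22/19.30 GB
```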
https://api.github.com/repos/ollama/ollama/issues/8410
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8410/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8410/comments
https://api.github.com/repos/ollama/ollama/issues/8410/events
https://github.com/ollama/ollama/pull/8410
2,785,995,103
PR_kwDOJ0Z1Ps6HpLv_
8,410
sample: add sampling package for new engine
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
open
false
null
[]
null
1
2025-01-14T02:04:26
2025-01-29T23:09:07
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8410", "html_url": "https://github.com/ollama/ollama/pull/8410", "diff_url": "https://github.com/ollama/ollama/pull/8410.diff", "patch_url": "https://github.com/ollama/ollama/pull/8410.patch", "merged_at": null }
This package introduces a first pass at the sampler for the new engine. It's super simple to write your own sampler; people would be able to build and run their own from source as well. Would like your thoughts on the following: - Go-isms! Please help with writing idiomatic Go. - Should we keep min-p? openai do...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8410/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6070
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6070/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6070/comments
https://api.github.com/repos/ollama/ollama/issues/6070/events
https://github.com/ollama/ollama/issues/6070
2,437,096,604
I_kwDOJ0Z1Ps6RQySc
6,070
Run Ollama on multiple GPUs using ollama run
{ "login": "atharvnagrikar", "id": 111486339, "node_id": "U_kgDOBqUlgw", "avatar_url": "https://avatars.githubusercontent.com/u/111486339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atharvnagrikar", "html_url": "https://github.com/atharvnagrikar", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-07-30T07:20:07
2024-07-30T17:00:55
2024-07-30T17:00:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I have 2 GPUs, each with 40 GB of memory, and I want to run llama3.1 70b on them. Are there any features to run Ollama in a distributed way?
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6070/timeline
null
completed
false
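For the question above: ollama splits a model's layers across all visible GPUs on a single host by default; running across multiple machines is a separate matter. A quick check, assuming an NVIDIA setup:

```
ollama run llama3.1:70b "hello" &
sleep 30   # give the model time to load; adjust as needed
nvidia-smi --query-gpu=index,memory.used --format=csv
```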
https://api.github.com/repos/ollama/ollama/issues/8680
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8680/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8680/comments
https://api.github.com/repos/ollama/ollama/issues/8680/events
https://github.com/ollama/ollama/issues/8680
2,819,658,888
I_kwDOJ0Z1Ps6oEJSI
8,680
api/chat not working in parallel with docker-compose
{ "login": "acclayer7", "id": 178514264, "node_id": "U_kgDOCqPpWA", "avatar_url": "https://avatars.githubusercontent.com/u/178514264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/acclayer7", "html_url": "https://github.com/acclayer7", "followers_url": "https://api.github.com/users/acclay...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2025-01-30T00:54:32
2025-01-30T01:05:37
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I have Ollama running with enough memory (16 GB VRAM). I use OLLAMA_NUM_PARALLEL=2 and OLLAMA_MAX_LOADED_MODELS=2, but I don't see any memory increase. I use docker-compose to make it work; however, when using the API, the VRAM usage does not increase; it stays the same and I still have 10 GB of VRAM...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8680/timeline
null
null
false
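For comparison with the compose setup described above, a sketch of the equivalent `docker run`, making sure the variables are actually exported into the container (values from the report; the container name is an example):

```
docker run -d --name ollama --gpus=all \
  -e OLLAMA_NUM_PARALLEL=2 \
  -e OLLAMA_MAX_LOADED_MODELS=2 \
  -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker exec ollama env | grep OLLAMA   # confirm the server sees the settings
```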
https://api.github.com/repos/ollama/ollama/issues/8195
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8195/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8195/comments
https://api.github.com/repos/ollama/ollama/issues/8195/events
https://github.com/ollama/ollama/issues/8195
2,753,790,476
I_kwDOJ0Z1Ps6kI4IM
8,195
ERROR : max retries exceeded
{ "login": "Jinish2170", "id": 121560356, "node_id": "U_kgDOBz7dJA", "avatar_url": "https://avatars.githubusercontent.com/u/121560356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jinish2170", "html_url": "https://github.com/Jinish2170", "followers_url": "https://api.github.com/users/Jin...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-12-21T06:10:47
2024-12-25T07:34:12
2024-12-24T19:27:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have not been able to install new models like llama3.2 or llama3.3. The error message shown is like "Error: max retries exceeded: Get"https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/dd/dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdf...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8195/timeline
null
completed
false
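A hedged workaround while a flaky network persists, assuming (as is the case for registry pulls) that interrupted downloads resume where they left off:

```
until ollama pull llama3.2; do
  echo "pull interrupted, retrying..."
  sleep 5
done
```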
https://api.github.com/repos/ollama/ollama/issues/75
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/75/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/75/comments
https://api.github.com/repos/ollama/ollama/issues/75/events
https://github.com/ollama/ollama/issues/75
1,801,998,729
I_kwDOJ0Z1Ps5raFGJ
75
error on `ollama run`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2023-07-13T01:45:38
2023-07-13T02:21:14
2023-07-13T02:21:14
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
`ollama run` sometimes shows a `malformed HTTP response` error: ``` ollama run orca Error: Post "http://127.0.0.1:11434/api/pull": net/http: HTTP/1.x transport connection broken: malformed HTTP response "{\"total\":2142590208,\"completed\":2142590208,\"percent\":100}" ```
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/75/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/75/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7223
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7223/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7223/comments
https://api.github.com/repos/ollama/ollama/issues/7223/events
https://github.com/ollama/ollama/issues/7223
2,591,323,421
I_kwDOJ0Z1Ps6adHUd
7,223
How to add support for RWKV?
{ "login": "MollySophia", "id": 20746884, "node_id": "MDQ6VXNlcjIwNzQ2ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/20746884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MollySophia", "html_url": "https://github.com/MollySophia", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-10-16T09:54:37
2024-10-16T11:41:50
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi! I would like to try to make RWKV v6 models work with ollama. llama.cpp supports them already. - Currently ollama fails to load the model due to a bug in llama.cpp. Here's the fix PR: https://github.com/ggerganov/llama.cpp/pull/9907 - Another issue is the chat template. I wonder how a chat template should b...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7223/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7223/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6895
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6895/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6895/comments
https://api.github.com/repos/ollama/ollama/issues/6895/events
https://github.com/ollama/ollama/pull/6895
2,539,685,256
PR_kwDOJ0Z1Ps58NyVk
6,895
CI: adjust step ordering for win arm to match x64
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-09-20T21:09:30
2024-09-20T21:21:27
2024-09-20T21:20:57
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6895", "html_url": "https://github.com/ollama/ollama/pull/6895", "diff_url": "https://github.com/ollama/ollama/pull/6895.diff", "patch_url": "https://github.com/ollama/ollama/pull/6895.patch", "merged_at": "2024-09-20T21:20:57" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6895/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4606
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4606/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4606/comments
https://api.github.com/repos/ollama/ollama/issues/4606/events
https://github.com/ollama/ollama/issues/4606
2,314,442,587
I_kwDOJ0Z1Ps6J85db
4,606
MiniCPM-Llama3-V 2.5
{ "login": "ycyy", "id": 10897377, "node_id": "MDQ6VXNlcjEwODk3Mzc3", "avatar_url": "https://avatars.githubusercontent.com/u/10897377?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ycyy", "html_url": "https://github.com/ycyy", "followers_url": "https://api.github.com/users/ycyy/followers"...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
9
2024-05-24T05:46:39
2024-06-09T17:11:22
2024-06-09T17:11:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V) > [2024.05.24] We release the [MiniCPM-Llama3-V 2.5 gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](https://github.com/OpenBMB/MiniCPM-V#inference-with-llamacpp) inference and provides a 6~8 token/s smooth decoding on mobil...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4606/reactions", "total_count": 9, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/4606/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/349
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/349/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/349/comments
https://api.github.com/repos/ollama/ollama/issues/349/events
https://github.com/ollama/ollama/pull/349
1,850,683,721
PR_kwDOJ0Z1Ps5X7j1Q
349
close open files
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-14T23:09:15
2023-08-14T23:15:59
2023-08-14T23:15:58
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/349", "html_url": "https://github.com/ollama/ollama/pull/349", "diff_url": "https://github.com/ollama/ollama/pull/349.diff", "patch_url": "https://github.com/ollama/ollama/pull/349.patch", "merged_at": "2023-08-14T23:15:58" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/349/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8150
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8150/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8150/comments
https://api.github.com/repos/ollama/ollama/issues/8150/events
https://github.com/ollama/ollama/issues/8150
2,746,656,707
I_kwDOJ0Z1Ps6jtqfD
8,150
model run failed
{ "login": "kingluxun", "id": 189943745, "node_id": "U_kgDOC1JPwQ", "avatar_url": "https://avatars.githubusercontent.com/u/189943745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kingluxun", "html_url": "https://github.com/kingluxun", "followers_url": "https://api.github.com/users/kinglu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-12-18T03:27:10
2024-12-18T03:37:53
2024-12-18T03:37:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? On 0.5.4: Error: llama runner process has terminated: error: /usr/lib/ollama/runners/cuda_v12_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg. The same setup on 0.5.1 runs normally. nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2024...
{ "login": "kingluxun", "id": 189943745, "node_id": "U_kgDOC1JPwQ", "avatar_url": "https://avatars.githubusercontent.com/u/189943745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kingluxun", "html_url": "https://github.com/kingluxun", "followers_url": "https://api.github.com/users/kinglu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8150/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6763
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6763/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6763/comments
https://api.github.com/repos/ollama/ollama/issues/6763/events
https://github.com/ollama/ollama/issues/6763
2,520,631,609
I_kwDOJ0Z1Ps6WPck5
6,763
`ollama show` displays context length in scientific notation
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
closed
false
null
[]
null
0
2024-09-11T19:47:47
2024-09-11T21:58:42
2024-09-11T21:58:41
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ![image](https://github.com/user-attachments/assets/df3815ec-a650-4894-887a-c523e33ab3a3) ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6763/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3815
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3815/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3815/comments
https://api.github.com/repos/ollama/ollama/issues/3815/events
https://github.com/ollama/ollama/issues/3815
2,255,853,207
I_kwDOJ0Z1Ps6GdZaX
3,815
OpenSSL SSL_read: error:0A000126
{ "login": "xuya227939", "id": 16217324, "node_id": "MDQ6VXNlcjE2MjE3MzI0", "avatar_url": "https://avatars.githubusercontent.com/u/16217324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xuya227939", "html_url": "https://github.com/xuya227939", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
3
2024-04-22T07:56:53
2024-08-23T20:57:36
2024-08-23T20:57:36
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? jiang@jiang-MS-7D90:~$ curl -fsSL https://ollama.com/install.sh | sh >>> Downloading ollama... ######################################################################## 100.0%#=#=-# # curl: (56) OpenSSL SSL_read: error:0A000126:SSL r...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3815/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/2714
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2714/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2714/comments
https://api.github.com/repos/ollama/ollama/issues/2714/events
https://github.com/ollama/ollama/issues/2714
2,151,554,978
I_kwDOJ0Z1Ps6APh-i
2,714
Misunderstanding of ollama num_ctx parameter and context window
{ "login": "PhilipAmadasun", "id": 55031054, "node_id": "MDQ6VXNlcjU1MDMxMDU0", "avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipAmadasun", "html_url": "https://github.com/PhilipAmadasun", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
26
2024-02-23T18:00:42
2024-12-09T09:25:49
2024-02-23T19:34:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm trying to understand the relationship between the context window and the `num_ctx` parameter. Let's say I'm using mistral, and mistral's max context (according to google) is 8000, and "attention span" (according to google) is 128000. If I have a 27000-token user query, what exactly happens? If I set `num_ctx: 409...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2714/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2714/timeline
null
completed
false
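For concreteness on the question above: `num_ctx` is a per-request (or per-Modelfile) option, and input beyond it is truncated by the runtime rather than raising an error. A sketch of setting it through the native API (values are examples):

```
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Summarise the following document: ...",
  "options": { "num_ctx": 8192 }
}'
```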
https://api.github.com/repos/ollama/ollama/issues/8440
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8440/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8440/comments
https://api.github.com/repos/ollama/ollama/issues/8440/events
https://github.com/ollama/ollama/issues/8440
2,789,502,572
I_kwDOJ0Z1Ps6mRG5s
8,440
Using `mkdir -p` rather than checking manually if a dir exists before creating it A.K.A. Storing (very) large files in /root vs. "Error: mkdir /usr/share/ollama/XXX: file exists"
{ "login": "liar666", "id": 3216927, "node_id": "MDQ6VXNlcjMyMTY5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/3216927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liar666", "html_url": "https://github.com/liar666", "followers_url": "https://api.github.com/users/liar666/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2025-01-15T11:06:49
2025-01-16T15:18:37
2025-01-16T14:59:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, Like many Linux users, I have separate `/home` and `/` partitions. When I created these partitions (~10 years ago), I allocated only ~40GB to `/`, as that was _way_ more than enough to store the OS + all the packages I use in my everyday life. Unfortunately, when I experiment with new models in `ollama`, ...
{ "login": "liar666", "id": 3216927, "node_id": "MDQ6VXNlcjMyMTY5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/3216927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liar666", "html_url": "https://github.com/liar666", "followers_url": "https://api.github.com/users/liar666/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8440/timeline
null
completed
false
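A hedged sketch of the standard workaround for the small-`/` situation described above: point the model store at a larger disk with the documented `OLLAMA_MODELS` variable (paths are examples):

```
sudo mkdir -p /home/ollama-models            # -p is idempotent, as the title suggests
sudo chown ollama:ollama /home/ollama-models
# via `sudo systemctl edit ollama`, add:
#   [Service]
#   Environment="OLLAMA_MODELS=/home/ollama-models"
sudo systemctl restart ollama
```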
https://api.github.com/repos/ollama/ollama/issues/3408
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3408/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3408/comments
https://api.github.com/repos/ollama/ollama/issues/3408/events
https://github.com/ollama/ollama/issues/3408
2,215,643,381
I_kwDOJ0Z1Ps6EEAj1
3,408
Pushing a model isn't early alpha anymore
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
1
2024-03-29T16:17:47
2024-04-15T19:40:06
2024-04-15T19:40:06
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? Pushing a model isn't early alpha anymore. ### How should we solve this? Remove 'early alpha' from the import doc. ### What is the impact of not solving this? Folks will think it's early alpha. ### Anything else? _No response_
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3408/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8446
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8446/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8446/comments
https://api.github.com/repos/ollama/ollama/issues/8446/events
https://github.com/ollama/ollama/pull/8446
2,791,628,854
PR_kwDOJ0Z1Ps6H8rG9
8,446
add conversion code for cohere2 arch
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
open
false
null
[]
null
0
2025-01-16T03:51:52
2025-01-18T05:54:34
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8446", "html_url": "https://github.com/ollama/ollama/pull/8446", "diff_url": "https://github.com/ollama/ollama/pull/8446.diff", "patch_url": "https://github.com/ollama/ollama/pull/8446.patch", "merged_at": null }
This change adds conversion + test routines for Cohere's command-r7b model.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8446/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7102
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7102/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7102/comments
https://api.github.com/repos/ollama/ollama/issues/7102/events
https://github.com/ollama/ollama/issues/7102
2,566,570,878
I_kwDOJ0Z1Ps6Y-sN-
7,102
VideoCore GPU support
{ "login": "erkinalp", "id": 5833034, "node_id": "MDQ6VXNlcjU4MzMwMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erkinalp", "html_url": "https://github.com/erkinalp", "followers_url": "https://api.github.com/users/erkin...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-10-04T14:51:16
2024-10-04T14:51:16
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Required to be able to run models on the Raspberry Pi's GPU.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7102/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8016
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8016/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8016/comments
https://api.github.com/repos/ollama/ollama/issues/8016/events
https://github.com/ollama/ollama/pull/8016
2,727,845,381
PR_kwDOJ0Z1Ps6ElPgL
8,016
Add warning message when prompt doesn't include json for structured outputs
{ "login": "danclaytondev", "id": 27310664, "node_id": "MDQ6VXNlcjI3MzEwNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/27310664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danclaytondev", "html_url": "https://github.com/danclaytondev", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
2
2024-12-09T17:51:22
2024-12-09T17:57:07
2024-12-09T17:55:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8016", "html_url": "https://github.com/ollama/ollama/pull/8016", "diff_url": "https://github.com/ollama/ollama/pull/8016.diff", "patch_url": "https://github.com/ollama/ollama/pull/8016.patch", "merged_at": null }
ollama currently warns when `json` output is required but it is not mentioned in the prompt. It is recommended in the docs that prompts should ask for JSON output. With the new structured output feature, the warning isn't logged if a user supplied a schema, only if they ask for `"format": "json"`. I think we need th...
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8016/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/8016/timeline
null
null
true
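For context, a sketch of the two request shapes the PR distinguishes: plain `"format": "json"`, which warns when the prompt never mentions JSON, versus a schema object, which at the time did not:

```
# "format": "json" - the case that logs a warning without "JSON" in the prompt
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "format": "json",
  "messages": [{ "role": "user", "content": "List three colours." }]
}'
# structured outputs - a JSON schema passed as the format value
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "format": { "type": "object", "properties": { "colours": { "type": "array", "items": { "type": "string" } } } },
  "messages": [{ "role": "user", "content": "List three colours." }]
}'
```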
https://api.github.com/repos/ollama/ollama/issues/5734
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5734/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5734/comments
https://api.github.com/repos/ollama/ollama/issues/5734/events
https://github.com/ollama/ollama/pull/5734
2,412,269,766
PR_kwDOJ0Z1Ps51kqk-
5,734
server: validate template
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[]
closed
false
null
[]
null
1
2024-07-17T00:13:11
2024-07-22T18:20:15
2024-07-19T22:24:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5734", "html_url": "https://github.com/ollama/ollama/pull/5734", "diff_url": "https://github.com/ollama/ollama/pull/5734.diff", "patch_url": "https://github.com/ollama/ollama/pull/5734.patch", "merged_at": "2024-07-19T22:24:29" }
Tries to parse the template and returns an error if it fails. Resolves: https://github.com/ollama/ollama/issues/5449
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5734/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4940
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4940/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4940/comments
https://api.github.com/repos/ollama/ollama/issues/4940/events
https://github.com/ollama/ollama/issues/4940
2,341,920,624
I_kwDOJ0Z1Ps6Llt9w
4,940
Can't run ollama using cmd on Windows
{ "login": "ziarmandhost", "id": 30569343, "node_id": "MDQ6VXNlcjMwNTY5MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/30569343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ziarmandhost", "html_url": "https://github.com/ziarmandhost", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-09T00:17:26
2024-06-09T15:27:37
2024-06-09T15:27:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I can't run ollama using the Windows 11 terminal app: ![image](https://github.com/ollama/ollama/assets/30569343/9daecf0f-8662-48cb-8c5c-c00bcbb7fb5b) But the environment variable exists in "System variables": ![image](https://github.com/ollama/ollama/assets/30569343/de540c41-8133-408f-ad80-5c5e...
{ "login": "ziarmandhost", "id": 30569343, "node_id": "MDQ6VXNlcjMwNTY5MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/30569343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ziarmandhost", "html_url": "https://github.com/ziarmandhost", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4940/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3277
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3277/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3277/comments
https://api.github.com/repos/ollama/ollama/issues/3277/events
https://github.com/ollama/ollama/issues/3277
2,198,917,431
I_kwDOJ0Z1Ps6DENE3
3,277
Can not build ollama on windows 11
{ "login": "linkerlin", "id": 37062, "node_id": "MDQ6VXNlcjM3MDYy", "avatar_url": "https://avatars.githubusercontent.com/u/37062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linkerlin", "html_url": "https://github.com/linkerlin", "followers_url": "https://api.github.com/users/linkerlin/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-03-21T01:38:22
2024-03-21T10:59:38
2024-03-21T10:59:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` D:\gos\ollama>go build . # github.com/jmorganca/ollama/llm llm\llm.go:52:17: undefined: gpu.CheckVRAM llm\llm.go:68:14: undefined: gpu.GetGPUInfo llm\llm.go:166:15: undefined: newDynExtServer ``` ### What did you expect to see? Successful build ### Steps to reproduce _No response_ ### A...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3277/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5624
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5624/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5624/comments
https://api.github.com/repos/ollama/ollama/issues/5624/events
https://github.com/ollama/ollama/issues/5624
2,402,020,264
I_kwDOJ0Z1Ps6PK-uo
5,624
Make full use of all GPU resources for inference
{ "login": "HeroSong666", "id": 142960235, "node_id": "U_kgDOCIVmaw", "avatar_url": "https://avatars.githubusercontent.com/u/142960235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HeroSong666", "html_url": "https://github.com/HeroSong666", "followers_url": "https://api.github.com/users/...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" }, { "id": 6677367769, "node_id": "LA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-07-11T01:25:13
2024-09-05T23:04:06
2024-09-05T23:04:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I used 4 A30 GPUs to run inference with the qwen2-72b model, but even at peak times no single card was utilized above 35%. At the same time, inference is relatively slow. ### OS Linux, Docker ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.0
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5624/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8233
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8233/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8233/comments
https://api.github.com/repos/ollama/ollama/issues/8233/events
https://github.com/ollama/ollama/issues/8233
2,758,178,036
I_kwDOJ0Z1Ps6kZnT0
8,233
version aware linux upgrade
{ "login": "lamyergeier", "id": 42092626, "node_id": "MDQ6VXNlcjQyMDkyNjI2", "avatar_url": "https://avatars.githubusercontent.com/u/42092626?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lamyergeier", "html_url": "https://github.com/lamyergeier", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-12-24T18:05:51
2025-01-07T16:58:11
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The ollama install command, `curl -fsSL https://ollama.com/install.sh | sh`, removes and reinstalls ollama even when there is no newer version. The script should skip removing and re-downloading the current installation when no version update is available. ### OS Linux ### GPU Intel ### CPU Intel ### Ollama version 0.5.4
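A minimal sketch of the requested check, assuming the installed version can be read from `ollama -v`; the output format and the hard-coded target version below are assumptions for illustration:

```
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installedVersion shells out to the ollama binary; the "ollama version
// is X.Y.Z" output format is an assumption for illustration.
func installedVersion() (string, error) {
	out, err := exec.Command("ollama", "-v").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected version output")
	}
	return fields[len(fields)-1], nil
}

func main() {
	latest := "0.5.4" // a real script would fetch this from the release endpoint
	if cur, err := installedVersion(); err == nil && cur == latest {
		fmt.Println("already up to date, skipping reinstall")
		return
	}
	fmt.Println("proceeding with download and install")
}
```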
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8233/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8233/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/2990
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2990/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2990/comments
https://api.github.com/repos/ollama/ollama/issues/2990/events
https://github.com/ollama/ollama/pull/2990
2,174,616,537
PR_kwDOJ0Z1Ps5pAbJL
2,990
fix: default terminal width, height
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2024-03-07T19:29:12
2024-03-08T23:20:55
2024-03-08T23:20:54
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2990", "html_url": "https://github.com/ollama/ollama/pull/2990", "diff_url": "https://github.com/ollama/ollama/pull/2990.diff", "patch_url": "https://github.com/ollama/ollama/pull/2990.patch", "merged_at": "2024-03-08T23:20:54" }
resolves #2970
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2990/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4018
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4018/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4018/comments
https://api.github.com/repos/ollama/ollama/issues/4018/events
https://github.com/ollama/ollama/issues/4018
2,268,011,330
I_kwDOJ0Z1Ps6HLxtC
4,018
API truncates parentheses before stop token
{ "login": "IgorAlexey", "id": 18470725, "node_id": "MDQ6VXNlcjE4NDcwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/18470725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IgorAlexey", "html_url": "https://github.com/IgorAlexey", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-04-29T02:39:07
2024-07-17T00:44:26
2024-07-17T00:44:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The generate API truncates closing parentheses when they appear before a stop token, seemingly at random. The issue is reproducible across all models I've tested (Phi3, all llama3 versions, WizardLM2), so it looks like an API limitation? Sample text: ``` Everyone uses `:)...
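A small sketch of the kind of mechanism that can produce this symptom: streaming servers hold back trailing characters that might begin a stop sequence, and if the held-back suffix is dropped at end of generation, legitimate characters such as `)` disappear. The stop sequence and function below are illustrative, not the actual ollama code:

```
package main

import (
	"fmt"
	"strings"
)

// trimPartialStop holds back any trailing characters that could be the
// beginning of the stop sequence. If the held-back suffix is discarded
// when generation ends, legitimate output such as ")" is lost.
func trimPartialStop(s, stop string) (emit, held string) {
	for i := len(stop); i > 0; i-- {
		if strings.HasSuffix(s, stop[:i]) {
			return s[:len(s)-i], s[len(s)-i:]
		}
	}
	return s, ""
}

func main() {
	// ")]" as a stop sequence is illustrative only.
	emit, held := trimPartialStop("Everyone uses `:)", ")]")
	fmt.Printf("emit=%q held=%q\n", emit, held)
	// On end of generation the held text must be flushed, not dropped.
}
```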
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4018/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4018/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6247
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6247/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6247/comments
https://api.github.com/repos/ollama/ollama/issues/6247/events
https://github.com/ollama/ollama/pull/6247
2,454,585,561
PR_kwDOJ0Z1Ps53xa8Q
6,247
Store layers inside manifests consistently as values.
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2024-08-08T00:31:42
2024-08-08T17:46:46
2024-08-08T17:46:43
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6247", "html_url": "https://github.com/ollama/ollama/pull/6247", "diff_url": "https://github.com/ollama/ollama/pull/6247.diff", "patch_url": "https://github.com/ollama/ollama/pull/6247.patch", "merged_at": "2024-08-08T17:46:43" }
This consistently uses layers as values (instead of pointers) inside the manifest, following the change that made the config be passed by value. The interface is clearer and it reduces the need to dereference and take addresses in some places. I'm not sure if the changes in layer.go are considered canonical Go, so I would app...
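To illustrate the interface point, a minimal sketch of a manifest holding layers by value; the types are illustrative stand-ins, not the actual ollama definitions:

```
package main

import "fmt"

// Layer and Manifest are illustrative stand-ins for the types discussed
// above, not the actual ollama definitions.
type Layer struct {
	Digest string
	Size   int64
}

type Manifest struct {
	Config Layer   // a value, not *Layer
	Layers []Layer // values too, so callers never dereference or take addresses
}

func totalSize(m Manifest) int64 {
	n := m.Config.Size
	for _, l := range m.Layers { // range copies each Layer by value
		n += l.Size
	}
	return n
}

func main() {
	m := Manifest{
		Config: Layer{Digest: "sha256:cfg", Size: 128},
		Layers: []Layer{{Digest: "sha256:a", Size: 1024}},
	}
	fmt.Println(totalSize(m)) // 1152
}
```

Value semantics make copies explicit and avoid nil-pointer cases, at the cost of copying structs; for small metadata structs like these the copy is cheap.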
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6247/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6288
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6288/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6288/comments
https://api.github.com/repos/ollama/ollama/issues/6288/events
https://github.com/ollama/ollama/issues/6288
2,458,320,550
I_kwDOJ0Z1Ps6Shv6m
6,288
OLLAMA_LLM_LIBRARY=cpu is ignored: ErrorOutOfDeviceMemory when zero layers are offloaded to GPU through Vulkan
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivic...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-08-09T17:00:14
2024-08-13T05:42:01
2024-08-13T05:42:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The ollama server is started on CPU: ```OLLAMA_LLM_LIBRARY=cpu ollama start``` While attempting to run the gemma model, it still tries to use Vulkan and fails: ``` 2024/08/09 09:58:04 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6288/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3213
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3213/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3213/comments
https://api.github.com/repos/ollama/ollama/issues/3213/events
https://github.com/ollama/ollama/issues/3213
2,191,283,603
I_kwDOJ0Z1Ps6CnFWT
3,213
open /home/house365ai/xxm/model/Qwen1.5-14B-Chat/tokenizer.model:
{ "login": "njhouse365", "id": 130344095, "node_id": "U_kgDOB8Tknw", "avatar_url": "https://avatars.githubusercontent.com/u/130344095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/njhouse365", "html_url": "https://github.com/njhouse365", "followers_url": "https://api.github.com/users/njh...
[]
closed
false
null
[]
null
2
2024-03-18T05:44:18
2024-03-19T00:49:43
2024-03-18T08:36:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? My Modelfile: FROM /home/house365ai/xxm/model/Qwen1.5-14B-Chat I then run: ollama create Qwen1.5-14B-Chat -f Modelfile How do I solve this?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3213/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8341
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8341/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8341/comments
https://api.github.com/repos/ollama/ollama/issues/8341/events
https://github.com/ollama/ollama/issues/8341
2,773,670,616
I_kwDOJ0Z1Ps6lUtrY
8,341
[feature] start ollama automatically on startup
{ "login": "remco-pc", "id": 8077908, "node_id": "MDQ6VXNlcjgwNzc5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remco-pc", "html_url": "https://github.com/remco-pc", "followers_url": "https://api.github.com/users/remco...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-07T20:23:57
2025-01-07T20:23:57
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've been trying to start ollama serve automatically on startup (docker php init), but it won't start with an & (background process). Then I tried putting it in a script with a lock file in cron to see if that would start it. It starts my script, but the script then does not start `ollama serve`, which sho...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8341/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1273
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1273/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1273/comments
https://api.github.com/repos/ollama/ollama/issues/1273/events
https://github.com/ollama/ollama/pull/1273
2,010,474,590
PR_kwDOJ0Z1Ps5gWNX7
1,273
added llama_runner_timeout ModelFile parameter for longer timeouts
{ "login": "bigattichouse", "id": 67535, "node_id": "MDQ6VXNlcjY3NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/67535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bigattichouse", "html_url": "https://github.com/bigattichouse", "followers_url": "https://api.github.com/user...
[]
closed
false
null
[]
null
2
2023-11-25T04:54:03
2023-11-25T05:46:35
2023-11-25T05:45:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1273", "html_url": "https://github.com/ollama/ollama/pull/1273", "diff_url": "https://github.com/ollama/ollama/pull/1273.diff", "patch_url": "https://github.com/ollama/ollama/pull/1273.patch", "merged_at": null }
Allows the user to choose a longer or shorter timeout in the ModelFile for how long the server will wait for the llama runner. Created this patch in response to the 'timed out waiting for llama runner to start' error. Defaults to the 3 minutes hard-coded in the current main branch.
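A rough sketch of a configurable wait loop like the one this patch describes; the health endpoint, polling interval, and defaults below are assumptions for illustration:

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForRunner polls a health endpoint until it answers or the
// configurable deadline passes. The 3-minute default mirrors the
// hard-coded value mentioned above; the URL is illustrative.
func waitForRunner(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for llama runner to start")
}

func main() {
	// A ModelFile parameter would override the default here.
	fmt.Println(waitForRunner("http://127.0.0.1:8080/health", 3*time.Minute))
}
```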
{ "login": "bigattichouse", "id": 67535, "node_id": "MDQ6VXNlcjY3NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/67535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bigattichouse", "html_url": "https://github.com/bigattichouse", "followers_url": "https://api.github.com/user...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1273/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1273/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7005
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7005/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7005/comments
https://api.github.com/repos/ollama/ollama/issues/7005/events
https://github.com/ollama/ollama/issues/7005
2,553,392,563
I_kwDOJ0Z1Ps6YMa2z
7,005
Docker not use GPU after idle
{ "login": "phukrit7171", "id": 64061607, "node_id": "MDQ6VXNlcjY0MDYxNjA3", "avatar_url": "https://avatars.githubusercontent.com/u/64061607?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phukrit7171", "html_url": "https://github.com/phukrit7171", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2024-09-27T17:17:00
2024-09-30T15:46:21
2024-09-30T15:46:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After the model is cleared from the graphics card's RAM, the next run does not load it back onto the graphics card; it runs on the CPU instead, which slows it down a lot. You have to run docker stop ollama and docker start ollama to get it to use the graphics card again. ### OS ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7005/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5141
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5141/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5141/comments
https://api.github.com/repos/ollama/ollama/issues/5141/events
https://github.com/ollama/ollama/issues/5141
2,362,404,205
I_kwDOJ0Z1Ps6Mz21t
5,141
Make "pull" support more than one model
{ "login": "Speedway1", "id": 100301611, "node_id": "U_kgDOBfp7Kw", "avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Speedway1", "html_url": "https://github.com/Speedway1", "followers_url": "https://api.github.com/users/Speedw...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-06-19T13:33:25
2024-09-24T15:42:26
2024-09-24T15:42:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
"ollama pull" currently only supports one model argument. However, when setting up a new server or doing a bulk update of LLMs, we need to run a batch of pulls. It would be very handy for the command to accept more than one model, e.g. ollama pull deepseek-coder-v2 phi3:14b codestral (see the batching sketch after this record). As op...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5141/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5141/timeline
null
completed
false
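Following up on the feature request above, a minimal sketch of batching pulls against the local server API; the /api/pull endpoint and its "model" field follow the public API docs, but treat them as assumptions if your version differs:

```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// pull posts one pull request to the local server and discards the
// streamed progress updates for brevity.
func pull(model string) error {
	body, err := json.Marshal(map[string]string{"model": model})
	if err != nil {
		return err
	}
	resp, err := http.Post("http://127.0.0.1:11434/api/pull", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// e.g. go run . deepseek-coder-v2 phi3:14b codestral
	for _, m := range os.Args[1:] {
		fmt.Println("pulling", m)
		if err := pull(m); err != nil {
			fmt.Fprintln(os.Stderr, "failed:", m, err)
		}
	}
}
```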
https://api.github.com/repos/ollama/ollama/issues/6974
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6974/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6974/comments
https://api.github.com/repos/ollama/ollama/issues/6974/events
https://github.com/ollama/ollama/issues/6974
2,549,682,283
I_kwDOJ0Z1Ps6X-RBr
6,974
Ollama on Windows occupied all available ports when downloading
{ "login": "TheStarAlight", "id": 105955974, "node_id": "U_kgDOBlDChg", "avatar_url": "https://avatars.githubusercontent.com/u/105955974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheStarAlight", "html_url": "https://github.com/TheStarAlight", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
null
[]
null
7
2024-09-26T06:54:51
2024-10-24T11:46:05
2024-09-26T19:00:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I try to download a model with Ollama for Windows, after a while my browsers cannot reach any other website, showing "connection refused". The download also fails (after the first part of the model finishes, the next part cannot start and reports an error). The log `~/AppData/...
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6974/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2298
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2298/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2298/comments
https://api.github.com/repos/ollama/ollama/issues/2298/events
https://github.com/ollama/ollama/pull/2298
2,111,247,173
PR_kwDOJ0Z1Ps5loeAB
2,298
structured debug prompt
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-02-01T00:47:59
2024-02-01T21:16:50
2024-02-01T21:16:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2298", "html_url": "https://github.com/ollama/ollama/pull/2298", "diff_url": "https://github.com/ollama/ollama/pull/2298.diff", "patch_url": "https://github.com/ollama/ollama/pull/2298.patch", "merged_at": "2024-02-01T21:16:49" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2298/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2572
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2572/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2572/comments
https://api.github.com/repos/ollama/ollama/issues/2572/events
https://github.com/ollama/ollama/issues/2572
2,140,911,407
I_kwDOJ0Z1Ps5_m7cv
2,572
PrivateGPT example is broken for me
{ "login": "levicki", "id": 16415478, "node_id": "MDQ6VXNlcjE2NDE1NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/16415478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levicki", "html_url": "https://github.com/levicki", "followers_url": "https://api.github.com/users/levick...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-02-18T10:28:12
2024-09-12T01:57:03
2024-09-12T01:57:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After installing it as per your provided instructions and running `ingest.py` on a folder with 19 PDF documents it crashes with the following stack trace: ``` Creating new vectorstore Loading documents from source_documents Loading new documents: 100%|████████████████████| 19/19 [00:02<00:00, 7.12it/s] Loaded 169...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2572/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6523
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6523/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6523/comments
https://api.github.com/repos/ollama/ollama/issues/6523/events
https://github.com/ollama/ollama/pull/6523
2,488,103,562
PR_kwDOJ0Z1Ps55gawM
6,523
llama: clean up sync
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-08-27T02:03:02
2024-08-30T00:30:13
2024-08-30T00:30:11
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6523", "html_url": "https://github.com/ollama/ollama/pull/6523", "diff_url": "https://github.com/ollama/ollama/pull/6523.diff", "patch_url": "https://github.com/ollama/ollama/pull/6523.patch", "merged_at": "2024-08-30T00:30:11" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6523/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8585
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8585/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8585/comments
https://api.github.com/repos/ollama/ollama/issues/8585/events
https://github.com/ollama/ollama/issues/8585
2,811,203,849
I_kwDOJ0Z1Ps6nj5EJ
8,585
Error: neither ‘from’ or ‘files’ was specified when creating a model
{ "login": "latent-variable", "id": 22504489, "node_id": "MDQ6VXNlcjIyNTA0NDg5", "avatar_url": "https://avatars.githubusercontent.com/u/22504489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/latent-variable", "html_url": "https://github.com/latent-variable", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2025-01-25T22:30:06
2025-01-25T23:11:18
2025-01-25T23:11:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I’m encountering an issue when trying to create a model using ollama create on my Mac. The command fails with the following error message: transferring model data Error: neither 'from' or 'files' was specified This issue occurs despite ensuring the path to the .gguf file is correct. ...
{ "login": "latent-variable", "id": 22504489, "node_id": "MDQ6VXNlcjIyNTA0NDg5", "avatar_url": "https://avatars.githubusercontent.com/u/22504489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/latent-variable", "html_url": "https://github.com/latent-variable", "followers_url": "https://api...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8585/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2910
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2910/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2910/comments
https://api.github.com/repos/ollama/ollama/issues/2910/events
https://github.com/ollama/ollama/pull/2910
2,166,441,797
PR_kwDOJ0Z1Ps5okYH8
2,910
Run inference in a subprocess
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
2
2024-03-04T10:01:47
2024-10-17T22:38:00
2024-04-07T06:09:01
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2910", "html_url": "https://github.com/ollama/ollama/pull/2910", "diff_url": "https://github.com/ollama/ollama/pull/2910.diff", "patch_url": "https://github.com/ollama/ollama/pull/2910.patch", "merged_at": null }
This changes the underlying llama server to run in a subprocess, bringing back code from https://github.com/ollama/ollama/blob/v0.1.17/llm/llama.go while keeping the multi-variant support. This helps ensure resources are freed when a model is unloaded and will help enable loading models concurrently. No...
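A minimal sketch of the subprocess lifecycle this describes, using context cancellation so the OS reclaims the runner's memory on unload; the binary name and flags are illustrative:

```
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// startRunner launches the model server as a child process so that
// cancelling the context reliably frees everything it held on unload.
// The binary name and flags are illustrative.
func startRunner(ctx context.Context) (*exec.Cmd, error) {
	cmd := exec.CommandContext(ctx, "llama-runner", "--port", "8080")
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cmd, err := startRunner(ctx)
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	time.Sleep(time.Second) // a real server would route requests here
	cancel()                // unload: the OS reclaims the child's memory
	cmd.Wait()
}
```

Running inference in a child process means an unload (or a crash in the native code) can never leak memory into the long-lived server process.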
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2910/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3875
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3875/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3875/comments
https://api.github.com/repos/ollama/ollama/issues/3875/events
https://github.com/ollama/ollama/issues/3875
2,261,313,798
I_kwDOJ0Z1Ps6GyOkG
3,875
Error: pull model manifest: 401
{ "login": "seedpower", "id": 11022830, "node_id": "MDQ6VXNlcjExMDIyODMw", "avatar_url": "https://avatars.githubusercontent.com/u/11022830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seedpower", "html_url": "https://github.com/seedpower", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
8
2024-04-24T13:31:16
2024-10-11T06:10:10
2024-05-21T17:45:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? OS: Ubuntu 22.04 server ollama version: 0.1.32 Whether installed via the official bash script or run via Docker, neither can pull any model; both fail with the same error: ``` # ollama run llama3 pulling manifest Error: pull model manifest: 401 ``` In the same network environment, my ma...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3875/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1227
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1227/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1227/comments
https://api.github.com/repos/ollama/ollama/issues/1227/events
https://github.com/ollama/ollama/pull/1227
2,005,037,736
PR_kwDOJ0Z1Ps5gD8sf
1,227
update python client create example
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-11-21T20:01:19
2023-11-27T20:36:21
2023-11-27T20:36:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1227", "html_url": "https://github.com/ollama/ollama/pull/1227", "diff_url": "https://github.com/ollama/ollama/pull/1227.diff", "patch_url": "https://github.com/ollama/ollama/pull/1227.patch", "merged_at": "2023-11-27T20:36:20" }
When we updated our CLI to upload Modelfile contents directly to the ollama server, we missed updating the Python example client. This change brings the logic in the Python client in line with our Go client.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1227/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5956
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5956/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5956/comments
https://api.github.com/repos/ollama/ollama/issues/5956/events
https://github.com/ollama/ollama/issues/5956
2,430,544,508
I_kwDOJ0Z1Ps6Q3yp8
5,956
Phi3-mini-4k-instruct will need to be updated for latest llama.cpp
{ "login": "kaetemi", "id": 1581053, "node_id": "MDQ6VXNlcjE1ODEwNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1581053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaetemi", "html_url": "https://github.com/kaetemi", "followers_url": "https://api.github.com/users/kaetemi/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
7
2024-07-25T16:45:14
2024-08-02T15:08:28
2024-07-30T22:34:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
See https://github.com/ggerganov/llama.cpp/pull/8627 The blob from the ollama repository fails to load on the latest llama.cpp. ``` 0|pv_scheduler | llama-server [phi3-3.8b:1280:1]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32064] = ["<unk>", "<s>", "</s>", "<0x00>", "<....
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5956/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5956/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8676
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8676/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8676/comments
https://api.github.com/repos/ollama/ollama/issues/8676/events
https://github.com/ollama/ollama/pull/8676
2,819,521,168
PR_kwDOJ0Z1Ps6Jbnv2
8,676
docs: update api.md with streaming with tools is enabled
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2025-01-29T23:01:05
2025-01-30T13:08:49
2025-01-29T23:14:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8676", "html_url": "https://github.com/ollama/ollama/pull/8676", "diff_url": "https://github.com/ollama/ollama/pull/8676.diff", "patch_url": "https://github.com/ollama/ollama/pull/8676.patch", "merged_at": "2025-01-29T23:14:30" }
Shoutout to @sixlive for finding this! The docs were outdated and didn't mention that tools can now be streamed.
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8676/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2759
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2759/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2759/comments
https://api.github.com/repos/ollama/ollama/issues/2759/events
https://github.com/ollama/ollama/pull/2759
2,153,404,573
PR_kwDOJ0Z1Ps5n38Pc
2,759
docs: Add LLM-X to Web Integration section
{ "login": "mrdjohnson", "id": 6767910, "node_id": "MDQ6VXNlcjY3Njc5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6767910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrdjohnson", "html_url": "https://github.com/mrdjohnson", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2024-02-26T07:03:20
2024-03-07T15:11:53
2024-03-07T15:11:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2759", "html_url": "https://github.com/ollama/ollama/pull/2759", "diff_url": "https://github.com/ollama/ollama/pull/2759.diff", "patch_url": "https://github.com/ollama/ollama/pull/2759.patch", "merged_at": "2024-03-07T15:11:53" }
Adding yet another web project to the list in the readme!
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2759/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8286
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8286/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8286/comments
https://api.github.com/repos/ollama/ollama/issues/8286/events
https://github.com/ollama/ollama/issues/8286
2,765,815,836
I_kwDOJ0Z1Ps6k2wAc
8,286
Allow use of locally installed CUDA or ROCm
{ "login": "erkinalp", "id": 5833034, "node_id": "MDQ6VXNlcjU4MzMwMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5833034?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erkinalp", "html_url": "https://github.com/erkinalp", "followers_url": "https://api.github.com/users/erkin...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2025-01-02T10:51:34
2025-01-03T09:10:04
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ollama tries to install its own copy of CUDA or ROCm, even when the same version is already installed system-wide.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8286/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3260
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3260/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3260/comments
https://api.github.com/repos/ollama/ollama/issues/3260/events
https://github.com/ollama/ollama/issues/3260
2,195,998,846
I_kwDOJ0Z1Ps6C5Eh-
3,260
Syntax error: end of file unexpected (expecting ";;")
{ "login": "TacitTactics", "id": 14880732, "node_id": "MDQ6VXNlcjE0ODgwNzMy", "avatar_url": "https://avatars.githubusercontent.com/u/14880732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TacitTactics", "html_url": "https://github.com/TacitTactics", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-03-19T20:53:06
2024-03-21T07:43:26
2024-03-21T07:43:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? File format issue. ### What did you expect to see? No errors ### Steps to reproduce Run the provided curl call to the install script, as is. ### Are there any recent changes that introduced the issue? Workaround: install dos2unix, download the install script, and then run "dos2unix <filename>" ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3260/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6644
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6644/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6644/comments
https://api.github.com/repos/ollama/ollama/issues/6644/events
https://github.com/ollama/ollama/pull/6644
2,506,482,257
PR_kwDOJ0Z1Ps56dOVn
6,644
Update README.md
{ "login": "jake83741", "id": 125723241, "node_id": "U_kgDOB35iaQ", "avatar_url": "https://avatars.githubusercontent.com/u/125723241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jake83741", "html_url": "https://github.com/jake83741", "followers_url": "https://api.github.com/users/jake83...
[]
closed
false
null
[]
null
1
2024-09-04T23:39:55
2024-09-04T23:48:35
2024-09-04T23:46:03
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6644", "html_url": "https://github.com/ollama/ollama/pull/6644", "diff_url": "https://github.com/ollama/ollama/pull/6644.diff", "patch_url": "https://github.com/ollama/ollama/pull/6644.patch", "merged_at": "2024-09-04T23:46:03" }
This is a pull request to include my Discord bot project, vnc-lm, in the community integrations section: https://github.com/jk011ru/vnc-lm. Thanks
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6644/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8574
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8574/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8574/comments
https://api.github.com/repos/ollama/ollama/issues/8574/events
https://github.com/ollama/ollama/issues/8574
2,810,789,589
I_kwDOJ0Z1Ps6niT7V
8,574
Mini-InternVL
{ "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2025-01-25T05:50:37
2025-01-28T13:33:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://hf-mirror.com/OpenGVLab/Mini-InternVL-Chat-4B-V1-5 thanks.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8574/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1768
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1768/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1768/comments
https://api.github.com/repos/ollama/ollama/issues/1768/events
https://github.com/ollama/ollama/issues/1768
2,064,427,724
I_kwDOJ0Z1Ps57DKrM
1,768
The API - http://127.0.0.1:11434/api doesn't work.
{ "login": "PriyaranjanMaratheDish", "id": 133165012, "node_id": "U_kgDOB-_v1A", "avatar_url": "https://avatars.githubusercontent.com/u/133165012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PriyaranjanMaratheDish", "html_url": "https://github.com/PriyaranjanMaratheDish", "followers_url...
[]
closed
false
null
[]
null
8
2024-01-03T17:51:24
2024-07-17T10:56:36
2024-01-04T19:41:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
1) The API - http://127.0.0.1:11434/api - doesn't work. Are there any additional steps needed for http://127.0.0.1:11434/api to work correctly? It doesn't work on my Mac or on EC2 either.
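A likely source of confusion here: the base path /api is not itself an endpoint; requests go to concrete routes such as /api/generate. A minimal sketch, with the model name and request fields following the public API docs (adjust for whatever models you have pulled):

```
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// POST to a concrete endpoint; GET on http://127.0.0.1:11434/api alone returns 404.
	body := []byte(`{"model": "llama2", "prompt": "hi", "stream": false}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```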
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1768/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7941
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7941/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7941/comments
https://api.github.com/repos/ollama/ollama/issues/7941/events
https://github.com/ollama/ollama/issues/7941
2,719,189,962
I_kwDOJ0Z1Ps6iE4vK
7,941
signal arrived during cgo execution
{ "login": "datamg-star", "id": 181604665, "node_id": "U_kgDOCtMROQ", "avatar_url": "https://avatars.githubusercontent.com/u/181604665?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datamg-star", "html_url": "https://github.com/datamg-star", "followers_url": "https://api.github.com/users/...
[]
open
false
null
[]
null
1
2024-12-05T02:55:53
2024-12-19T15:31:15
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[root@localhost data]# ollama run llama3.1:8b >>> a It looks like Error: an error was encountered while running the model: unexpected EOF tail -200 /var/log/messages Dec 5 10:29:10 localhost ollama: Device 0: NVIDIA A800-SXM4-40GB, compute capability 8.0, VMM: yes Dec 5 10:29:10 localhost ollama: llm_load_t...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7941/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6053
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6053/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6053/comments
https://api.github.com/repos/ollama/ollama/issues/6053/events
https://github.com/ollama/ollama/pull/6053
2,435,599,844
PR_kwDOJ0Z1Ps52wZVB
6,053
docs: Add ingest to list of cli tools
{ "login": "sammcj", "id": 862951, "node_id": "MDQ6VXNlcjg2Mjk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sammcj", "html_url": "https://github.com/sammcj", "followers_url": "https://api.github.com/users/sammcj/follow...
[]
closed
false
null
[]
null
0
2024-07-29T14:34:51
2024-08-09T07:38:34
2024-08-09T07:38:33
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6053", "html_url": "https://github.com/ollama/ollama/pull/6053", "diff_url": "https://github.com/ollama/ollama/pull/6053.diff", "patch_url": "https://github.com/ollama/ollama/pull/6053.patch", "merged_at": null }
Add ingest (https://github.com/sammcj/ingest) to the list of CLI tools for Ollama. Ingest is a tool for parsing files/directories into an LLM-friendly, markdown-formatted prompt, and it can pass the content and prompt directly to Ollama.
{ "login": "sammcj", "id": 862951, "node_id": "MDQ6VXNlcjg2Mjk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sammcj", "html_url": "https://github.com/sammcj", "followers_url": "https://api.github.com/users/sammcj/follow...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6053/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1590
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1590/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1590/comments
https://api.github.com/repos/ollama/ollama/issues/1590/events
https://github.com/ollama/ollama/issues/1590
2,047,632,584
I_kwDOJ0Z1Ps56DGTI
1,590
Add support for Intel Arc GPUs
{ "login": "taep96", "id": 64481039, "node_id": "MDQ6VXNlcjY0NDgxMDM5", "avatar_url": "https://avatars.githubusercontent.com/u/64481039?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taep96", "html_url": "https://github.com/taep96", "followers_url": "https://api.github.com/users/taep96/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6677491450, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
66
2023-12-18T23:25:37
2025-01-24T05:50:38
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1590/reactions", "total_count": 90, "+1": 56, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 17, "eyes": 17 }
https://api.github.com/repos/ollama/ollama/issues/1590/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3540
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3540/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3540/comments
https://api.github.com/repos/ollama/ollama/issues/3540/events
https://github.com/ollama/ollama/pull/3540
2,231,714,189
PR_kwDOJ0Z1Ps5sCb57
3,540
Implement 'split_mode' and 'tensor_split' support in modelfiles
{ "login": "jukofyork", "id": 69222624, "node_id": "MDQ6VXNlcjY5MjIyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jukofyork", "html_url": "https://github.com/jukofyork", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
6
2024-04-08T17:13:21
2024-04-27T13:28:00
2024-04-10T02:34:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3540", "html_url": "https://github.com/ollama/ollama/pull/3540", "diff_url": "https://github.com/ollama/ollama/pull/3540.diff", "patch_url": "https://github.com/ollama/ollama/pull/3540.patch", "merged_at": null }
This adds support for the `tensor_split` and `split_mode` options in `llama.cpp::server`. The `split_mode` option has three possible values, and from `llama.cpp::server --help`: > How to split the model across multiple GPUs, one of: > - "layer": split layers and KV across GPUs (default). > - "row": split rows ...
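A hypothetical Modelfile sketch of what this would look like in practice; note that the PR was closed without being merged, so `split_mode` and `tensor_split` below are the PR's proposed parameter names, not options current Ollama accepts:

```
# Hypothetical sketch of this (unmerged) PR's proposal; the two
# PARAMETER names are the PR's, not current Ollama options.
FROM mixtral
# "row": split rows across GPUs, per the llama.cpp help text quoted above
PARAMETER split_mode row
# proportional VRAM split across two GPUs (roughly 75%/25%)
PARAMETER tensor_split 3,1
```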
{ "login": "jukofyork", "id": 69222624, "node_id": "MDQ6VXNlcjY5MjIyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jukofyork", "html_url": "https://github.com/jukofyork", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3540/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1649
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1649/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1649/comments
https://api.github.com/repos/ollama/ollama/issues/1649/events
https://github.com/ollama/ollama/issues/1649
2,051,513,243
I_kwDOJ0Z1Ps56R5ub
1,649
Llama not using CUDA: cuBLAS error 13
{ "login": "hbqdev", "id": 49971676, "node_id": "MDQ6VXNlcjQ5OTcxNjc2", "avatar_url": "https://avatars.githubusercontent.com/u/49971676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hbqdev", "html_url": "https://github.com/hbqdev", "followers_url": "https://api.github.com/users/hbqdev/fo...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2023-12-21T01:14:26
2024-02-01T23:23:08
2024-02-01T23:23:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It seems this issue was first reported here: https://github.com/jmorganca/ollama/issues/920 ``` Dec 20 17:03:07 NightFuryX ollama[12288]: llama_new_context_with_model: total VRAM used: 5913.56 MiB (model: 3577.55 MiB, context: 2336.00 MiB) Dec 20 17:03:11 NightFuryX ollama[12288]: CUDA error 700 at /go/src/g...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1649/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1649/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2511
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2511/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2511/comments
https://api.github.com/repos/ollama/ollama/issues/2511/events
https://github.com/ollama/ollama/pull/2511
2,135,852,135
PR_kwDOJ0Z1Ps5m8K1D
2,511
[nit] Remove unused msg local var.
{ "login": "ttsugriy", "id": 172294, "node_id": "MDQ6VXNlcjE3MjI5NA==", "avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttsugriy", "html_url": "https://github.com/ttsugriy", "followers_url": "https://api.github.com/users/ttsugri...
[]
closed
false
null
[]
null
0
2024-02-15T07:46:38
2024-02-20T19:18:59
2024-02-20T19:02:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2511", "html_url": "https://github.com/ollama/ollama/pull/2511", "diff_url": "https://github.com/ollama/ollama/pull/2511.diff", "patch_url": "https://github.com/ollama/ollama/pull/2511.patch", "merged_at": "2024-02-20T19:02:35" }
It's not used but clutters the code.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2511/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2418
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2418/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2418/comments
https://api.github.com/repos/ollama/ollama/issues/2418/events
https://github.com/ollama/ollama/issues/2418
2,126,284,310
I_kwDOJ0Z1Ps5-vIYW
2,418
What are the system requirements?
{ "login": "worikgh", "id": 5387413, "node_id": "MDQ6VXNlcjUzODc0MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5387413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/worikgh", "html_url": "https://github.com/worikgh", "followers_url": "https://api.github.com/users/worikgh/...
[]
closed
false
null
[]
null
6
2024-02-09T00:37:47
2024-07-12T20:10:30
2024-02-18T08:57:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be very useful to have a section on system requirements in README.md. Nothing too detailed, but: * Disk space required * Main RAM * Video/compute card requirements Keep up the good work!
{ "login": "worikgh", "id": 5387413, "node_id": "MDQ6VXNlcjUzODc0MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5387413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/worikgh", "html_url": "https://github.com/worikgh", "followers_url": "https://api.github.com/users/worikgh/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2418/reactions", "total_count": 5, "+1": 4, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2418/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2600
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2600/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2600/comments
https://api.github.com/repos/ollama/ollama/issues/2600/events
https://github.com/ollama/ollama/pull/2600
2,143,148,487
PR_kwDOJ0Z1Ps5nVHWd
2,600
Document setting server vars for windows
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-02-19T21:12:26
2024-02-19T21:46:39
2024-02-19T21:46:37
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2600", "html_url": "https://github.com/ollama/ollama/pull/2600", "diff_url": "https://github.com/ollama/ollama/pull/2600.diff", "patch_url": "https://github.com/ollama/ollama/pull/2600.patch", "merged_at": "2024-02-19T21:46:37" }
Fixes #2546
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2600/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8328
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8328/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8328/comments
https://api.github.com/repos/ollama/ollama/issues/8328/events
https://github.com/ollama/ollama/issues/8328
2,771,743,238
I_kwDOJ0Z1Ps6lNXIG
8,328
[Model request] alea-institute/kl3m-003-3.7b
{ "login": "sncix", "id": 85628682, "node_id": "MDQ6VXNlcjg1NjI4Njgy", "avatar_url": "https://avatars.githubusercontent.com/u/85628682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sncix", "html_url": "https://github.com/sncix", "followers_url": "https://api.github.com/users/sncix/follow...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2025-01-07T02:01:17
2025-01-07T02:01:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/alea-institute/kl3m-003-3.7b https://www.kl3m.ai/ KL3M is a family of language models claimed to be trained on clean, legally-permissible data. It has obtained the [Fairly Trained L-Certification](https://www.fairlytrained.org/certifications). `kl3m-003-3.7b` is the latest available model of ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8328/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5775
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5775/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5775/comments
https://api.github.com/repos/ollama/ollama/issues/5775/events
https://github.com/ollama/ollama/issues/5775
2,416,937,658
I_kwDOJ0Z1Ps6QD4q6
5,775
Assistant doesn't continue from its last message
{ "login": "yilmaz08", "id": 84680978, "node_id": "MDQ6VXNlcjg0NjgwOTc4", "avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yilmaz08", "html_url": "https://github.com/yilmaz08", "followers_url": "https://api.github.com/users/yil...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
4
2024-07-18T16:59:37
2024-07-21T19:02:53
2024-07-20T03:19:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I love using llama3:8b with Open WebUI's text generation, but recently I've realized that whatever I write there, llama3:8b just says random stuff. After that I tried message editing in Open WebUI, and even if I edit it, the message continues as if no assistant message was provided. Finally I...
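A minimal sketch of the behavior being reported, assuming a local server and an already-pulled llama3:8b: the request ends with a partial assistant message, and correct continuation means the model extends that text instead of starting a fresh reply.

```python
# The final message has role "assistant"; with continuation working,
# the reply should pick up from "A, B, C," rather than start over.
# Assumes a local server and an already-pulled llama3:8b.
import json
import urllib.request

payload = {
    "model": "llama3:8b",
    "messages": [
        {"role": "user", "content": "Recite the alphabet."},
        {"role": "assistant", "content": "A, B, C,"},  # partial reply to continue
    ],
    "stream": False,
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```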
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5775/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5775/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3164
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3164/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3164/comments
https://api.github.com/repos/ollama/ollama/issues/3164/events
https://github.com/ollama/ollama/issues/3164
2,188,106,862
I_kwDOJ0Z1Ps6Ca9xu
3,164
CUDA error: an illegal memory access was encountered
{ "login": "lizhichao999", "id": 34128722, "node_id": "MDQ6VXNlcjM0MTI4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34128722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lizhichao999", "html_url": "https://github.com/lizhichao999", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
null
[]
null
1
2024-03-15T09:41:36
2024-03-15T19:58:45
2024-03-15T19:58:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ![image](https://github.com/ollama/ollama/assets/34128722/101417a2-b080-4f5e-9ef1-9a5daca16959) ![image](https://github.com/ollama/ollama/assets/34128722/f3cd93fa-1882-4089-927b-d9048fee5346) ![image](https://github.com/ollama/ollama/assets/34128722/fda0c05b-08e2-4ea2-b9f3-a1ce642a3e11) oll...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3164/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7544
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7544/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7544/comments
https://api.github.com/repos/ollama/ollama/issues/7544/events
https://github.com/ollama/ollama/issues/7544
2,640,217,485
I_kwDOJ0Z1Ps6dXoWN
7,544
Despite being advertised as such, granite3-dense does not seem to support tools.
{ "login": "chhu", "id": 208672, "node_id": "MDQ6VXNlcjIwODY3Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/208672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chhu", "html_url": "https://github.com/chhu", "followers_url": "https://api.github.com/users/chhu/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-11-07T08:09:24
2025-01-13T01:22:02
2025-01-13T01:22:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? [granite3-dense](https://ollama.com/library/granite3-dense) I gave it bash as a tool, but it refuses to use it; other models work fine (qwen2.5 32b outshines all others for shell use). Tool setup and sys prompt here: https://github.com/chhu/ollash/blob/main/index.js asterope:~ >ask List file c...
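For reference, a minimal sketch of the tool-calling request shape on /api/chat; the tool name and schema below are illustrative stand-ins for the reporter's bash tool, and whether `message.tool_calls` comes back populated depends on the model, which is the gap reported here:

```python
# Send a tool definition with the chat request; a tool-capable model
# responds with message.tool_calls instead of (or alongside) plain text.
import json
import urllib.request

payload = {
    "model": "granite3-dense",
    "messages": [{"role": "user", "content": "List the files in /tmp."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "bash",  # illustrative tool, mirroring the report
            "description": "Run a bash command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }],
    "stream": False,
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
msg = json.loads(urllib.request.urlopen(req).read())["message"]
print(msg.get("tool_calls") or msg["content"])
```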
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7544/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3921
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3921/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3921/comments
https://api.github.com/repos/ollama/ollama/issues/3921/events
https://github.com/ollama/ollama/issues/3921
2,264,485,955
I_kwDOJ0Z1Ps6G-VBD
3,921
Copying quantized models doesn't work
{ "login": "saul-jb", "id": 2025187, "node_id": "MDQ6VXNlcjIwMjUxODc=", "avatar_url": "https://avatars.githubusercontent.com/u/2025187?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saul-jb", "html_url": "https://github.com/saul-jb", "followers_url": "https://api.github.com/users/saul-jb/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
10
2024-04-25T20:59:31
2024-05-14T03:00:15
2024-05-09T22:21:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I've just built the latest version through docker (5f73c08729e97eb3f760633c6ffba4f34cfe5538) and am getting errors copying some models: ``` $ ollama cp llama3:8b-instruct-q5_K_M llama3-8b-1 Error: model "llama3:8b-instruct-q5_K_M" not found $ ollama cp llama3 llama3-8b-1 Error: model "l...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3921/timeline
null
completed
false