Column schema and value statistics:

| Column | Type | Values / lengths |
|---|---|---|
| url | string | lengths 51–54 |
| repository_url | string | 1 value |
| labels_url | string | lengths 65–68 |
| comments_url | string | lengths 60–63 |
| events_url | string | lengths 58–61 |
| html_url | string | lengths 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | string | lengths 18–19 |
| number | int64 | 1–8.69k |
| title | string | lengths 1–382 |
| user | dict | |
| labels | list | lengths 0–5 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 2–118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 60–63 |
| performed_via_github_app | null | |
| state_reason | string | 4 values |
| is_pull_request | bool | 2 classes |

Each sample row is shown below as one record. Fields constant across every sampled row are listed once here rather than per record: repository_url is always https://api.github.com/repos/ollama/ollama; locked is always false; milestone, active_lock_reason, and performed_via_github_app are always null; sub_issues_summary is always { "total": 0, "completed": 0, "percent_completed": 0 }; draft is false on every pull-request row; and the per-record API URLs (url, labels_url, comments_url, events_url, timeline_url) follow the fixed patterns https://api.github.com/repos/ollama/ollama/issues/{number}, …/labels{/name}, …/comments, …/events, and …/timeline. The user, assignee, and closed_by objects are reduced to login and id; is_pull_request is reflected in each record's Issue / Pull request label; values truncated in the source remain truncated (...).
**#6525 · Issue: ollama collapses CPU**
- html_url: https://github.com/ollama/ollama/issues/6525 · id: 2,488,468,101 · node_id: I_kwDOJ0Z1Ps6UUwKF
- user: Hyphaed (id 19622367) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 9 · reactions: none
- created: 2024-08-27T07:00:05 · updated: 2024-09-16T20:41:39 · closed: 2024-08-27T15:12:18 · closed_by: Hyphaed
- body: ### What is the issue? ollama collapses CPU even when I stop the server the CPU still stuck from 75% to 90% even when I do have an RTX 3070 and terminal is showind that is using the GPU there is no error in terminal I have no verbose since I forcelly shutedown the workstation ### OS Linux ### GPU Nvid...
**#3972 · Pull request: Add support for building on Windows ARM64**
- html_url: https://github.com/ollama/ollama/pull/3972 · id: 2,266,851,869 · node_id: PR_kwDOJ0Z1Ps5t5oCp
- user: hmartinez82 (id 1100440) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 7 · reactions: none
- created: 2024-04-27T05:38:44 · updated: 2024-05-08T04:05:14 · closed: 2024-04-28T21:52:59 · merged: 2024-04-28T21:52:58 · closed_by: dhiltgen
- body: Part of #2589 - Builds only the cpu runner for ARM64 Also, the existing CMake recipe already enables NEON and Armv8.2 extensions when ARM64 is detected. - I'll create another PR with build instructions. The main trick is that MSY2 has the CLANGARM64 environment that provides gcc aliases to Clang. Maintainer chan...
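The PR body references MSYS2's CLANGARM64 environment; the promised build instructions landed in a later PR, so the sketch below is only a rough reconstruction. It assumes the era's documented Go build flow (`go generate` then `go build`) and MSYS2's clang-aarch64 package names; both are assumptions, not this PR's exact steps.

```sh
# Sketch only: run from an MSYS2 CLANGARM64 shell, which exposes Clang
# under gcc-style names. Package names and build flow are assumptions.
pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-cmake
go generate ./...   # builds the llama.cpp runner (CPU-only on Windows ARM64 here)
go build .
```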
**#5552 · Pull request: docs**
- html_url: https://github.com/ollama/ollama/pull/5552 · id: 2,396,671,216 · node_id: PR_kwDOJ0Z1Ps50wDvt
- user: mxyng (id 2372640) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-07-08T22:17:57 · updated: 2024-07-25T23:26:21 · closed: 2024-07-25T23:26:19 · merged: 2024-07-25T23:26:19 · closed_by: mxyng
- body: part of #5216 part of #5284 part of #5207
**#422 · Issue: `Error: Post "http://localhost:11434/api/generate": EOF` with long propmts with phind-codellama**
- html_url: https://github.com/ollama/ollama/issues/422 · id: 1,868,154,838 · node_id: I_kwDOJ0Z1Ps5vWcfW
- user: tomduncalf (id 5458070) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 20 · reactions: none
- created: 2023-08-26T16:00:33 · updated: 2024-04-22T09:12:40 · closed: 2023-09-07T11:08:40 · closed_by: jmorganca
- body: It seems like if you provide a long prompt (I was using one of 1,000ish tokens according to OpenAI tokenizer) with this model, you get an error `Error: Post "http://localhost:11434/api/generate": EOF`. It may or may not relate to the contents of the prompt as well as the length
**#7405 · Issue: Feature request: Add CLI argument to specify a system prompt**
- html_url: https://github.com/ollama/ollama/issues/7405 · id: 2,619,529,082 · node_id: I_kwDOJ0Z1Ps6cItd6
- user: Kerrick (id 552093) · author_association: NONE · labels: feature request
- state: open · comments: 1 · reactions: none
- created: 2024-10-28T20:54:54 · updated: 2024-10-29T21:48:49
- body: I'd like to be able to set the system prompt from the call to `ollama` in my shell, rather than in the conversation. For example: ```sh ollama run llama3.1 --system="Your nickname is 'Grass' now" ``` ...or... ```sh ollama run llama3.1 -s "system" "Your nickname is 'Grass' now" ``` With this ability, I c...
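Until a flag like the one requested exists, the same effect can be had by baking the prompt into a derived model. A minimal sketch using the documented Modelfile `SYSTEM` instruction; the model and nickname are taken from the request, the model name `grass` is a placeholder:

```sh
# Create a variant of llama3.1 with the system prompt baked in.
cat > Modelfile <<'EOF'
FROM llama3.1
SYSTEM "Your nickname is 'Grass' now"
EOF
ollama create grass -f Modelfile
ollama run grass   # now answers with the baked-in system prompt
```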
**#7620 · Pull request: api: fix typo in Golang API types docs**
- html_url: https://github.com/ollama/ollama/pull/7620 · id: 2,649,452,239 · node_id: PR_kwDOJ0Z1Ps6BhB1y
- user: neomantra (id 26842) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-11-11T14:10:40 · updated: 2024-12-08T17:32:55 · closed: 2024-11-12T00:21:58 · merged: 2024-11-12T00:21:58 · closed_by: jmorganca
- body: Fixes minor typos and grammar in `api/types.go` I had only reviewed `client.go` in my commit yesterday, sorry I didn't check this one too. Somehow the last PR had Python in the title, but it and this affect Golang.
**#8485 · Issue: [0.5.7] small models are loaded to GPU, but inference is slow and using a lot of CPU**
- html_url: https://github.com/ollama/ollama/issues/8485 · id: 2,797,657,711 · node_id: I_kwDOJ0Z1Ps6mwN5v
- user: kha84 (id 110789576) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 6 · reactions: +1 ×1
- created: 2025-01-19T14:33:04 · updated: 2025-01-20T09:50:51 · closed: 2025-01-20T09:50:51 · closed_by: rick-github
- body: ### What is the issue? Hello there. Just upgraded from ollama 0.4.x version to the latest one (0.5.7) and immediately noticed that inference of all models (even small ones, like llama 3.2 3B) become very slow. Like orders of magnitude slow. I can see that during inference CPU is being used intensively, even though th...
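For reports like this one, a quick triage step (not a fix) is `ollama ps`, whose PROCESSOR column shows how a loaded model is split between CPU and GPU. The sample output below is illustrative, not taken from this issue:

```sh
ollama ps
# NAME          ID            SIZE    PROCESSOR        UNTIL
# llama3.2:3b   a80c4f17acd5  4.0 GB  100% GPU         4 minutes from now   <- fully offloaded
# llama3.2:3b   a80c4f17acd5  4.0 GB  48%/52% CPU/GPU  4 minutes from now   <- partial offload: slow
```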
**#4145 · Pull request: Fix lint warnings**
- html_url: https://github.com/ollama/ollama/pull/4145 · id: 2,278,580,561 · node_id: PR_kwDOJ0Z1Ps5uhXWg
- user: dhiltgen (id 4033016) · author_association: COLLABORATOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-05-03T23:44:35 · updated: 2024-05-03T23:53:20 · closed: 2024-05-03T23:53:17 · merged: 2024-05-03T23:53:17 · closed_by: dhiltgen
- body: null
**#290 · Pull request: implement loading ggml lora adapters through the modelfile**
- html_url: https://github.com/ollama/ollama/pull/290 · id: 1,837,466,485 · node_id: PR_kwDOJ0Z1Ps5XPFzC
- user: mxyng (id 2372640) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: +1 ×1
- created: 2023-08-05T00:21:45 · updated: 2023-08-11T00:23:03 · closed: 2023-08-11T00:23:01 · merged: 2023-08-11T00:23:01 · closed_by: mxyng
- body: LoRA adapters can be added to Ollama models through the Modelfile and automatically applied when the model is loaded: ``` FROM llama2:13b TEMPLATE {{ .Prompt }} ADAPTER ./llama2-13b-storywriter-lora.ggml.bin ``` A few caveats: * LoRA adapters must be GGML. If the adapter isn't GGML, it can be converted with ...
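A short usage sketch for the Modelfile shown in the PR body, assuming the adapter file exists locally; the model name `storywriter` and the prompt are placeholders:

```sh
# Build a model from the Modelfile above, then run it.
ollama create storywriter -f ./Modelfile
ollama run storywriter "Once upon a time"
```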
**#6651 · Issue: The speed of using embedded models is much slower compared to xinference**
- html_url: https://github.com/ollama/ollama/issues/6651 · id: 2,507,064,853 · node_id: I_kwDOJ0Z1Ps6VbsYV
- user: yushengliao (id 29765903) · author_association: NONE · labels: feature request, ... (second label truncated in the source)
- state: open · comments: 0 · reactions: eyes ×1
- created: 2024-09-05T07:58:41 · updated: 2024-09-05T16:17:42
- body: I use the BGE-M3 model and send the same request, especially with xinference taking about 10 seconds and ollama taking about 200 seconds. I'm sure they all use GPUs. I found that xinference allocates more video memory, while ollama's video memory usage remains basically unchanged. Perhaps this is the reason for the s...
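A hedged repro sketch for timing a single embedding request against Ollama's documented embedding endpoint; the model tag and input text are placeholders, not taken from the report:

```sh
# Time one embedding call; the /api/embed endpoint takes {"model", "input"}.
time curl -s http://localhost:11434/api/embed \
  -d '{"model": "bge-m3", "input": "The sky is blue because of Rayleigh scattering"}'
```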
**#891 · Issue: Support remote `ollama create`**
- html_url: https://github.com/ollama/ollama/issues/891 · id: 1,959,768,778 · node_id: I_kwDOJ0Z1Ps50z7LK
- user: jmorganca (id 251292) · author_association: MEMBER · labels: feature request
- assignee: mxyng · assignees: mxyng
- state: closed (completed) · comments: 0 · reactions: none
- created: 2023-10-24T17:47:30 · updated: 2023-11-16T00:41:14 · closed: 2023-11-16T00:41:14 · closed_by: mxyng
- body: `ollama create` should support remote instances of Ollama with `OLLAMA_HOST` ``` OLLAMA_HOST=my-test-host:11434 ollama create my-model -f ./Modelfile ```
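The same pattern the issue proposes, shown alongside client commands that already honored `OLLAMA_HOST`; the host is a placeholder:

```sh
export OLLAMA_HOST=my-test-host:11434
ollama list                             # client commands follow OLLAMA_HOST
ollama create my-model -f ./Modelfile   # what this issue asked to support remotely
```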
**#3323 · Issue: Feat req: Add "Last updated" sorting to the hub**
- html_url: https://github.com/ollama/ollama/issues/3323 · id: 2,204,255,976 · node_id: I_kwDOJ0Z1Ps6DYkbo
- user: knoopx (id 100993) · author_association: NONE · labels: feature request, ... (second label truncated in the source)
- state: closed (completed) · comments: 1 · reactions: +1 ×2
- created: 2024-03-24T09:09:12 · updated: 2024-07-18T19:04:28 · closed: 2024-07-18T19:04:28 · closed_by: pdevine
- body: ### What are you trying to do? There's no way to discover recent updates for existing models. ### How should we solve this? Add "Last updated" sort choice ### What is the impact of not solving this? No way to find out recently updated models ### Anything else? _No response_
**#7820 · Issue: Instant closure when using shell input with piped output.**
- html_url: https://github.com/ollama/ollama/issues/7820 · id: 2,687,973,667 · node_id: I_kwDOJ0Z1Ps6gNzkj
- user: WyvernDotRed (id 41121402) · author_association: NONE · labels: bug, ... (second label truncated in the source)
- assignee: dhiltgen · assignees: dhiltgen
- state: open · comments: 6 · reactions: none
- created: 2024-11-24T16:36:36 · updated: 2024-12-10T21:07:24
- body: ### What is the issue? When running `ollama run [model] | cat` or `ollama run [model] > [file]`, ollama now closes immediately and does not accept any manual input. `ollama run [model]` still functions as expected. While `cat | ollama run [model] ...` seems to be the workaround, this requires entering ^d to have...
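Sketches consistent with the reporter's workaround: supply the prompt non-interactively instead of expecting an interactive prompt while output is piped. The model name is a placeholder:

```sh
ollama run llama3.2 "Summarize the release notes" > out.txt   # one-shot prompt argument
cat prompt.txt | ollama run llama3.2 > out.txt                # keep stdin fed from a file instead of a TTY
```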
**#7239 · Issue: Add Tab-Enabled Autocomplete for Local Model Parameters in Ollama CLI**
- html_url: https://github.com/ollama/ollama/issues/7239 · id: 2,594,416,445 · node_id: I_kwDOJ0Z1Ps6ao6c9
- user: lucianoayres (id 20209393) · author_association: NONE · labels: feature request
- state: closed (completed) · comments: 3 · reactions: +1 ×2
- created: 2024-10-17T11:20:14 · updated: 2025-01-13T00:46:55 · closed: 2025-01-13T00:46:55 · closed_by: rick-github
- body: It would greatly enhance usability if the Ollama CLI supported tab-autocomplete for model names when using commands like `run`, `show`, `list`, etc. For example: ```bash # This would autocomplete to something like `llama3.2`, based on the locally available models. ollama run lla<TAB> ``` Implementing this acro...
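A user-side bash sketch of the requested behavior (not an official feature): complete arguments to `ollama` from the names `ollama list` prints. It crudely completes every argument, not just the model position, so treat it as a starting point:

```sh
# Complete `ollama ...` arguments from locally installed model names.
_ollama_models() {
  local models
  models=$(ollama list 2>/dev/null | awk 'NR>1 {print $1}')   # NAME is the first column
  COMPREPLY=($(compgen -W "$models" -- "${COMP_WORDS[COMP_CWORD]}"))
}
complete -F _ollama_models ollama
```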
**#2138 · Pull request: Update README.md - Community Integrations - Obsidian Local GPT plugin**
- html_url: https://github.com/ollama/ollama/pull/2138 · id: 2,094,374,620 · node_id: PR_kwDOJ0Z1Ps5kvv-F
- user: pfrankov (id 584632) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-01-22T17:11:44 · updated: 2024-02-22T15:52:36 · closed: 2024-02-22T15:52:36 · merged: 2024-02-22T15:52:36 · closed_by: BruceMacD
- body: Local GPT plugin for Obsidian mainly relies on Ollama provider ![image](https://github.com/pfrankov/obsidian-local-gpt/assets/584632/724d4399-cb6c-4531-9f04-a1e5df2e3dad) Also works with images <img width="400" src="https://github.com/pfrankov/obsidian-local-gpt/assets/584632/a05d68fa-5419-4386-ac43-82b9513999...
**#5783 · Issue: erorr loading models x3 7900 XTX #5708**
- html_url: https://github.com/ollama/ollama/issues/5783 · id: 2,417,335,104 · node_id: I_kwDOJ0Z1Ps6QFZtA
- user: darwinvelez58 (id 118543481) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 1 · reactions: none
- created: 2024-07-18T20:28:02 · updated: 2024-07-22T23:07:52 · closed: 2024-07-22T23:07:52 · closed_by: dhiltgen
- body: ### What is the issue? Few Days ago I report this error #5708, #5710 suppose to fix the issue but I still have the same error. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.2.6-rocm
**#1528 · Pull request: Add unit test of API routes**
- html_url: https://github.com/ollama/ollama/pull/1528 · id: 2,042,606,696 · node_id: PR_kwDOJ0Z1Ps5iDRhN
- user: pdevine (id 75239) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2023-12-14T22:35:10 · updated: 2023-12-15T00:47:41 · closed: 2023-12-15T00:47:40 · merged: 2023-12-15T00:47:40 · closed_by: pdevine
- body: This change modifies the base server to allow it to be more easily unit tested. It also adds in a simple unit test to "/api/version" to demonstrate how to add unit tests in the future.
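The route the new unit test covers can also be exercised by hand against a running server; the version in the sample response is illustrative:

```sh
curl -s http://localhost:11434/api/version
# {"version":"0.1.17"}
```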
**#5830 · Issue: OpenAI endpoint gives 404**
- html_url: https://github.com/ollama/ollama/issues/5830 · id: 2,421,426,851 · node_id: I_kwDOJ0Z1Ps6QVAqj
- user: defaultsecurity (id 34036534) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 2 · reactions: none
- created: 2024-07-21T14:20:07 · updated: 2024-07-22T06:18:03 · closed: 2024-07-22T06:18:03 · closed_by: defaultsecurity
- body: ### What is the issue? - http://localhost:11434/v1/chat/completions (gives 404) - http://localhost:11434 (shows ollama is running) Otherwise Ollama is working. I'm not sure what to do. ### OS Linux ### GPU Nvidia ### CPU AMD ### Ollama version 0.2.7
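One hedged sanity check for this class of report: `/v1/chat/completions` is matched for POST requests with a JSON body, so opening the URL in a browser (a GET) can return 404 even when the server is healthy. The model tag below is a placeholder and must already be pulled:

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'
```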
**#2920 · Issue: ollama call failed with status code 500 llama 2**
- html_url: https://github.com/ollama/ollama/issues/2920 · id: 2,167,396,829 · node_id: I_kwDOJ0Z1Ps6BL9nd
- user: sabahatza (id 135341585) · author_association: NONE · labels: none
- state: closed (completed) · comments: 3 · reactions: none
- created: 2024-03-04T17:44:32 · updated: 2024-03-04T17:57:26 · closed: 2024-03-04T17:48:17 · closed_by: jmorganca
- body: Hi team, I am trying to run the llama2 model locally ( I was doing it previously for the last couple of weeks without any problems), but now I face the following error when I am trying to -> ollama run llama2 `Error: error loading model /Users/S_Z/.ollama/models/blobs/sha256:8934d96d3f08982e95922b2b7a2c626a1fe873d7...
**#601 · Pull request: Update README.md for linux + cleanup**
- html_url: https://github.com/ollama/ollama/pull/601 · id: 1,912,778,743 · node_id: PR_kwDOJ0Z1Ps5bMTBe
- user: mchiang0610 (id 3325447) · author_association: MEMBER · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2023-09-26T06:30:46 · updated: 2023-09-26T06:44:54 · closed: 2023-09-26T06:44:53 · merged: 2023-09-26T06:44:53 · closed_by: jmorganca
- body: null
**#5208 · Pull request: Support image input for OpenAI chat compatibility**
- html_url: https://github.com/ollama/ollama/pull/5208 · id: 2,367,386,322 · node_id: PR_kwDOJ0Z1Ps5zOuoI
- user: royjhan (id 65097070) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 8 · reactions: hooray ×6
- created: 2024-06-22T00:31:49 · updated: 2024-07-30T20:10:05 · closed: 2024-07-14T05:07:45 · merged: 2024-07-14T05:07:45 · closed_by: royjhan
- body: Supports passing in base64 encoded image into image_url. E.g. ``` curl http://localhost:11434/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "llava", "messages": [ { "role": "user", "content": [ { "type": "text", ...
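A complete, hedged variant of the truncated request in the PR body, following the OpenAI-style content array this PR implements; the base64 payload is a placeholder:

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,iVBORw0KGgo..."}}
      ]
    }]
  }'
```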
**#1640 · Pull request: added logprobs (`n_probs`)**
- html_url: https://github.com/ollama/ollama/pull/1640 · id: 2,051,142,771 · node_id: PR_kwDOJ0Z1Ps5igMrN
- user: janpf (id 9437600) · author_association: NONE · labels: none
- assignee: ParthSareen · assignees: ParthSareen
- state: closed · merged: no (closed unmerged) · comments: 42 · reactions: 38 total (+1 ×17, hooray ×11, heart ×10)
- created: 2023-12-20T19:26:27 · updated: 2025-01-07T19:25:56 · closed: 2025-01-07T19:25:56 · closed_by: ParthSareen
- body: As discussed on discord I implemented the feature. It just passes through the probs from the llamacpp server. Sorry, first time writing Go, might have missed something. https://discord.com/channels/1128867683291627614/1128867684130508875/1187028494228664340
**#3065 · Pull request: relay load model errors to the client**
- html_url: https://github.com/ollama/ollama/pull/3065 · id: 2,180,154,843 · node_id: PR_kwDOJ0Z1Ps5pTJW3
- user: BruceMacD (id 5853428) · author_association: CONTRIBUTOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-03-11T20:25:07 · updated: 2024-03-11T20:48:28 · closed: 2024-03-11T20:48:27 · merged: 2024-03-11T20:48:27 · closed_by: BruceMacD
- body: Relay errors on model load, this is needed to help people troubleshoot the specific problem they are experiencing when running a model. This function is a bottle-neck where many different errors can occur. As seen in #2753, there are many issues when the generic "failed to load model" error being reported. In order...
**#661 · Pull request: Documenting Docker Hub image and OpenAI compatibility**
- html_url: https://github.com/ollama/ollama/pull/661 · id: 1,920,414,250 · node_id: PR_kwDOJ0Z1Ps5bmHz9
- user: jamesbraza (id 8990777) · author_association: CONTRIBUTOR · labels: none
- state: closed · merged: no (closed unmerged) · comments: 2 · reactions: none
- created: 2023-09-30T22:09:47 · updated: 2023-10-25T20:18:44 · closed: 2023-10-24T23:15:30 · closed_by: jmorganca
- body: - Closes https://github.com/jmorganca/ollama/issues/538 - Upstreams some knowledge from https://github.com/jmorganca/ollama/issues/546 - Simplifies `brew install` to one line
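The Docker Hub usage this PR set out to document, as it appeared in the README of the era (CPU-only variant; the container serves the API on port 11434):

```sh
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2
```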
**#5233 · Issue: filtering library models based on tags?**
- html_url: https://github.com/ollama/ollama/issues/5233 · id: 2,368,103,862 · node_id: I_kwDOJ0Z1Ps6NJmW2
- user: itsPreto (id 45348368) · author_association: NONE · labels: feature request, ... (second label truncated in the source)
- state: open · comments: 0 · reactions: +1 ×1
- created: 2024-06-23T01:02:38 · updated: 2024-07-08T17:17:16
- body: can we get something like this for the models library? would be reallyyyy nice! <img width="387" alt="Screenshot 2024-06-22 at 9 01 25 PM" src="https://github.com/ollama/ollama/assets/45348368/bd142627-9426-41af-b451-67dc82c427df">
**#5806 · Issue: allowing ollama 3 to access local txt files for a larger memory?**
- html_url: https://github.com/ollama/ollama/issues/5806 · id: 2,420,556,455 · node_id: I_kwDOJ0Z1Ps6QRsKn
- user: boba1234567890 (id 107341969) · author_association: NONE · labels: model request
- state: closed (completed) · comments: 3 · reactions: none
- created: 2024-07-20T04:34:36 · updated: 2024-09-04T04:29:27 · closed: 2024-09-04T04:29:26 · closed_by: jmorganca
- body: is there a way to allow ollama 3 to use local txt files for a larger memory and maybe other stuff?
**#5878 · Issue: Apple LLM -> DCLM-7B**
- html_url: https://github.com/ollama/ollama/issues/5878 · id: 2,425,437,900 · node_id: I_kwDOJ0Z1Ps6QkT7M
- user: dvelez3815 (id 40648189) · author_association: NONE · labels: model request
- state: open · comments: 0 · reactions: rocket ×5
- created: 2024-07-23T14:59:28 · updated: 2024-07-23T15:00:50
- body: apple llm https://huggingface.co/apple/DCLM-7B ![image](https://github.com/user-attachments/assets/39d07483-f4e0-4884-9ee4-8149efc45b79)
**#7300 · Issue: Llama3.2-vision Run Error**
- html_url: https://github.com/ollama/ollama/issues/7300 · id: 2,603,110,739 · node_id: I_kwDOJ0Z1Ps6bKFFT
- user: mruckman1 (id 10116867) · author_association: NONE · labels: bug
- state: closed (completed) · comments: 21 · reactions: +1 ×1
- created: 2024-10-21T16:40:09 · updated: 2024-11-05T16:16:29 · closed: 2024-10-23T01:29:12 · closed_by: pdevine
- body: ### What is the issue? 1. Updated Ollama this morning. 2. Entered `ollama run x/llama3.2-vision` on macbook 3. Got below output: > pulling manifest > pulling 652e85aa1e14... 100% ▕████████████████▏ 6.0 GB > pulling 622429e8d318... 100% ▕████████████████▏ 1.9 GB ...
**#7437 · Pull request: Give unicode test more time to run**
- html_url: https://github.com/ollama/ollama/pull/7437 · id: 2,625,863,096 · node_id: PR_kwDOJ0Z1Ps6AeYjb
- user: dhiltgen (id 4033016) · author_association: COLLABORATOR · labels: none
- state: closed · comments: 0 · reactions: none
- created: 2024-10-31T02:58:26 · updated: 2024-10-31T20:35:33 · closed: 2024-10-31T20:35:31 · merged: 2024-10-31T20:35:31 · closed_by: dhiltgen
- body: Some slower GPUs (or partial CPU/GPU loads) can take more than the default 30s to complete this test
https://api.github.com/repos/ollama/ollama/issues/800
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/800/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/800/comments
https://api.github.com/repos/ollama/ollama/issues/800/events
https://github.com/ollama/ollama/pull/800
1,944,775,176
PR_kwDOJ0Z1Ps5c4Ey-
800
API docs link fix
{ "login": "richawo", "id": 35015261, "node_id": "MDQ6VXNlcjM1MDE1MjYx", "avatar_url": "https://avatars.githubusercontent.com/u/35015261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richawo", "html_url": "https://github.com/richawo", "followers_url": "https://api.github.com/users/richaw...
[]
closed
false
null
[]
null
3
2023-10-16T09:28:16
2023-10-21T16:00:22
2023-10-21T16:00:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/800", "html_url": "https://github.com/ollama/ollama/pull/800", "diff_url": "https://github.com/ollama/ollama/pull/800.diff", "patch_url": "https://github.com/ollama/ollama/pull/800.patch", "merged_at": null }
For some reason, the relative API docs link is broken (`api` is a special path on GitHub). Replaced the API docs link in README.md with the absolute path. Fixes issue #802.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/800/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4174
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4174/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4174/comments
https://api.github.com/repos/ollama/ollama/issues/4174/events
https://github.com/ollama/ollama/pull/4174
2,279,618,845
PR_kwDOJ0Z1Ps5ukkak
4,174
update libraries for langchain_community + llama3 changed from llama2
{ "login": "Drlordbasil", "id": 126736516, "node_id": "U_kgDOB43YhA", "avatar_url": "https://avatars.githubusercontent.com/u/126736516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Drlordbasil", "html_url": "https://github.com/Drlordbasil", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-05-05T16:46:34
2024-05-06T02:06:32
2024-05-05T23:07:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4174", "html_url": "https://github.com/ollama/ollama/pull/4174", "diff_url": "https://github.com/ollama/ollama/pull/4174.diff", "patch_url": "https://github.com/ollama/ollama/pull/4174.patch", "merged_at": "2024-05-05T23:07:04" }
Changed: - `run` -> `invoke` for the updated lib - updated langchain libraries to non-deprecated versions - updated llama2 to llama3
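A minimal sketch of the call pattern this PR moves the examples to, assuming the `langchain_community` package is installed and a `llama3` model has already been pulled locally:

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")  # the PR swaps llama2 for llama3 in the examples

# invoke() replaces the deprecated run()/__call__ style:
print(llm.invoke("Why is the sky blue?"))
```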
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4174/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3782
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3782/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3782/comments
https://api.github.com/repos/ollama/ollama/issues/3782/events
https://github.com/ollama/ollama/pull/3782
2,254,683,690
PR_kwDOJ0Z1Ps5tQTYN
3,782
Critical fix from llama.cpp JSON grammar to forbid un-escaped escape characters in JSON strings
{ "login": "hughescr", "id": 46348, "node_id": "MDQ6VXNlcjQ2MzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/46348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hughescr", "html_url": "https://github.com/hughescr", "followers_url": "https://api.github.com/users/hughescr/foll...
[]
closed
false
null
[]
null
1
2024-04-20T19:18:56
2024-06-10T01:53:52
2024-06-09T17:57:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3782", "html_url": "https://github.com/ollama/ollama/pull/3782", "diff_url": "https://github.com/ollama/ollama/pull/3782.diff", "patch_url": "https://github.com/ollama/ollama/pull/3782.patch", "merged_at": "2024-06-09T17:57:09" }
JSON generation is broken, as models can insert control characters inside strings, which violates JSON. For example, with the current JSON grammar, models could generate: ``` { "key": "value broken" } ``` This is incorrect, and if a linebreak is wanted in the middle of the string there, it should be: ``` ...
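To illustrate the distinction the grammar fix enforces, here is a small demonstration using Python's `json` module as a stand-in for any strict JSON parser (the actual fix lives in the GBNF grammar, not in Python):

```python
import json

bad = '{ "key": "value\nbroken" }'    # raw control character inside the string
good = '{ "key": "value\\nbroken" }'  # same linebreak, properly escaped as \n

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("rejected:", err)  # strict parsers refuse the raw newline

print(json.loads(good))      # {'key': 'value\nbroken'}
```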
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3782/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3782/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7608
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7608/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7608/comments
https://api.github.com/repos/ollama/ollama/issues/7608/events
https://github.com/ollama/ollama/issues/7608
2,648,022,235
I_kwDOJ0Z1Ps6d1Zzb
7,608
pulling manifest error
{ "login": "the-nine-nation", "id": 103977945, "node_id": "U_kgDOBjKT2Q", "avatar_url": "https://avatars.githubusercontent.com/u/103977945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/the-nine-nation", "html_url": "https://github.com/the-nine-nation", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-11-11T03:32:24
2024-11-11T03:34:34
2024-11-11T03:34:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? How can I skip pulling the manifest? My machine has no internet access. ### OS Linux, Docker ### GPU _No response_ ### CPU Intel ### Ollama version newest
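For anyone with the same constraint: one common workaround is to run `ollama pull` on an internet-connected machine and copy the model store across. A hedged sketch, assuming the default per-user Linux path (the Linux service install may use /usr/share/ollama/.ollama/models instead, and the destination here is a placeholder):

```python
import shutil
from pathlib import Path

src = Path.home() / ".ollama" / "models"   # store populated on the online machine
dst = Path("/mnt/usb/ollama-models")       # hypothetical transfer location

shutil.copytree(src, dst, dirs_exist_ok=True)  # copies blobs/ and manifests/
```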
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7608/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3118
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3118/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3118/comments
https://api.github.com/repos/ollama/ollama/issues/3118/events
https://github.com/ollama/ollama/issues/3118
2,184,554,272
I_kwDOJ0Z1Ps6CNacg
3,118
ollama RAM use on orangepi 5
{ "login": "parzzd", "id": 103915075, "node_id": "U_kgDOBjGeQw", "avatar_url": "https://avatars.githubusercontent.com/u/103915075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parzzd", "html_url": "https://github.com/parzzd", "followers_url": "https://api.github.com/users/parzzd/follower...
[]
closed
false
null
[]
null
2
2024-03-13T17:35:48
2024-03-13T18:21:56
2024-03-13T18:15:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm trying a model on my SBC; it has 16 GB of RAM, but execution only uses 1.6 GB, making the model take a long time to process. Is there any parameter or configuration to allow more? I'm new to Ollama, so any answer would be appreciated. ![scr_proceso](https://github.com/ollama/ollama/assets/103915075/5e43931c-9cb0-40d...
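The knobs most relevant to this question are request-level options such as `num_thread` and `num_ctx`; note that total RAM use is bounded by the quantized model size, so ~1.6 GB can be entirely normal for a small model. A hedged sketch of passing options over the REST API (the model name and values are illustrative, not recommendations):

```python
import json, urllib.request

payload = {
    "model": "tinyllama",                      # example model name
    "prompt": "Hello",
    "stream": False,
    "options": {"num_thread": 8, "num_ctx": 2048},
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])
```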
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3118/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/393
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/393/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/393/comments
https://api.github.com/repos/ollama/ollama/issues/393/events
https://github.com/ollama/ollama/pull/393
1,860,413,586
PR_kwDOJ0Z1Ps5YcUmI
393
use url.URL
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-22T01:56:35
2023-08-22T22:51:34
2023-08-22T22:51:33
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/393", "html_url": "https://github.com/ollama/ollama/pull/393", "diff_url": "https://github.com/ollama/ollama/pull/393.diff", "patch_url": "https://github.com/ollama/ollama/pull/393.patch", "merged_at": "2023-08-22T22:51:33" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/393/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/961
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/961/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/961/comments
https://api.github.com/repos/ollama/ollama/issues/961/events
https://github.com/ollama/ollama/issues/961
1,972,308,192
I_kwDOJ0Z1Ps51jwjg
961
garbage output on small models spread to many GPUs
{ "login": "chymian", "id": 1899961, "node_id": "MDQ6VXNlcjE4OTk5NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1899961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chymian", "html_url": "https://github.com/chymian", "followers_url": "https://api.github.com/users/chymian/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2023-11-01T12:43:10
2024-04-23T15:31:40
2024-04-23T15:31:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When loading a small model on multiple GPUs, it produces garbage. The machine has 4 x 3070 (8GB) and an older i5-7400, Ubuntu 22.04, CUDA 11.8 ### How to reproduce Starting the server by hand: ```bash ollama serve ``` ```bash ollama run zephyr >>> why is the sky blue? acia####################################...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/961/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/961/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1978
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1978/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1978/comments
https://api.github.com/repos/ollama/ollama/issues/1978/events
https://github.com/ollama/ollama/issues/1978
2,080,398,998
I_kwDOJ0Z1Ps58AF6W
1,978
Error "unknown architecture MistralModel" during quantization
{ "login": "philippgille", "id": 170670, "node_id": "MDQ6VXNlcjE3MDY3MA==", "avatar_url": "https://avatars.githubusercontent.com/u/170670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philippgille", "html_url": "https://github.com/philippgille", "followers_url": "https://api.github.com/u...
[]
closed
false
null
[]
null
2
2024-01-13T17:16:28
2024-05-07T00:08:53
2024-05-06T23:48:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello :wave: , First of all thank you very much for creating and maintaining ollama! It's so simple to use :+1: Now I wanted to use ollama for creating embeddings, and saw https://huggingface.co/intfloat/e5-mistral-7b-instruct performing very well on the [embeddings benchmark](https://huggingface.co/spaces/mteb/lea...
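For context, once a model does convert, embeddings are served over the `/api/embeddings` endpoint. A minimal sketch, assuming an embedding-capable model such as `nomic-embed-text` has already been pulled (the model name is an example, unrelated to the conversion error above):

```python
import json, urllib.request

payload = {"model": "nomic-embed-text", "prompt": "The sky is blue."}
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
vec = json.load(urllib.request.urlopen(req))["embedding"]
print(len(vec))  # dimensionality of the returned vector
```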
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1978/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4861
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4861/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4861/comments
https://api.github.com/repos/ollama/ollama/issues/4861/events
https://github.com/ollama/ollama/issues/4861
2,338,549,595
I_kwDOJ0Z1Ps6LY29b
4,861
Jetson - "ollama run" command loads until timeout
{ "login": "Vassar-HARPER-Project", "id": 171359116, "node_id": "U_kgDOCja7jA", "avatar_url": "https://avatars.githubusercontent.com/u/171359116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vassar-HARPER-Project", "html_url": "https://github.com/Vassar-HARPER-Project", "followers_url": ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
11
2024-06-06T15:34:26
2024-11-12T18:31:55
2024-11-12T18:31:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Upon running "ollama run gemma:2b" (though this happens for all tested models: llama3, phi, tinyllama), the loading animation appears and after ~5 minutes (estimate, untimed), the response / result of the command is: `Error: timed out waiting for llama runner to start - progress 1.00 - ` ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4861/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4861/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7141
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7141/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7141/comments
https://api.github.com/repos/ollama/ollama/issues/7141/events
https://github.com/ollama/ollama/pull/7141
2,573,860,762
PR_kwDOJ0Z1Ps59-4qQ
7,141
Fix build leakages
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-10-08T18:00:51
2024-10-08T20:05:03
2024-10-08T20:05:00
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7141", "html_url": "https://github.com/ollama/ollama/pull/7141", "diff_url": "https://github.com/ollama/ollama/pull/7141.diff", "patch_url": "https://github.com/ollama/ollama/pull/7141.patch", "merged_at": "2024-10-08T20:05:00" }
The recent change to applying patches leaves the submodule dirty based on "new commits" being present. This ensures we clean up so the tree no longer reports dirty after a `go generate ./...` run. The Makefile was being a bit too aggressive in cleaning things up and would result in deleting the placeholder files wh...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7141/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6269
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6269/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6269/comments
https://api.github.com/repos/ollama/ollama/issues/6269/events
https://github.com/ollama/ollama/issues/6269
2,456,908,603
I_kwDOJ0Z1Ps6ScXM7
6,269
Please add LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct model
{ "login": "xest", "id": 4961215, "node_id": "MDQ6VXNlcjQ5NjEyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/4961215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xest", "html_url": "https://github.com/xest", "followers_url": "https://api.github.com/users/xest/followers", ...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
9
2024-08-09T01:07:48
2024-12-10T08:05:04
2024-12-10T08:05:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
* huggingface: [LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6269/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6269/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3931
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3931/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3931/comments
https://api.github.com/repos/ollama/ollama/issues/3931/events
https://github.com/ollama/ollama/issues/3931
2,264,958,659
I_kwDOJ0Z1Ps6HAIbD
3,931
Digest mismatch, file must be downloaded again
{ "login": "tttt-0814", "id": 39620928, "node_id": "MDQ6VXNlcjM5NjIwOTI4", "avatar_url": "https://avatars.githubusercontent.com/u/39620928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tttt-0814", "html_url": "https://github.com/tttt-0814", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-04-26T04:52:11
2025-01-30T02:39:33
2024-05-09T21:08:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I tried to pull nomic-embed-text, but got the error below. I also tried to pull other models, but got the same error. $ ollama pull nomic-embed-text pulling manifest pulling 970aa74c0a90... 100% ▕████████████████████████████████████████████████████████████████████████████████████████...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3931/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3931/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4879
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4879/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4879/comments
https://api.github.com/repos/ollama/ollama/issues/4879/events
https://github.com/ollama/ollama/pull/4879
2,338,955,742
PR_kwDOJ0Z1Ps5xuCen
4,879
API app/browser access
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-06-06T18:56:08
2024-06-06T22:19:04
2024-06-06T22:19:03
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4879", "html_url": "https://github.com/ollama/ollama/pull/4879", "diff_url": "https://github.com/ollama/ollama/pull/4879.diff", "patch_url": "https://github.com/ollama/ollama/pull/4879.patch", "merged_at": "2024-06-06T22:19:03" }
Fixes #4791 Fixes #3799 Fixes #4388
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4879/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4879/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1764
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1764/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1764/comments
https://api.github.com/repos/ollama/ollama/issues/1764/events
https://github.com/ollama/ollama/pull/1764
2,063,147,841
PR_kwDOJ0Z1Ps5jFzXu
1,764
keyboard shortcut help
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-01-03T01:59:41
2024-01-03T02:04:13
2024-01-03T02:04:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1764", "html_url": "https://github.com/ollama/ollama/pull/1764", "diff_url": "https://github.com/ollama/ollama/pull/1764.diff", "patch_url": "https://github.com/ollama/ollama/pull/1764.patch", "merged_at": "2024-01-03T02:04:13" }
This change adds some help in the REPL for using the keyboard shortcut commands.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1764/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4200
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4200/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4200/comments
https://api.github.com/repos/ollama/ollama/issues/4200/events
https://github.com/ollama/ollama/issues/4200
2,280,915,245
I_kwDOJ0Z1Ps6H9AEt
4,200
http://localhost:11434/api endpoint giving 404 error
{ "login": "ritesh7911", "id": 64787172, "node_id": "MDQ6VXNlcjY0Nzg3MTcy", "avatar_url": "https://avatars.githubusercontent.com/u/64787172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ritesh7911", "html_url": "https://github.com/ritesh7911", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-05-06T13:42:10
2024-05-08T20:27:28
2024-05-08T20:27:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am using the latest version of Windows. As per the README, when I hit http://localhost:11434 I get "Ollama is running", but http://localhost:11434/api gives a 404 error. ### OS Windows ### GPU AMD ### CPU Intel ### Ollama version 0.1.33
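This 404 is expected: `/api` is only a path prefix, not an endpoint itself. The API responds at concrete routes such as `/api/tags` or `/api/generate`. A quick check:

```python
import json, urllib.request

# The bare /api path has no handler (hence the 404); concrete endpoints do:
print(json.load(urllib.request.urlopen("http://localhost:11434/api/tags")))
```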
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4200/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3554
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3554/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3554/comments
https://api.github.com/repos/ollama/ollama/issues/3554/events
https://github.com/ollama/ollama/issues/3554
2,233,075,542
I_kwDOJ0Z1Ps6FGgdW
3,554
Potential problems with the `llm/ext_server/server.cpp` not accepting `--ubatch-size ` option
{ "login": "jukofyork", "id": 69222624, "node_id": "MDQ6VXNlcjY5MjIyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jukofyork", "html_url": "https://github.com/jukofyork", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[ { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://...
null
1
2024-04-09T10:01:50
2024-11-23T20:17:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Not sure what to list this issue under (it's a potential bug, I think). Recently `llama.cpp` added an option called `--ubatch-size` and appears to have changed the default value (and possibly the meaning) of the old `--batch-size` option: https://github.com/ggerganov/llama.cpp/pull/601...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3554/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3554/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4742
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4742/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4742/comments
https://api.github.com/repos/ollama/ollama/issues/4742/events
https://github.com/ollama/ollama/issues/4742
2,327,001,333
I_kwDOJ0Z1Ps6Kszj1
4,742
VRAM allocation error when loading different models with different OLLAMA_VRAM_MAX configurations
{ "login": "hamkido", "id": 43724352, "node_id": "MDQ6VXNlcjQzNzI0MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/43724352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamkido", "html_url": "https://github.com/hamkido", "followers_url": "https://api.github.com/users/hamkid...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-05-31T05:38:13
2024-06-05T06:34:19
2024-05-31T06:31:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have two AMD 7900 XTX 24 GB GPUs. When using ollama, I encounter different memory allocation errors and exit errors. 1. No OLLAMA_VRAM_MAX configuration: the large model deepseek-llm:67b-chat can be loaded correctly, but if you call something bigger, such as qwen:72b or command-r-plus, the dis...
{ "login": "hamkido", "id": 43724352, "node_id": "MDQ6VXNlcjQzNzI0MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/43724352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamkido", "html_url": "https://github.com/hamkido", "followers_url": "https://api.github.com/users/hamkid...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4742/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8526
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8526/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8526/comments
https://api.github.com/repos/ollama/ollama/issues/8526/events
https://github.com/ollama/ollama/issues/8526
2,803,177,332
I_kwDOJ0Z1Ps6nFRd0
8,526
how to get English output
{ "login": "jarkkop", "id": 5814285, "node_id": "MDQ6VXNlcjU4MTQyODU=", "avatar_url": "https://avatars.githubusercontent.com/u/5814285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarkkop", "html_url": "https://github.com/jarkkop", "followers_url": "https://api.github.com/users/jarkkop/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2025-01-22T01:51:56
2025-01-22T04:17:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? M:\AI\ollama>ollama run deepseek-r1:7b >>> list philosophers <think> </think> Here is a list of some of the most influential and notable philosophers throughout history, organized by era and region: ### Ancient Philosophy (c. 600–321 BCE) - **Thales of Miletus** (c. 624–548 BCE):被认为是第一个哲学家,提出...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8526/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/672
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/672/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/672/comments
https://api.github.com/repos/ollama/ollama/issues/672/events
https://github.com/ollama/ollama/pull/672
1,922,337,068
PR_kwDOJ0Z1Ps5bse6w
672
Relay default values to llama runner
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
3
2023-10-02T17:32:53
2023-10-02T18:53:17
2023-10-02T18:53:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/672", "html_url": "https://github.com/ollama/ollama/pull/672", "diff_url": "https://github.com/ollama/ollama/pull/672.diff", "patch_url": "https://github.com/ollama/ollama/pull/672.patch", "merged_at": "2023-10-02T18:53:16" }
Thanks to @hallh for #663. This change cherry-picks that PR, relays all our defaults, and does some re-organizing of the code to make it easier to read.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/672/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8113
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8113/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8113/comments
https://api.github.com/repos/ollama/ollama/issues/8113/events
https://github.com/ollama/ollama/pull/8113
2,741,618,542
PR_kwDOJ0Z1Ps6FULcl
8,113
llama: add qwen2vl support
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
open
false
null
[]
null
0
2024-12-16T07:59:10
2025-01-15T11:14:59
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8113", "html_url": "https://github.com/ollama/ollama/pull/8113", "diff_url": "https://github.com/ollama/ollama/pull/8113.diff", "patch_url": "https://github.com/ollama/ollama/pull/8113.patch", "merged_at": null }
Still missing: add 4 positions per embedding when creating a batch
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8113/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/ollama/ollama/issues/8113/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/685
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/685/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/685/comments
https://api.github.com/repos/ollama/ollama/issues/685/events
https://github.com/ollama/ollama/issues/685
1,923,133,063
I_kwDOJ0Z1Ps5yoK6H
685
Question: where are all the `Modelfile`s?
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
8
2023-10-03T01:50:20
2023-10-06T15:15:28
2023-10-04T02:40:43
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://ollama.ai/library has a lot of models. I would like to add a new model, and want to make sure it uses the GPU. So I am looking to refer to `Modelfile`s for models featured on https://ollama.ai/library. Where are the source `Modelfile`s for the current "built in" models?
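One way to recover the effective Modelfile of any model already pulled from the library is the CLI's `show` command. A small wrapper sketch (the model name is just an example):

```python
import subprocess

# `ollama show --modelfile <model>` prints the Modelfile of an installed model.
out = subprocess.run(
    ["ollama", "show", "--modelfile", "llama2"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```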
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/685/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/685/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2246
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2246/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2246/comments
https://api.github.com/repos/ollama/ollama/issues/2246/events
https://github.com/ollama/ollama/pull/2246
2,104,401,475
PR_kwDOJ0Z1Ps5lRIEy
2,246
Don't disable GPUs on arm without AVX
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-28T23:38:00
2024-01-29T00:26:58
2024-01-29T00:26:55
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2246", "html_url": "https://github.com/ollama/ollama/pull/2246", "diff_url": "https://github.com/ollama/ollama/pull/2246.diff", "patch_url": "https://github.com/ollama/ollama/pull/2246.patch", "merged_at": "2024-01-29T00:26:55" }
AVX is an x86 feature, so ARM should be excluded from the check. Related to #1979
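To sketch the corrected gating logic (Ollama's real check is in Go; this Python version is only illustrative and Linux-specific):

```python
import platform

def should_disable_gpu_for_missing_avx() -> bool:
    # AVX is an x86 feature, so ARM machines are excluded from the check:
    if platform.machine() not in ("x86_64", "amd64", "i386", "i686"):
        return False
    try:
        with open("/proc/cpuinfo") as f:   # Linux-only source of CPU flags
            return "avx" not in f.read()   # no AVX flag -> fall back to CPU
    except OSError:
        return False
```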
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2246/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2489
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2489/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2489/comments
https://api.github.com/repos/ollama/ollama/issues/2489/events
https://github.com/ollama/ollama/issues/2489
2,133,991,624
I_kwDOJ0Z1Ps5_MiDI
2,489
what is smallest model that know about comp system administration, network admin, etc?
{ "login": "zinwelzl", "id": 113045180, "node_id": "U_kgDOBrzuvA", "avatar_url": "https://avatars.githubusercontent.com/u/113045180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinwelzl", "html_url": "https://github.com/zinwelzl", "followers_url": "https://api.github.com/users/zinwelzl/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
1
2024-02-14T10:15:25
2024-03-14T00:01:41
2024-03-14T00:01:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I run ollama locally and need a small model to help with system administration, network administration, etc. I tried a few small ones, but they were really bad.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2489/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1872
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1872/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1872/comments
https://api.github.com/repos/ollama/ollama/issues/1872/events
https://github.com/ollama/ollama/issues/1872
2,072,777,144
I_kwDOJ0Z1Ps57jBG4
1,872
Error when install on Ubuntu 22.04
{ "login": "dekogroup", "id": 126862835, "node_id": "U_kgDOB4_F8w", "avatar_url": "https://avatars.githubusercontent.com/u/126862835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dekogroup", "html_url": "https://github.com/dekogroup", "followers_url": "https://api.github.com/users/dekogr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
5
2024-01-09T16:55:31
2024-03-13T00:13:27
2024-03-13T00:13:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
curl https://ollama.ai/install.sh | sh % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 8354 0 8354 0 0 16163 0 --:--:-- --:--:-- --:--:-- 16189 >>> Downloading ollama... #############...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1872/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6461
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6461/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6461/comments
https://api.github.com/repos/ollama/ollama/issues/6461/events
https://github.com/ollama/ollama/issues/6461
2,480,507,753
I_kwDOJ0Z1Ps6T2Ytp
6,461
"/clear" command is not clearing history
{ "login": "devstefancho", "id": 61320923, "node_id": "MDQ6VXNlcjYxMzIwOTIz", "avatar_url": "https://avatars.githubusercontent.com/u/61320923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devstefancho", "html_url": "https://github.com/devstefancho", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-08-22T11:07:39
2024-08-22T17:00:16
2024-08-22T17:00:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? `Ctrl` + `l` clears the history, but the `/clear` command does not. https://github.com/user-attachments/assets/511ee922-9252-41d4-8b5f-ac324a75aaf1 ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.3.6
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6461/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3473
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3473/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3473/comments
https://api.github.com/repos/ollama/ollama/issues/3473/events
https://github.com/ollama/ollama/pull/3473
2,222,435,172
PR_kwDOJ0Z1Ps5rimQ_
3,473
Add BrainSoup to compatible clients list
{ "login": "Nurgo", "id": 11637957, "node_id": "MDQ6VXNlcjExNjM3OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/11637957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nurgo", "html_url": "https://github.com/Nurgo", "followers_url": "https://api.github.com/users/Nurgo/follow...
[]
closed
false
null
[]
null
1
2024-04-03T09:41:58
2024-05-06T20:42:16
2024-05-06T20:42:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3473", "html_url": "https://github.com/ollama/ollama/pull/3473", "diff_url": "https://github.com/ollama/ollama/pull/3473.diff", "patch_url": "https://github.com/ollama/ollama/pull/3473.patch", "merged_at": "2024-05-06T20:42:16" }
Hi there, BrainSoup is a native multi-LLM client for Windows with advanced features such as local document indexing, RAG, multi-modality, multi-agent automation, code interpreter, sandboxed file system and the ability for agents to interact with the local system via customizable events and tools. More information ca...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3473/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6431
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6431/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6431/comments
https://api.github.com/repos/ollama/ollama/issues/6431/events
https://github.com/ollama/ollama/issues/6431
2,474,397,214
I_kwDOJ0Z1Ps6TfE4e
6,431
GLM4 tools support
{ "login": "napa3um", "id": 665538, "node_id": "MDQ6VXNlcjY2NTUzOA==", "avatar_url": "https://avatars.githubusercontent.com/u/665538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/napa3um", "html_url": "https://github.com/napa3um", "followers_url": "https://api.github.com/users/napa3um/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-08-19T22:57:39
2024-08-19T22:57:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
GLM4 supports tools - https://github.com/THUDM/GLM-4/blob/main/finetune_demo/README_en.md How can the template in https://ollama.com/library/glm4 be fixed to make the ollama-tools mechanism work?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6431/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4823
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4823/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4823/comments
https://api.github.com/repos/ollama/ollama/issues/4823/events
https://github.com/ollama/ollama/issues/4823
2,334,769,831
I_kwDOJ0Z1Ps6LKcKn
4,823
I encountered this error when converting the Tongyi-Finance-14B-Chat-Int4-AWQ model
{ "login": "wangkai111111", "id": 74865581, "node_id": "MDQ6VXNlcjc0ODY1NTgx", "avatar_url": "https://avatars.githubusercontent.com/u/74865581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangkai111111", "html_url": "https://github.com/wangkai111111", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2024-06-05T02:27:12
2024-06-05T20:39:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? `(.venv) [root@bastion ollama]# python llm/llama.cpp/convert-hf-to-gguf.py ./model --outtype f16 --outfile converted.bin INFO:hf-to-gguf:Loading model: model INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only INFO:hf-to-gguf:Set model parameters INFO:hf-to-gguf:Set model t...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4823/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7973
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7973/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7973/comments
https://api.github.com/repos/ollama/ollama/issues/7973/events
https://github.com/ollama/ollama/pull/7973
2,723,621,546
PR_kwDOJ0Z1Ps6EW47n
7,973
Document that `--format` now supports passing JSON Schemas
{ "login": "joliss", "id": 524783, "node_id": "MDQ6VXNlcjUyNDc4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joliss", "html_url": "https://github.com/joliss", "followers_url": "https://api.github.com/users/joliss/follow...
[]
open
false
null
[]
null
0
2024-12-06T17:53:16
2024-12-12T23:50:09
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7973", "html_url": "https://github.com/ollama/ollama/pull/7973", "diff_url": "https://github.com/ollama/ollama/pull/7973.diff", "patch_url": "https://github.com/ollama/ollama/pull/7973.patch", "merged_at": null }
JSON Schema support was added in #7900 (a usage sketch follows below). -------- I removed `e.g.` because I don't believe it supports anything else, right? Let me know if that's wrong.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7973/timeline
null
null
true
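The PR above documents the flag; as a minimal sketch of the underlying behavior (assuming a local server on the default port and a pulled model named llama3.2, neither of which is stated in the PR), the request's `format` field can carry a JSON Schema object instead of the literal string "json":

```go
// Minimal sketch: ask /api/chat to constrain the reply to a JSON Schema.
// Server address and model name are assumptions, not taken from the PR.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	schema := map[string]any{
		"type": "object",
		"properties": map[string]any{
			"capital": map[string]any{"type": "string"},
		},
		"required": []string{"capital"},
	}
	body, _ := json.Marshal(map[string]any{
		"model":    "llama3.2",
		"messages": []map[string]string{{"role": "user", "content": "What is the capital of France?"}},
		"format":   schema, // a JSON Schema object instead of the literal "json"
		"stream":   false,
	})
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

The server should then constrain decoding so the reply's message content parses against the schema.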
https://api.github.com/repos/ollama/ollama/issues/5466
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5466/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5466/comments
https://api.github.com/repos/ollama/ollama/issues/5466/events
https://github.com/ollama/ollama/pull/5466
2,389,318,735
PR_kwDOJ0Z1Ps50XSIm
5,466
Fix clip model loading with unicode paths
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-07-03T19:38:27
2024-07-05T15:17:01
2024-07-05T15:16:58
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5466", "html_url": "https://github.com/ollama/ollama/pull/5466", "diff_url": "https://github.com/ollama/ollama/pull/5466.diff", "patch_url": "https://github.com/ollama/ollama/pull/5466.patch", "merged_at": "2024-07-05T15:16:58" }
On Windows, clip models would fail to load if the model directory contained Unicode characters. This fixes the file-name handling in clip.cpp to support UTF-16 on Windows (a conversion sketch follows below). Fixes #5329 #4365
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5466/timeline
null
null
true
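The actual fix is in clip.cpp; the sketch below only illustrates the conversion step in Go for readability: decode the UTF-8 path and re-encode it as NUL-terminated UTF-16, the form wide-character Win32 file APIs (e.g. `_wfopen`) expect, rather than passing raw bytes to the narrow ANSI APIs, which mangle non-ASCII names.

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// utf8ToUTF16 converts a Go (UTF-8) string into NUL-terminated UTF-16
// code units, the representation wide Win32 file APIs expect.
func utf8ToUTF16(path string) []uint16 {
	u := utf16.Encode([]rune(path))
	return append(u, 0)
}

func main() {
	wide := utf8ToUTF16(`C:\models\模型\clip.gguf`)
	fmt.Printf("%d UTF-16 code units (including the NUL terminator)\n", len(wide))
}
```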
https://api.github.com/repos/ollama/ollama/issues/3581
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3581/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3581/comments
https://api.github.com/repos/ollama/ollama/issues/3581/events
https://github.com/ollama/ollama/issues/3581
2,236,296,576
I_kwDOJ0Z1Ps6FSy2A
3,581
macOS Ollama not binding to 0.0.0.0
{ "login": "kellerkind84", "id": 2842721, "node_id": "MDQ6VXNlcjI4NDI3MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/2842721?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kellerkind84", "html_url": "https://github.com/kellerkind84", "followers_url": "https://api.github.com...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
32
2024-04-10T19:37:49
2025-01-22T12:51:14
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I set OLLAMA_HOST to 0.0.0.0, I cannot access Ollama via the IP, but I can still access it via localhost (a conceptual sketch follows below). ### What did you expect to see? I expect it to be available under <myIP>:11434 ### Steps to reproduce _No response_ ### Are there any recent changes that introduced the issue?...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3581/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3581/timeline
null
reopened
false
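The symptom reads like the server never re-binding: a listener on 127.0.0.1 answers only via localhost, while one on 0.0.0.0 answers on every interface. A conceptual Go sketch (the real server's address parsing differs, and the default below is an assumption for illustration):

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	addr := os.Getenv("OLLAMA_HOST")
	if addr == "" {
		addr = "127.0.0.1:11434" // loopback only: LAN clients are refused
	}
	// Note: the real server also fills in a missing port; this sketch
	// expects a full host:port value.
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr()) // 0.0.0.0:11434 reaches all interfaces
}
```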
https://api.github.com/repos/ollama/ollama/issues/2049
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2049/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2049/comments
https://api.github.com/repos/ollama/ollama/issues/2049/events
https://github.com/ollama/ollama/issues/2049
2,088,341,295
I_kwDOJ0Z1Ps58eY8v
2,049
Embedding API could return empty embedding while using completion API from LiteLLM
{ "login": "James4Ever0", "id": 103997068, "node_id": "U_kgDOBjLejA", "avatar_url": "https://avatars.githubusercontent.com/u/103997068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/James4Ever0", "html_url": "https://github.com/James4Ever0", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677485533, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgJX3Q...
open
false
null
[]
null
0
2024-01-18T13:54:52
2024-11-06T19:02:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
To reproduce: Launch a LiteLLM service: ```bash litellm --model ollama/openhermes2.5-mistral --drop_params ``` Call the `/completion` API continuously; meanwhile, call the embedding API via Langchain (a reproduction sketch follows below). With luck, during the very short gap between each `/completion` call you get empty em...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2049/timeline
null
null
false
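A hypothetical reproduction harness for the race described above (none of this is from the report: the proxy address, the OpenAI-style paths, and the payloads are assumptions to adjust to your LiteLLM setup):

```go
// Hammer the completion endpoint in the background while polling the
// embedding endpoint, and flag any response whose embedding list comes
// back empty.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const base = "http://localhost:4000" // assumption: LiteLLM proxy address

func post(path string, payload any) map[string]any {
	b, _ := json.Marshal(payload)
	resp, err := http.Post(base+path, "application/json", bytes.NewReader(b))
	if err != nil {
		return nil
	}
	defer resp.Body.Close()
	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	return out
}

func main() {
	go func() { // continuous completions in the background
		for {
			post("/v1/completions", map[string]any{"model": "ollama/openhermes2.5-mistral", "prompt": "hi"})
		}
	}()
	for i := 0; i < 100; i++ { // embeddings in the gaps between completions
		out := post("/v1/embeddings", map[string]any{"model": "ollama/openhermes2.5-mistral", "input": "hello"})
		if data, ok := out["data"].([]any); !ok || len(data) == 0 {
			fmt.Println("empty embedding at attempt", i)
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```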
https://api.github.com/repos/ollama/ollama/issues/946
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/946/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/946/comments
https://api.github.com/repos/ollama/ollama/issues/946/events
https://github.com/ollama/ollama/issues/946
1,967,119,694
I_kwDOJ0Z1Ps51P91O
946
ollama show --modelfile gives incorrect FROM when multiple tags of base model are downloaded.
{ "login": "easp", "id": 414705, "node_id": "MDQ6VXNlcjQxNDcwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/easp", "html_url": "https://github.com/easp", "followers_url": "https://api.github.com/users/easp/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
closed
false
null
[]
null
6
2023-10-29T19:13:09
2023-12-04T18:32:40
2023-12-04T18:32:40
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've pulled two tags for codellama and created two new models, one based on each. ``` % ollama list NAME ID SIZE MODIFIED [...] codellama:13b 9f438cb9cd58 7.4 GB 27 hours ago codellama:13b-16k e86141f13814 7.4 GB 45 hour...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/946/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2743
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2743/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2743/comments
https://api.github.com/repos/ollama/ollama/issues/2743/events
https://github.com/ollama/ollama/issues/2743
2,152,745,980
I_kwDOJ0Z1Ps6AUEv8
2,743
What is the difference between "gemma-instruct", "gemma-text" and "gemma"? The same applies to other models.
{ "login": "XinyueZ", "id": 7869833, "node_id": "MDQ6VXNlcjc4Njk4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7869833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XinyueZ", "html_url": "https://github.com/XinyueZ", "followers_url": "https://api.github.com/users/XinyueZ/...
[]
closed
false
null
[]
null
1
2024-02-25T12:39:09
2024-02-26T11:01:38
2024-02-26T11:01:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "XinyueZ", "id": 7869833, "node_id": "MDQ6VXNlcjc4Njk4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7869833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XinyueZ", "html_url": "https://github.com/XinyueZ", "followers_url": "https://api.github.com/users/XinyueZ/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2743/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2743/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5166
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5166/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5166/comments
https://api.github.com/repos/ollama/ollama/issues/5166/events
https://github.com/ollama/ollama/issues/5166
2,364,002,946
I_kwDOJ0Z1Ps6M59KC
5,166
In Docker GPU containers, ollama still uses the CPU
{ "login": "Zxyy-mo", "id": 48347974, "node_id": "MDQ6VXNlcjQ4MzQ3OTc0", "avatar_url": "https://avatars.githubusercontent.com/u/48347974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zxyy-mo", "html_url": "https://github.com/Zxyy-mo", "followers_url": "https://api.github.com/users/Zxyy-m...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
6
2024-06-20T09:43:38
2024-06-21T15:38:53
2024-06-21T15:38:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## desc I deployed following the official Docker GPU container tutorial and successfully got the graphics card information using nvidia-smi inside the Docker container. I'm using an NVIDIA discrete graphics card (3090). ```info # ollama ps NAME ID SIZE PROCESSOR ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5166/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8122
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8122/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8122/comments
https://api.github.com/repos/ollama/ollama/issues/8122/events
https://github.com/ollama/ollama/pull/8122
2,743,390,539
PR_kwDOJ0Z1Ps6FaUC2
8,122
build: streamline build
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
open
false
null
[]
null
0
2024-12-16T21:00:53
2024-12-16T21:14:49
null
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8122", "html_url": "https://github.com/ollama/ollama/pull/8122", "diff_url": "https://github.com/ollama/ollama/pull/8122.diff", "patch_url": "https://github.com/ollama/ollama/pull/8122.patch", "merged_at": null }
This wiring was intended to make developer builds faster by disabling flash attention, but the added complexity and friction on updates make it less useful
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8122/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1106
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1106/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1106/comments
https://api.github.com/repos/ollama/ollama/issues/1106/events
https://github.com/ollama/ollama/pull/1106
1,989,866,478
PR_kwDOJ0Z1Ps5fQfz5
1,106
Add Dart library to README.md
{ "login": "breitburg", "id": 25728414, "node_id": "MDQ6VXNlcjI1NzI4NDE0", "avatar_url": "https://avatars.githubusercontent.com/u/25728414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/breitburg", "html_url": "https://github.com/breitburg", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2023-11-13T04:25:49
2023-11-14T04:08:36
2023-11-13T19:50:42
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1106", "html_url": "https://github.com/ollama/ollama/pull/1106", "diff_url": "https://github.com/ollama/ollama/pull/1106.diff", "patch_url": "https://github.com/ollama/ollama/pull/1106.patch", "merged_at": "2023-11-13T19:50:42" }
Good afternoon! I have completed the first version of the Ollama library for Dart, making it possible to integrate Ollama into Flutter applications. I thought it would be nice to mention it in the readme file. ![](https://media.giphy.com/media/3oNMQtqpnse0dbFe06/giphy.gif)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1106/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6317
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6317/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6317/comments
https://api.github.com/repos/ollama/ollama/issues/6317/events
https://github.com/ollama/ollama/issues/6317
2,459,986,027
I_kwDOJ0Z1Ps6SoGhr
6,317
Feature request: Tools support for Qwen2
{ "login": "trinhkiet0105", "id": 76981747, "node_id": "MDQ6VXNlcjc2OTgxNzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/76981747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trinhkiet0105", "html_url": "https://github.com/trinhkiet0105", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-08-12T03:54:30
2024-09-02T23:49:18
2024-09-02T23:49:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**Why?** I just found out that Qwen2 has tool support. However, ollama currently does not have tool support for qwen2 models. There is a section of the [Qwen2 GitHub talking about ollama and tool use](https://github.com/QwenLM/Qwen2?tab=readme-ov-file#-run-locally ). This seems to be a problem of ollama in older ver...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6317/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7517
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7517/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7517/comments
https://api.github.com/repos/ollama/ollama/issues/7517/events
https://github.com/ollama/ollama/pull/7517
2,636,321,952
PR_kwDOJ0Z1Ps6A-N6i
7,517
Doc updates for supporting Llama3.2
{ "login": "frances720", "id": 8753634, "node_id": "MDQ6VXNlcjg3NTM2MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8753634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frances720", "html_url": "https://github.com/frances720", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-11-05T19:44:45
2024-11-15T23:41:09
2024-11-11T03:04:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7517", "html_url": "https://github.com/ollama/ollama/pull/7517", "diff_url": "https://github.com/ollama/ollama/pull/7517.diff", "patch_url": "https://github.com/ollama/ollama/pull/7517.patch", "merged_at": "2024-11-11T03:04:24" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7517/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/757
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/757/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/757/comments
https://api.github.com/repos/ollama/ollama/issues/757/events
https://github.com/ollama/ollama/pull/757
1,938,405,377
PR_kwDOJ0Z1Ps5cjJ1f
757
cleanup format time
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-10-11T18:06:06
2023-10-11T18:12:30
2023-10-11T18:12:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/757", "html_url": "https://github.com/ollama/ollama/pull/757", "diff_url": "https://github.com/ollama/ollama/pull/757.diff", "patch_url": "https://github.com/ollama/ollama/pull/757.patch", "merged_at": "2023-10-11T18:12:29" }
only `HumanTime` is actually being used
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/757/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5870
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5870/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5870/comments
https://api.github.com/repos/ollama/ollama/issues/5870/events
https://github.com/ollama/ollama/issues/5870
2,424,725,170
I_kwDOJ0Z1Ps6Qhl6y
5,870
The embeddings API is not working properly.
{ "login": "xldistance", "id": 29418474, "node_id": "MDQ6VXNlcjI5NDE4NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/29418474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xldistance", "html_url": "https://github.com/xldistance", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
11
2024-07-23T09:33:21
2025-01-04T10:50:19
2024-07-30T17:55:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I use the bge-m3 model in graphrag with the following parameters ``` embeddings: ## parallelization: override the global parallelization settings for embeddings async_mode: asyncio llm: api_key: type: openai_embedding # or azure_openai_embedding model: chatfire/bge-m...
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5870/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5870/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5291
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5291/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5291/comments
https://api.github.com/repos/ollama/ollama/issues/5291/events
https://github.com/ollama/ollama/issues/5291
2,374,355,746
I_kwDOJ0Z1Ps6Nhcsi
5,291
Please add cogvlm2 to the library
{ "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2024-06-26T05:57:32
2024-06-26T12:12:42
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B Thanks. ollama 0.1.43 errors with "format not yet supported!" I tried many times and it failed with that error every time.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5291/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3544
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3544/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3544/comments
https://api.github.com/repos/ollama/ollama/issues/3544/events
https://github.com/ollama/ollama/issues/3544
2,232,444,606
I_kwDOJ0Z1Ps6FEGa-
3,544
ollama 0.1.31 Segmentation fault (core dumped)
{ "login": "zhqfdn", "id": 25156863, "node_id": "MDQ6VXNlcjI1MTU2ODYz", "avatar_url": "https://avatars.githubusercontent.com/u/25156863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhqfdn", "html_url": "https://github.com/zhqfdn", "followers_url": "https://api.github.com/users/zhqfdn/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-04-09T01:36:09
2024-05-01T16:43:13
2024-05-01T16:43:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? [root@localhost ~]# cat /etc/redhat-release AlmaLinux release 9.3 (Shamrock Pampas Cat) [root@localhost ~]# ollama -v Warning: could not connect to a running Ollama instance Warning: client version is 0.1.30 [root@localhost ~]# ./ollama -v Segmentation fault (core dumped) [root@localhost...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3544/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5712
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5712/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5712/comments
https://api.github.com/repos/ollama/ollama/issues/5712/events
https://github.com/ollama/ollama/pull/5712
2,409,800,448
PR_kwDOJ0Z1Ps51cb6s
5,712
Add Windows arm64 support to official builds
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
14
2024-07-15T23:18:15
2024-09-20T20:09:41
2024-09-20T20:09:38
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5712", "html_url": "https://github.com/ollama/ollama/pull/5712", "diff_url": "https://github.com/ollama/ollama/pull/5712.diff", "patch_url": "https://github.com/ollama/ollama/pull/5712.patch", "merged_at": "2024-09-20T20:09:38" }
Wire up CI and build rigging to generate a unified Windows installer with x64 and arm64 payloads. At install time, the correct binaries will be installed for the platform. I was unable to find a combination of hand-picked MSVC redist DLLs that yielded a working setup on a pristine Windows 11 install, but r...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5712/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5712/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2120
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2120/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2120/comments
https://api.github.com/repos/ollama/ollama/issues/2120/events
https://github.com/ollama/ollama/issues/2120
2,092,519,133
I_kwDOJ0Z1Ps58uU7d
2,120
How to install libnvidia-ml.so?
{ "login": "silverwind63", "id": 104142549, "node_id": "U_kgDOBjUW1Q", "avatar_url": "https://avatars.githubusercontent.com/u/104142549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverwind63", "html_url": "https://github.com/silverwind63", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
4
2024-01-21T10:16:16
2024-01-27T11:25:08
2024-01-26T21:06:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi guys! I have been using ollama with ollama webui this month. However, it outputs ``` WARNING: You should always run with libnvidia-ml.so that is installed with your NVIDIA Display Driver. By default it's installed in /usr/lib and /usr/lib64. libnvidia-ml.so in GDK package is a stub library that is attached only ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2120/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2639
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2639/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2639/comments
https://api.github.com/repos/ollama/ollama/issues/2639/events
https://github.com/ollama/ollama/issues/2639
2,147,069,563
I_kwDOJ0Z1Ps5_-a57
2,639
History via up arrow and down arrow not working on Windows using `ollama run`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
null
[]
null
0
2024-02-21T15:39:35
2024-03-26T22:21:57
2024-03-26T22:21:57
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2639/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/612
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/612/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/612/comments
https://api.github.com/repos/ollama/ollama/issues/612/events
https://github.com/ollama/ollama/pull/612
1,914,484,120
PR_kwDOJ0Z1Ps5bSJCL
612
prune empty directories
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-27T00:40:27
2023-09-29T18:23:41
2023-09-29T18:23:40
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/612", "html_url": "https://github.com/ollama/ollama/pull/612", "diff_url": "https://github.com/ollama/ollama/pull/612.diff", "patch_url": "https://github.com/ollama/ollama/pull/612.patch", "merged_at": "2023-09-29T18:23:40" }
Resolves #270
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/612/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1033
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1033/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1033/comments
https://api.github.com/repos/ollama/ollama/issues/1033/events
https://github.com/ollama/ollama/issues/1033
1,981,678,318
I_kwDOJ0Z1Ps52HgLu
1,033
Are these system specs good enough for any models?
{ "login": "simoovara", "id": 100516318, "node_id": "U_kgDOBf3B3g", "avatar_url": "https://avatars.githubusercontent.com/u/100516318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simoovara", "html_url": "https://github.com/simoovara", "followers_url": "https://api.github.com/users/simoov...
[]
closed
false
null
[]
null
6
2023-11-07T15:50:36
2023-11-07T21:32:31
2023-11-07T21:32:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Just a question: I have an old laptop that I turned into a server with Ubuntu LTS. It has an AMD E1-6015 APU and 8 GB of RAM. I would like to know if that's enough to run any of these models, thank you!
{ "login": "simoovara", "id": 100516318, "node_id": "U_kgDOBf3B3g", "avatar_url": "https://avatars.githubusercontent.com/u/100516318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simoovara", "html_url": "https://github.com/simoovara", "followers_url": "https://api.github.com/users/simoov...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1033/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5907
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5907/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5907/comments
https://api.github.com/repos/ollama/ollama/issues/5907/events
https://github.com/ollama/ollama/issues/5907
2,427,317,383
I_kwDOJ0Z1Ps6QreyH
5,907
Support token embeddings for `v1/embeddings`
{ "login": "WoJiaoFuXiaoYun", "id": 30924105, "node_id": "MDQ6VXNlcjMwOTI0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/30924105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WoJiaoFuXiaoYun", "html_url": "https://github.com/WoJiaoFuXiaoYun", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
3
2024-07-24T11:17:36
2024-11-06T01:00:33
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When the input is pre-encoded with `tiktoken`, the interface is no longer compatible (a sketch follows below) ``` tiktoken.get_encoding("cl100k_base").encode(text) ``` ```json { "input": [30624,99849,64479,51392,31809,29207,233,45829], "model": "nomic-embed-text" } ``` ``` { "error": { "messag...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5907/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5907/timeline
null
null
false
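A minimal sketch of the mismatch described above (server address and model name are assumptions): Ollama's `/api/embed` takes text in `input`, so a tiktoken-style array of token IDs is rejected, which matches the error in the report:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// embed posts an input of any shape and prints the status plus any
// "error" field the server returns (nil when the call succeeds).
func embed(input any) {
	b, _ := json.Marshal(map[string]any{"model": "nomic-embed-text", "input": input})
	resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(b))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(resp.Status, out["error"])
}

func main() {
	embed([]int{30624, 99849, 64479}) // pre-encoded token IDs: rejected
	embed("the original text")        // plain text: accepted
}
```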
https://api.github.com/repos/ollama/ollama/issues/5107
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5107/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5107/comments
https://api.github.com/repos/ollama/ollama/issues/5107/events
https://github.com/ollama/ollama/issues/5107
2,358,707,183
I_kwDOJ0Z1Ps6MlwPv
5,107
ollama model authorization
{ "login": "yawzhe", "id": 127652671, "node_id": "U_kgDOB5vTPw", "avatar_url": "https://avatars.githubusercontent.com/u/127652671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yawzhe", "html_url": "https://github.com/yawzhe", "followers_url": "https://api.github.com/users/yawzhe/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-18T02:16:01
2024-06-18T11:28:38
2024-06-18T11:28:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'd like to ask: 1. How can ollama be configured to pass a custom KEY parameter, with a different key defined per model? 2. Does ollama support model authorization, model encryption, and the like? ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version latest
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5107/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4621
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4621/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4621/comments
https://api.github.com/repos/ollama/ollama/issues/4621/events
https://github.com/ollama/ollama/issues/4621
2,316,267,571
I_kwDOJ0Z1Ps6KD3Az
4,621
phi3-medium-128k wrong number of tensors
{ "login": "EthanGraber", "id": 18070053, "node_id": "MDQ6VXNlcjE4MDcwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/18070053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanGraber", "html_url": "https://github.com/EthanGraber", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-24T21:11:10
2024-05-24T22:13:06
2024-05-24T22:13:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm getting the following error when testing the new 128k versions of phi3-medium: ```sh $ ollama run phi3:14b-medium-128k-instruct-q4_0 Error: llama runner process has terminated: signal: abort trap error:done_getting_tensors: wrong number of tensors; expected 245, got 243 ``` ```sh $...
{ "login": "EthanGraber", "id": 18070053, "node_id": "MDQ6VXNlcjE4MDcwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/18070053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanGraber", "html_url": "https://github.com/EthanGraber", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4621/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/4621/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4226
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4226/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4226/comments
https://api.github.com/repos/ollama/ollama/issues/4226/events
https://github.com/ollama/ollama/issues/4226
2,282,922,640
I_kwDOJ0Z1Ps6IEqKQ
4,226
run llama3-70B-q8_0 error
{ "login": "leoHostProject", "id": 87935281, "node_id": "MDQ6VXNlcjg3OTM1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/87935281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leoHostProject", "html_url": "https://github.com/leoHostProject", "followers_url": "https://api.gi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
null
[]
null
1
2024-05-07T10:44:08
2024-07-25T18:53:03
2024-07-25T18:53:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? api call error message: {"error":{"message":"timed out waiting for llama runner to start: CUDA error: uncorrectable ECC error encountered\n current device: 0, in function ggml_cuda_compute_forward at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:2300\n err\nGGML_ASSERT:/go/src/gi...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4226/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5750
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5750/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5750/comments
https://api.github.com/repos/ollama/ollama/issues/5750/events
https://github.com/ollama/ollama/pull/5750
2,414,156,448
PR_kwDOJ0Z1Ps51q_iz
5,750
stub response
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-07-17T17:28:46
2024-07-17T17:39:25
2024-07-17T17:39:22
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5750", "html_url": "https://github.com/ollama/ollama/pull/5750", "diff_url": "https://github.com/ollama/ollama/pull/5750.diff", "patch_url": "https://github.com/ollama/ollama/pull/5750.patch", "merged_at": "2024-07-17T17:39:22" }
For compatibility, `{{ .Response }}` cannot be inside any template control-flow structures. Therefore, any template execution should set an empty Response if one should not be rendered; otherwise the output will contain `<no value>` in place of `{{ .Response }}` (a demo follows below).
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5750/timeline
null
null
true
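A self-contained demo of the Go text/template behavior the PR works around; the prompt wrapper below is made up for illustration:

```go
// When a template references {{ .Response }} but the execution data has
// no Response value, text/template renders the literal "<no value>".
// Stubbing an empty Response suppresses that.
package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("t").Parse("[INST] {{ .Prompt }} [/INST]{{ .Response }}"))

	// No Response set: renders "[INST] hi [/INST]<no value>"
	tmpl.Execute(os.Stdout, map[string]any{"Prompt": "hi"})
	os.Stdout.WriteString("\n")

	// Stubbed empty Response: renders "[INST] hi [/INST]" with nothing after it
	tmpl.Execute(os.Stdout, map[string]any{"Prompt": "hi", "Response": ""})
	os.Stdout.WriteString("\n")
}
```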
https://api.github.com/repos/ollama/ollama/issues/4183
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4183/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4183/comments
https://api.github.com/repos/ollama/ollama/issues/4183/events
https://github.com/ollama/ollama/issues/4183
2,279,709,927
I_kwDOJ0Z1Ps6H4Zzn
4,183
pull orca2:7b-fp16 Error: EOF
{ "login": "MarkWard0110", "id": 90335263, "node_id": "MDQ6VXNlcjkwMzM1MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkWard0110", "html_url": "https://github.com/MarkWard0110", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-05-05T20:14:27
2024-05-05T20:17:17
2024-05-05T20:17:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? `ollama pull orca2:7b-fp16` results in `Error: EOF` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.33
{ "login": "MarkWard0110", "id": 90335263, "node_id": "MDQ6VXNlcjkwMzM1MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkWard0110", "html_url": "https://github.com/MarkWard0110", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4183/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3397
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3397/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3397/comments
https://api.github.com/repos/ollama/ollama/issues/3397/events
https://github.com/ollama/ollama/pull/3397
2,214,232,629
PR_kwDOJ0Z1Ps5rG7Q3
3,397
Parallel requests
{ "login": "0x77dev", "id": 46429701, "node_id": "MDQ6VXNlcjQ2NDI5NzAx", "avatar_url": "https://avatars.githubusercontent.com/u/46429701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0x77dev", "html_url": "https://github.com/0x77dev", "followers_url": "https://api.github.com/users/0x77de...
[]
closed
false
null
[]
null
2
2024-03-28T21:54:46
2024-03-30T22:41:18
2024-03-30T22:41:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3397", "html_url": "https://github.com/ollama/ollama/pull/3397", "diff_url": "https://github.com/ollama/ollama/pull/3397.diff", "patch_url": "https://github.com/ollama/ollama/pull/3397.patch", "merged_at": null }
Stage: PoC. Related issue: #358 - loaded.mu.{Lock,Unlock}() is not implemented correctly in this change (a locking sketch follows below) - sparams.n_parallel is hardcoded to 4
{ "login": "0x77dev", "id": 46429701, "node_id": "MDQ6VXNlcjQ2NDI5NzAx", "avatar_url": "https://avatars.githubusercontent.com/u/46429701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0x77dev", "html_url": "https://github.com/0x77dev", "followers_url": "https://api.github.com/users/0x77de...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3397/timeline
null
null
true
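On the first bullet, a sketch of the locking discipline being asked for; the names are hypothetical stand-ins for the scheduler's state, not the actual change:

```go
// Every read or swap of the shared loaded-model slot happens under the
// same mutex, with defer so each return path unlocks.
package main

import "sync"

type runner struct{ model string }

type loaded struct {
	mu     sync.Mutex
	runner *runner
}

func (l *loaded) ensure(model string) *runner {
	l.mu.Lock()
	defer l.mu.Unlock() // unlock on every return path
	if l.runner == nil || l.runner.model != model {
		l.runner = &runner{model: model} // stand-in for the real (re)load
	}
	return l.runner
}

func main() {
	var l loaded
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // concurrent requests contending for the slot
		wg.Add(1)
		go func() { defer wg.Done(); l.ensure("llama2") }()
	}
	wg.Wait()
}
```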
https://api.github.com/repos/ollama/ollama/issues/1158
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1158/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1158/comments
https://api.github.com/repos/ollama/ollama/issues/1158/events
https://github.com/ollama/ollama/issues/1158
1,997,987,411
I_kwDOJ0Z1Ps53Ft5T
1,158
max retries exceeded: unexpected EOF
{ "login": "priamai", "id": 57333254, "node_id": "MDQ6VXNlcjU3MzMzMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/57333254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priamai", "html_url": "https://github.com/priamai", "followers_url": "https://api.github.com/users/priama...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
15
2023-11-16T23:47:19
2025-01-28T16:11:44
2024-03-11T18:25:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi there, I am not sure if this is related to your file service, but I am getting these connection drop-outs very often. ![Screenshot from 2023-11-16 23-45-54](https://github.com/jmorganca/ollama/assets/57333254/d530f24e-af82-49d8-9435-0653922d1eec) Maybe there is a way to throttle requests?
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1158/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/216
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/216/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/216/comments
https://api.github.com/repos/ollama/ollama/issues/216/events
https://github.com/ollama/ollama/issues/216
1,822,180,868
I_kwDOJ0Z1Ps5snEYE
216
Something might still be wrong with K-Quant
{ "login": "nkoehring", "id": 246402, "node_id": "MDQ6VXNlcjI0NjQwMg==", "avatar_url": "https://avatars.githubusercontent.com/u/246402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nkoehring", "html_url": "https://github.com/nkoehring", "followers_url": "https://api.github.com/users/nkoe...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2023-07-26T11:17:46
2023-08-02T19:03:27
2023-08-02T19:03:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I run a 30B model (in this case upstage-llama-30b-instruct-2048.ggmlv3.q5_K_M.bin), the debug output in ollama reports a 13B model size: ![Screenshot from 2023-07-26 13-07-51](https://github.com/jmorganca/ollama/assets/246402/36bb44f1-a534-44ae-94bb-3e87d7ce5a74) when running the same model with llama.cpp i...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/216/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/4880
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4880/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4880/comments
https://api.github.com/repos/ollama/ollama/issues/4880/events
https://github.com/ollama/ollama/issues/4880
2,339,156,469
I_kwDOJ0Z1Ps6LbLH1
4,880
Extend ollama show command
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[ { "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.git...
null
0
2024-06-06T21:07:21
2024-06-26T17:31:00
2024-06-26T17:31:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In reference to #3570
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4880/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7355
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7355/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7355/comments
https://api.github.com/repos/ollama/ollama/issues/7355/events
https://github.com/ollama/ollama/issues/7355
2,613,814,322
I_kwDOJ0Z1Ps6by6Qy
7,355
Released binaries have High severity CVEs due to Go version 1.22.5
{ "login": "pivotal-marcela-campo", "id": 20945140, "node_id": "MDQ6VXNlcjIwOTQ1MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/20945140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pivotal-marcela-campo", "html_url": "https://github.com/pivotal-marcela-campo", "followers_...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-10-25T11:17:12
2024-10-27T00:03:38
2024-10-27T00:03:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Scanning the Linux binary with `grype` yields the following report ![Screenshot 2024-10-25 at 11 49 39](https://github.com/user-attachments/assets/7c4fe7af-13d4-4ddc-9339-2bef323691a8) Upgrading to Go 1.22.7+ for building would fix this issue: https://github.com/ollama/ollama/blob/3085c47bea5...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7355/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7355/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5004
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5004/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5004/comments
https://api.github.com/repos/ollama/ollama/issues/5004/events
https://github.com/ollama/ollama/pull/5004
2,349,490,461
PR_kwDOJ0Z1Ps5yRrGr
5,004
fix: multiple templates when creating from model
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-06-12T19:00:13
2024-06-12T21:39:29
2024-06-12T21:39:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5004", "html_url": "https://github.com/ollama/ollama/pull/5004", "diff_url": "https://github.com/ollama/ollama/pull/5004.diff", "patch_url": "https://github.com/ollama/ollama/pull/5004.patch", "merged_at": "2024-06-12T21:39:29" }
Multiple templates may appear in a model if it is created from another model that 1) has an autodetected template and 2) defines a custom template. This fixes the bug by not detecting the chat template when inheriting from another model.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5004/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6676
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6676/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6676/comments
https://api.github.com/repos/ollama/ollama/issues/6676/events
https://github.com/ollama/ollama/issues/6676
2,510,163,293
I_kwDOJ0Z1Ps6Vng1d
6,676
On ollama.com, the page for centering a new profile picture renders out of bounds on Android Chrome Canary
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw...
open
false
{ "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyev...
[ { "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.git...
null
0
2024-09-06T10:55:18
2024-09-10T21:07:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When uploading a profile picture, a page appears for moving the picture into a round crop area, but the canvas doesn't fit into the display on Android Chrome Canary. ![IMG_20240906_125448_752](https://github.com/user-attachments/assets/56ce834d-575a-4570-9052-3cf683bb2b19) ### OS _No response_ ### GPU _No response_ ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6676/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4902
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4902/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4902/comments
https://api.github.com/repos/ollama/ollama/issues/4902/events
https://github.com/ollama/ollama/issues/4902
2,339,983,408
I_kwDOJ0Z1Ps6LeVAw
4,902
Performance issue with CPU-only inference starting in 0.1.39 through the latest version to date.
{ "login": "raymond-infinitecode", "id": 4714784, "node_id": "MDQ6VXNlcjQ3MTQ3ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4714784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raymond-infinitecode", "html_url": "https://github.com/raymond-infinitecode", "followers_url":...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2024-06-07T09:17:48
2024-07-03T23:34:02
2024-07-03T23:34:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I was previously running Ollama on a 32-processor Intel Xeon machine (CPU only) and got a high token generation rate using version 0.1.38. However, once I migrated to the latest Ollama version 0.1.41, I found that inference speed for even a model like phi3 on pure CPU slowed to a crawl. I retest the v...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4902/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4902/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6115
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6115/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6115/comments
https://api.github.com/repos/ollama/ollama/issues/6115/events
https://github.com/ollama/ollama/pull/6115
2,441,859,247
PR_kwDOJ0Z1Ps53Fqs2
6,115
Fix context in /api/generate grows too much (#5980).
{ "login": "slouffka", "id": 8129, "node_id": "MDQ6VXNlcjgxMjk=", "avatar_url": "https://avatars.githubusercontent.com/u/8129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slouffka", "html_url": "https://github.com/slouffka", "followers_url": "https://api.github.com/users/slouffka/follow...
[]
closed
false
null
[]
null
6
2024-08-01T08:47:37
2024-08-01T22:14:00
2024-08-01T22:13:59
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6115", "html_url": "https://github.com/ollama/ollama/pull/6115", "diff_url": "https://github.com/ollama/ollama/pull/6115.diff", "patch_url": "https://github.com/ollama/ollama/pull/6115.patch", "merged_at": "2024-08-01T22:13:59" }
This PR fixes [Context in /api/generate response grows too big. #5980](https://github.com/ollama/ollama/issues/5980)
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6115/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6094
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6094/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6094/comments
https://api.github.com/repos/ollama/ollama/issues/6094/events
https://github.com/ollama/ollama/issues/6094
2,439,598,593
I_kwDOJ0Z1Ps6RaVIB
6,094
"embedding generation failed: do embedding request: Post \"http://127.0.0.1:33967/embedding\": EOF"
{ "login": "yeexiangzhen1001", "id": 70881071, "node_id": "MDQ6VXNlcjcwODgxMDcx", "avatar_url": "https://avatars.githubusercontent.com/u/70881071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeexiangzhen1001", "html_url": "https://github.com/yeexiangzhen1001", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
19
2024-07-31T09:39:08
2025-01-10T08:14:20
2024-09-02T23:36:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? 2024/07/31 09:18:15 routes.go:1099: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.8547758...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6094/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6094/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6743
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6743/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6743/comments
https://api.github.com/repos/ollama/ollama/issues/6743/events
https://github.com/ollama/ollama/pull/6743
2,518,666,601
PR_kwDOJ0Z1Ps57GpZP
6,743
Fixed no redirect URL scenario when downloading blobs
{ "login": "JingWoo", "id": 21989093, "node_id": "MDQ6VXNlcjIxOTg5MDkz", "avatar_url": "https://avatars.githubusercontent.com/u/21989093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JingWoo", "html_url": "https://github.com/JingWoo", "followers_url": "https://api.github.com/users/JingWo...
[]
open
false
null
[]
null
1
2024-09-11T06:34:01
2024-09-30T09:08:37
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6743", "html_url": "https://github.com/ollama/ollama/pull/6743", "diff_url": "https://github.com/ollama/ollama/pull/6743.diff", "patch_url": "https://github.com/ollama/ollama/pull/6743.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6743/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1312
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1312/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1312/comments
https://api.github.com/repos/ollama/ollama/issues/1312/events
https://github.com/ollama/ollama/issues/1312
2,016,449,283
I_kwDOJ0Z1Ps54MJMD
1,312
trouble with deepseek-coder
{ "login": "niknoproblems", "id": 3484515, "node_id": "MDQ6VXNlcjM0ODQ1MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3484515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/niknoproblems", "html_url": "https://github.com/niknoproblems", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
4
2023-11-29T12:10:21
2024-03-12T01:15:06
2024-03-12T01:15:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm having trouble getting this model to run on a Mac M1 with 16GB RAM: ollama run deepseek-coder:6.7b-base-q8_0 but this model works without any trouble: ollama run neural-chat:7b-v3.1-q8_0 even though it has more weights and a bigger file size.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1312/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1312/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2529
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2529/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2529/comments
https://api.github.com/repos/ollama/ollama/issues/2529/events
https://github.com/ollama/ollama/issues/2529
2,137,653,184
I_kwDOJ0Z1Ps5_af_A
2,529
Ollama Windows is much slower at inference than Ollama on WSL2
{ "login": "devinprater", "id": 15256014, "node_id": "MDQ6VXNlcjE1MjU2MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/15256014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devinprater", "html_url": "https://github.com/devinprater", "followers_url": "https://api.github.com/...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
null
[]
null
6
2024-02-16T00:18:17
2024-02-21T09:25:21
2024-02-19T21:23:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
CPU: AMD 5500U with Radeon integrated GPU. Ollama runs in CPU mode on both WSL2 and Windows. Attached are the logs from Windows and Linux. [server.log](https://github.com/ollama/ollama/files/14303692/server.log) [ollama-log-linux.log](https://github.com/ollama/ollama/files/14303696/ollama-log-linux.log)
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2529/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2529/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3068
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3068/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3068/comments
https://api.github.com/repos/ollama/ollama/issues/3068/events
https://github.com/ollama/ollama/pull/3068
2,180,371,586
PR_kwDOJ0Z1Ps5pT6Cw
3,068
Use stdin for term discovery on windows
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-03-11T22:28:55
2024-03-14T18:55:22
2024-03-14T18:55:19
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3068", "html_url": "https://github.com/ollama/ollama/pull/3068", "diff_url": "https://github.com/ollama/ollama/pull/3068.diff", "patch_url": "https://github.com/ollama/ollama/pull/3068.patch", "merged_at": "2024-03-14T18:55:19" }
When you feed input to the CLI via a pipe, it no longer reports a warning. Before: ``` > echo "what is the captial of australia" | .\ollama.exe run phi failed to get console mode for stdin: The handle is invalid. The capital of Australia is Canberra. It's located in the Australian Capital Territory, about 120 k...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3068/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5146
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5146/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5146/comments
https://api.github.com/repos/ollama/ollama/issues/5146/events
https://github.com/ollama/ollama/pull/5146
2,362,711,736
PR_kwDOJ0Z1Ps5y-1ig
5,146
Put back temporary intel GPU env var
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-06-19T15:58:50
2024-06-19T16:12:48
2024-06-19T16:12:45
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5146", "html_url": "https://github.com/ollama/ollama/pull/5146", "diff_url": "https://github.com/ollama/ollama/pull/5146.diff", "patch_url": "https://github.com/ollama/ollama/pull/5146.patch", "merged_at": "2024-06-19T16:12:45" }
Until we merge #4876, let's keep the opt-in env var to avoid confusion in the binary releases if we discover an Intel GPU but don't actually have the runner built in. This reverts commit 755b4e4fc291366595ed7bfb37c2a91ff5834df8.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5146/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4219
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4219/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4219/comments
https://api.github.com/repos/ollama/ollama/issues/4219/events
https://github.com/ollama/ollama/issues/4219
2,282,184,189
I_kwDOJ0Z1Ps6IB139
4,219
Model organization - Categorize models on ollama.com
{ "login": "syssbs", "id": 129733386, "node_id": "U_kgDOB7uTCg", "avatar_url": "https://avatars.githubusercontent.com/u/129733386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syssbs", "html_url": "https://github.com/syssbs", "followers_url": "https://api.github.com/users/syssbs/follower...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6573197867, "node_id": ...
open
false
null
[]
null
3
2024-05-07T03:16:01
2024-07-25T18:15:27
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please categorize and organize the models on the official website. There are already too many models, and there will be more and more over time; the model list on the site feels like it will become very messy.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4219/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8307
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8307/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8307/comments
https://api.github.com/repos/ollama/ollama/issues/8307/events
https://github.com/ollama/ollama/pull/8307
2,769,004,347
PR_kwDOJ0Z1Ps6GvOdm
8,307
fix: correct endpoint URL to avoid 404 error
{ "login": "ubaldus", "id": 660076, "node_id": "MDQ6VXNlcjY2MDA3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/660076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ubaldus", "html_url": "https://github.com/ubaldus", "followers_url": "https://api.github.com/users/ubaldus/fo...
[]
closed
false
null
[]
null
0
2025-01-04T21:14:22
2025-01-04T23:45:16
2025-01-04T23:45:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8307", "html_url": "https://github.com/ollama/ollama/pull/8307", "diff_url": "https://github.com/ollama/ollama/pull/8307.diff", "patch_url": "https://github.com/ollama/ollama/pull/8307.patch", "merged_at": "2025-01-04T23:45:16" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8307/timeline
null
null
true