url
stringlengths
51
54
repository_url
stringclasses
1 value
labels_url
stringlengths
65
68
comments_url
stringlengths
60
63
events_url
stringlengths
58
61
html_url
stringlengths
39
44
id
int64
1.78B
2.82B
node_id
stringlengths
18
19
number
int64
1
8.69k
title
stringlengths
1
382
user
dict
labels
listlengths
0
5
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
2
milestone
null
comments
int64
0
323
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
author_association
stringclasses
4 values
sub_issues_summary
dict
active_lock_reason
null
draft
bool
2 classes
pull_request
dict
body
stringlengths
2
118k
closed_by
dict
reactions
dict
timeline_url
stringlengths
60
63
performed_via_github_app
null
state_reason
stringclasses
4 values
is_pull_request
bool
2 classes
https://api.github.com/repos/ollama/ollama/issues/4091
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4091/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4091/comments
https://api.github.com/repos/ollama/ollama/issues/4091/events
https://github.com/ollama/ollama/issues/4091
2,274,504,005
I_kwDOJ0Z1Ps6Hki1F
4,091
Unable to access ollama from other machine
{ "login": "rebas3", "id": 168698930, "node_id": "U_kgDOCg4kMg", "avatar_url": "https://avatars.githubusercontent.com/u/168698930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rebas3", "html_url": "https://github.com/rebas3", "followers_url": "https://api.github.com/users/rebas3/follower...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-05-02T03:06:32
2024-05-02T16:40:04
2024-05-02T16:40:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi Im new to AI or develop in general and I had a question: When I try to access on the same machine then it work normally but when I Try to connect from other machine it doesn't allow me to do that how can I allow it to connect? ### OS macOS ### GPU Apple ### CPU Apple ### Ollama versio...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4091/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6391
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6391/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6391/comments
https://api.github.com/repos/ollama/ollama/issues/6391/events
https://github.com/ollama/ollama/pull/6391
2,470,366,839
PR_kwDOJ0Z1Ps54lYil
6,391
doc: fixed spelling error
{ "login": "Carter907", "id": 102479896, "node_id": "U_kgDOBhu4GA", "avatar_url": "https://avatars.githubusercontent.com/u/102479896?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Carter907", "html_url": "https://github.com/Carter907", "followers_url": "https://api.github.com/users/Carter...
[]
closed
false
null
[]
null
0
2024-08-16T14:16:08
2024-09-04T13:42:33
2024-09-04T13:42:33
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6391", "html_url": "https://github.com/ollama/ollama/pull/6391", "diff_url": "https://github.com/ollama/ollama/pull/6391.diff", "patch_url": "https://github.com/ollama/ollama/pull/6391.patch", "merged_at": "2024-09-04T13:42:33" }
Changed "dorrect" to "correct".
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6391/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5573
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5573/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5573/comments
https://api.github.com/repos/ollama/ollama/issues/5573/events
https://github.com/ollama/ollama/issues/5573
2,398,330,253
I_kwDOJ0Z1Ps6O852N
5,573
ggml_cuda_init: failed to initialize CUDA: system has unsupported display driver / cuda driver combination
{ "login": "skinnynpale", "id": 52371356, "node_id": "MDQ6VXNlcjUyMzcxMzU2", "avatar_url": "https://avatars.githubusercontent.com/u/52371356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skinnynpale", "html_url": "https://github.com/skinnynpale", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
null
[]
null
17
2024-07-09T14:06:38
2024-07-11T03:01:53
2024-07-11T03:01:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ```bash time=2024-07-09T13:56:46.484Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-72b1bc75-c26b-1c04-f9cd-ff1942a73215 parallel=4...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5573/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6206
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6206/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6206/comments
https://api.github.com/repos/ollama/ollama/issues/6206/events
https://github.com/ollama/ollama/issues/6206
2,451,334,507
I_kwDOJ0Z1Ps6SHGVr
6,206
[question] How to default to CPU?
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivic...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
11
2024-08-06T17:05:23
2024-08-06T20:13:47
2024-08-06T17:25:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I created the FreeBSD port for ollama. However, GPU isn't available and all 'ollama run' commands fail with the ollama server printing this: ``` time=2024-08-06T09:57:27.238-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.06509013 model=/home/...
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivic...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6206/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1342
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1342/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1342/comments
https://api.github.com/repos/ollama/ollama/issues/1342/events
https://github.com/ollama/ollama/issues/1342
2,020,344,009
I_kwDOJ0Z1Ps54bADJ
1,342
German umlaut missing with deepseek-llm
{ "login": "p3d-dev", "id": 105526632, "node_id": "U_kgDOBko1aA", "avatar_url": "https://avatars.githubusercontent.com/u/105526632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p3d-dev", "html_url": "https://github.com/p3d-dev", "followers_url": "https://api.github.com/users/p3d-dev/foll...
[]
closed
false
null
[]
null
2
2023-12-01T08:13:32
2023-12-01T17:30:48
2023-12-01T17:30:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Here are the responses for few models and deepseek-llm cannot output "ö" and "ü": ``` %ollama run orca2:13b "Please repeat: wäre, Tür, höchstens" wäre, Tür, höchstens Translation: would be, door, at most %ollama run codellama:34b "Please repeat: wäre, Tür, höchstens" Wäre, Tür, höchstens. %ollama run d...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1342/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1342/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1524
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1524/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1524/comments
https://api.github.com/repos/ollama/ollama/issues/1524/events
https://github.com/ollama/ollama/pull/1524
2,042,102,441
PR_kwDOJ0Z1Ps5iBhyg
1,524
restore model load duration on generate response
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-12-14T17:01:29
2023-12-14T17:15:51
2023-12-14T17:15:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1524", "html_url": "https://github.com/ollama/ollama/pull/1524", "diff_url": "https://github.com/ollama/ollama/pull/1524.diff", "patch_url": "https://github.com/ollama/ollama/pull/1524.patch", "merged_at": "2023-12-14T17:15:50" }
- set model load duration on generate and chat done response - calculate createAt time when response created resolves #1523
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1524/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1735/comments
https://api.github.com/repos/ollama/ollama/issues/1735/events
https://github.com/ollama/ollama/issues/1735
2,058,938,226
I_kwDOJ0Z1Ps56uOdy
1,735
Server doesn't listen on all available interfaces
{ "login": "zine999", "id": 155118056, "node_id": "U_kgDOCT7p6A", "avatar_url": "https://avatars.githubusercontent.com/u/155118056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zine999", "html_url": "https://github.com/zine999", "followers_url": "https://api.github.com/users/zine999/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2023-12-28T23:28:43
2024-01-04T02:23:20
2024-01-04T02:23:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think this might be a problem recently introduced in v0.1.17 but I'm not 100% sure. `ollama serve` doesn't listen on `0.0.0.0` and therefore doesn't make itself available on all interfaces. This causes problems when trying to connect to it via an interface other than `localhost`. A (hopefully temporary) workaro...
{ "login": "zine999", "id": 155118056, "node_id": "U_kgDOCT7p6A", "avatar_url": "https://avatars.githubusercontent.com/u/155118056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zine999", "html_url": "https://github.com/zine999", "followers_url": "https://api.github.com/users/zine999/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1735/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1735/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7613
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7613/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7613/comments
https://api.github.com/repos/ollama/ollama/issues/7613/events
https://github.com/ollama/ollama/pull/7613
2,648,313,555
PR_kwDOJ0Z1Ps6BduJt
7,613
Update type for ToolFunction to support new json serialization
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
2
2024-11-11T06:34:22
2024-11-13T17:25:57
2024-11-13T17:25:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7613", "html_url": "https://github.com/ollama/ollama/pull/7613", "diff_url": "https://github.com/ollama/ollama/pull/7613.diff", "patch_url": "https://github.com/ollama/ollama/pull/7613.patch", "merged_at": null }
Need to update the `ToolFunction` type to support the tool passing from client libraries
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7613/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1545
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1545/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1545/comments
https://api.github.com/repos/ollama/ollama/issues/1545/events
https://github.com/ollama/ollama/issues/1545
2,044,031,840
I_kwDOJ0Z1Ps551XNg
1,545
Error Ollama + Langchain + Google Colab + ngrok
{ "login": "SerhiyProtsenko", "id": 33152729, "node_id": "MDQ6VXNlcjMzMTUyNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/33152729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SerhiyProtsenko", "html_url": "https://github.com/SerhiyProtsenko", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5895046125, "node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q...
closed
false
null
[]
null
3
2023-12-15T16:40:27
2024-03-11T18:47:23
2024-03-11T18:47:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I use the combination: Ollama + Langchain + Google Colab + ngrok. I get an error (The models are downloaded, I can see them in Ollama list) ``` llm = Ollama( model="run deepseek-coder:6.7b", base_url="https://e12b-35-231-226-171.ngrok.io/") responce = llm.predict('What do you know about Falco?') Ou...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1545/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2650
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2650/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2650/comments
https://api.github.com/repos/ollama/ollama/issues/2650/events
https://github.com/ollama/ollama/issues/2650
2,147,556,756
I_kwDOJ0Z1Ps6AAR2U
2,650
Gemma 7B produces gibberish output
{ "login": "aniketmaurya", "id": 21018714, "node_id": "MDQ6VXNlcjIxMDE4NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/21018714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aniketmaurya", "html_url": "https://github.com/aniketmaurya", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
9
2024-02-21T19:48:59
2024-04-17T11:11:24
2024-02-23T01:26:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
* Gemma 7B produces gibberish output * 2B seem to be working well though ![image](https://github.com/ollama/ollama/assets/21018714/99de1a65-8321-469f-914f-6ecb37eebf83)
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2650/reactions", "total_count": 28, "+1": 28, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2650/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7367
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7367/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7367/comments
https://api.github.com/repos/ollama/ollama/issues/7367/events
https://github.com/ollama/ollama/pull/7367
2,615,258,652
PR_kwDOJ0Z1Ps5_9ksb
7,367
CI testing
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-10-25T22:26:53
2024-10-25T23:50:58
2024-10-25T23:50:58
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7367", "html_url": "https://github.com/ollama/ollama/pull/7367", "diff_url": "https://github.com/ollama/ollama/pull/7367.diff", "patch_url": "https://github.com/ollama/ollama/pull/7367.patch", "merged_at": null }
Nothing to see here....
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7367/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4165
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4165/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4165/comments
https://api.github.com/repos/ollama/ollama/issues/4165/events
https://github.com/ollama/ollama/issues/4165
2,279,378,308
I_kwDOJ0Z1Ps6H3I2E
4,165
`OLLAMA_NUM_PARALLEL` and multi-modal models lead to `failed processing images` error
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-05T07:49:42
2024-05-05T07:49:43
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When processing multiple requests using multi-modal models such as `llava` or `moondream` generation freezes and an error is printed in the server logs: `failed processing images` ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4165/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 7 }
https://api.github.com/repos/ollama/ollama/issues/4165/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3919
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3919/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3919/comments
https://api.github.com/repos/ollama/ollama/issues/3919/events
https://github.com/ollama/ollama/issues/3919
2,264,327,111
I_kwDOJ0Z1Ps6G9uPH
3,919
trying to use llama3 with ollama embeddings getting error model 'llama2' not found
{ "login": "SatouKuzuma1", "id": 67365797, "node_id": "MDQ6VXNlcjY3MzY1Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/67365797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SatouKuzuma1", "html_url": "https://github.com/SatouKuzuma1", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-25T19:18:14
2024-05-11T19:21:51
2024-04-30T06:01:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? im using this code ``` from langchain_community.llms import Ollama from langchain_community.embeddings import OllamaEmbeddings from langchain_community.document_loaders import PyPDFLoader from langchain_community.vectorstores import Chroma MODEL = 'llama3' model = Ollama(model=MODE...
{ "login": "SatouKuzuma1", "id": 67365797, "node_id": "MDQ6VXNlcjY3MzY1Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/67365797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SatouKuzuma1", "html_url": "https://github.com/SatouKuzuma1", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3919/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1585
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1585/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1585/comments
https://api.github.com/repos/ollama/ollama/issues/1585/events
https://github.com/ollama/ollama/issues/1585
2,047,382,036
I_kwDOJ0Z1Ps56CJIU
1,585
CUDA error 2 [...] out of memory when using mixtral:8x7b-instruct-v0.1-q3_K_M but not on bigger models
{ "login": "AlessandroSpallina", "id": 10786872, "node_id": "MDQ6VXNlcjEwNzg2ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/10786872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlessandroSpallina", "html_url": "https://github.com/AlessandroSpallina", "followers_url": "ht...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA...
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
11
2023-12-18T20:16:26
2024-05-10T00:25:43
2024-05-10T00:25:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm opening this issue because I noticed a weird behavior running ollama on docker with GPU support and trying different mixtral 8x7B sizes: I can easily do inference on my GPU with models like mixtral:8x7b-instruct-v0.1-q4_K_M but I see a memory failure when running smaller models like mixtral:8x7b-instruct-v0.1-q...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1585/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8279
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8279/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8279/comments
https://api.github.com/repos/ollama/ollama/issues/8279/events
https://github.com/ollama/ollama/pull/8279
2,764,987,796
PR_kwDOJ0Z1Ps6GhtLi
8,279
Improved offline installation experience (install.sh)
{ "login": "PatZer0", "id": 96248319, "node_id": "U_kgDOBbyh_w", "avatar_url": "https://avatars.githubusercontent.com/u/96248319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PatZer0", "html_url": "https://github.com/PatZer0", "followers_url": "https://api.github.com/users/PatZer0/follow...
[]
open
false
null
[]
null
0
2025-01-01T10:36:41
2025-01-01T10:38:22
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8279", "html_url": "https://github.com/ollama/ollama/pull/8279", "diff_url": "https://github.com/ollama/ollama/pull/8279.diff", "patch_url": "https://github.com/ollama/ollama/pull/8279.patch", "merged_at": null }
As the current install.sh script requires good internet connection, this improvement enables user to pre-download required files in other ways to install offline with the script. Add 2 new functions: `note`: Display a note text in yellow. `download_and_extract`: Handle all the download operations. Shows the filena...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8279/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3984
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3984/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3984/comments
https://api.github.com/repos/ollama/ollama/issues/3984/events
https://github.com/ollama/ollama/pull/3984
2,267,257,865
PR_kwDOJ0Z1Ps5t64Hm
3,984
types/model: relax name length constraint from 2 to 1
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[]
closed
false
null
[]
null
0
2024-04-27T23:45:51
2024-04-28T00:58:42
2024-04-28T00:58:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3984", "html_url": "https://github.com/ollama/ollama/pull/3984", "diff_url": "https://github.com/ollama/ollama/pull/3984.diff", "patch_url": "https://github.com/ollama/ollama/pull/3984.patch", "merged_at": "2024-04-28T00:58:41" }
null
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3984/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2856
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2856/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2856/comments
https://api.github.com/repos/ollama/ollama/issues/2856/events
https://github.com/ollama/ollama/issues/2856
2,162,689,469
I_kwDOJ0Z1Ps6A6AW9
2,856
I hope Ollama can add an embeddings interface compatible with OpenAI API
{ "login": "zhijianguo", "id": 3388592, "node_id": "MDQ6VXNlcjMzODg1OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3388592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhijianguo", "html_url": "https://github.com/zhijianguo", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-03-01T06:18:06
2024-03-12T00:17:50
2024-03-12T00:17:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I hope Ollama can add an embeddings interface compatible with OpenAI API
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2856/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2856/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5316
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5316/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5316/comments
https://api.github.com/repos/ollama/ollama/issues/5316/events
https://github.com/ollama/ollama/pull/5316
2,377,052,279
PR_kwDOJ0Z1Ps5zt1ac
5,316
llm: architecture patch
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-06-27T04:10:16
2024-06-27T04:38:15
2024-06-27T04:38:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5316", "html_url": "https://github.com/ollama/ollama/pull/5316", "diff_url": "https://github.com/ollama/ollama/pull/5316.diff", "patch_url": "https://github.com/ollama/ollama/pull/5316.patch", "merged_at": "2024-06-27T04:38:13" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5316/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2251
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2251/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2251/comments
https://api.github.com/repos/ollama/ollama/issues/2251/events
https://github.com/ollama/ollama/pull/2251
2,104,780,112
PR_kwDOJ0Z1Ps5lSZ3d
2,251
update submodule to `1cfb5372cf5707c8ec6dde7c874f4a44a6c4c915`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-29T06:53:55
2024-02-07T20:08:13
2024-02-07T20:08:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2251", "html_url": "https://github.com/ollama/ollama/pull/2251", "diff_url": "https://github.com/ollama/ollama/pull/2251.diff", "patch_url": "https://github.com/ollama/ollama/pull/2251.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2251/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6632
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6632/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6632/comments
https://api.github.com/repos/ollama/ollama/issues/6632/events
https://github.com/ollama/ollama/issues/6632
2,505,083,951
I_kwDOJ0Z1Ps6VUIwv
6,632
New Command-r models output nonsense
{ "login": "xmaayy", "id": 21166352, "node_id": "MDQ6VXNlcjIxMTY2MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/21166352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmaayy", "html_url": "https://github.com/xmaayy", "followers_url": "https://api.github.com/users/xmaayy/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A...
closed
false
null
[]
null
7
2024-09-04T11:33:14
2024-09-04T18:04:18
2024-09-04T14:02:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The new 4-bit quants of command-r (I dont have the VRAM for higher quants) output nonsense. ``` bash ≻ ollama run command-r pulling manifest pulling 8e0609b8f0fe... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 18 GB pulling ...
{ "login": "xmaayy", "id": 21166352, "node_id": "MDQ6VXNlcjIxMTY2MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/21166352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmaayy", "html_url": "https://github.com/xmaayy", "followers_url": "https://api.github.com/users/xmaayy/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6632/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4750
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4750/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4750/comments
https://api.github.com/repos/ollama/ollama/issues/4750/events
https://github.com/ollama/ollama/issues/4750
2,327,518,807
I_kwDOJ0Z1Ps6Kux5X
4,750
Garbage output running llama3 GGUF model
{ "login": "DiptenduIDEAS", "id": 156412399, "node_id": "U_kgDOCVKp7w", "avatar_url": "https://avatars.githubusercontent.com/u/156412399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiptenduIDEAS", "html_url": "https://github.com/DiptenduIDEAS", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-31T10:38:49
2024-07-09T07:04:34
2024-07-05T04:04:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I downloaded https://huggingface.co/QuantFactory/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B.Q2_K.gguf Created a Modelfile using `ollama create example -f Modelfile` and ran `ollama run example` On asking the _question why is the sky blue?_ on the >>> prompt I am getting garbage (a seri...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4750/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4339
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4339/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4339/comments
https://api.github.com/repos/ollama/ollama/issues/4339/events
https://github.com/ollama/ollama/pull/4339
2,290,628,111
PR_kwDOJ0Z1Ps5vJXEh
4,339
chore: update dependencies across the board
{ "login": "appleboy", "id": 21979, "node_id": "MDQ6VXNlcjIxOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/21979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/appleboy", "html_url": "https://github.com/appleboy", "followers_url": "https://api.github.com/users/appleboy/foll...
[]
closed
false
null
[]
null
1
2024-05-11T03:19:53
2024-12-29T19:24:24
2024-12-29T19:24:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4339", "html_url": "https://github.com/ollama/ollama/pull/4339", "diff_url": "https://github.com/ollama/ollama/pull/4339.diff", "patch_url": "https://github.com/ollama/ollama/pull/4339.patch", "merged_at": null }
- Update `github.com/gin-gonic/gin` from `v1.9.1` to `v1.10.0` - Update `github.com/stretchr/testify` from `v1.8.4` to `v1.9.0` - Add `github.com/bytedance/sonic/loader` and `github.com/cloudwego/*` as new indirect dependencies - Update `github.com/bytedance/sonic` from `v1.9.1` to `v1.11.6` and remove old indirect ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4339/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2744
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2744/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2744/comments
https://api.github.com/repos/ollama/ollama/issues/2744/events
https://github.com/ollama/ollama/pull/2744
2,152,812,372
PR_kwDOJ0Z1Ps5n1_lQ
2,744
Update types.go
{ "login": "eltociear", "id": 22633385, "node_id": "MDQ6VXNlcjIyNjMzMzg1", "avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltociear", "html_url": "https://github.com/eltociear", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-02-25T15:21:39
2024-02-25T18:41:26
2024-02-25T18:41:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2744", "html_url": "https://github.com/ollama/ollama/pull/2744", "diff_url": "https://github.com/ollama/ollama/pull/2744.diff", "patch_url": "https://github.com/ollama/ollama/pull/2744.patch", "merged_at": "2024-02-25T18:41:25" }
specfied -> specified
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2744/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/260
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/260/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/260/comments
https://api.github.com/repos/ollama/ollama/issues/260/events
https://github.com/ollama/ollama/pull/260
1,833,816,760
PR_kwDOJ0Z1Ps5XC0de
260
override ggml-metal if the file is different
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-02T19:51:01
2023-08-02T20:01:47
2023-08-02T20:01:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/260", "html_url": "https://github.com/ollama/ollama/pull/260", "diff_url": "https://github.com/ollama/ollama/pull/260.diff", "patch_url": "https://github.com/ollama/ollama/pull/260.patch", "merged_at": "2023-08-02T20:01:46" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/260/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1286
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1286/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1286/comments
https://api.github.com/repos/ollama/ollama/issues/1286/events
https://github.com/ollama/ollama/issues/1286
2,011,934,231
I_kwDOJ0Z1Ps53664X
1,286
Change enviroment-variables as settings to command parameters
{ "login": "Talleyrand-34", "id": 119809076, "node_id": "U_kgDOByQkNA", "avatar_url": "https://avatars.githubusercontent.com/u/119809076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Talleyrand-34", "html_url": "https://github.com/Talleyrand-34", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2023-11-27T10:12:00
2024-02-20T01:18:58
2024-02-20T01:18:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
## Changethe method of configure settings Instead of enviroment variables uses internal settings, at least as user interface. ### Example Instead of: > OLLAMA_MODELS=/path/to/file; ollama run model Run: >ollama conf path_to_models /path/to/file >ollama run model Or: >ollama run model -f /path/to/file
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1286/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2553
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2553/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2553/comments
https://api.github.com/repos/ollama/ollama/issues/2553/events
https://github.com/ollama/ollama/pull/2553
2,139,667,115
PR_kwDOJ0Z1Ps5nJQKS
2,553
Harden AMD driver lookup logic
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-02-17T00:22:52
2024-02-17T01:23:15
2024-02-17T01:23:12
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2553", "html_url": "https://github.com/ollama/ollama/pull/2553", "diff_url": "https://github.com/ollama/ollama/pull/2553.diff", "patch_url": "https://github.com/ollama/ollama/pull/2553.patch", "merged_at": "2024-02-17T01:23:12" }
It looks like the version file doesn't exist on older(?) drivers Fixes #2502
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2553/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2553/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8519
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8519/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8519/comments
https://api.github.com/repos/ollama/ollama/issues/8519/events
https://github.com/ollama/ollama/issues/8519
2,802,085,802
I_kwDOJ0Z1Ps6nBG-q
8,519
CLI: Managing models like (Docker) containers via ID
{ "login": "wijjj", "id": 726919, "node_id": "MDQ6VXNlcjcyNjkxOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/726919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wijjj", "html_url": "https://github.com/wijjj", "followers_url": "https://api.github.com/users/wijjj/followers"...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-21T14:58:12
2025-01-21T14:58:30
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Could we please have `ollama ps` `ollama stop <ID>` instead of `ollama stop this-is-a-llm-with-a-pretty-long-name:1337b_instruzioni_v3.33_q5_K_S` or at least tab autocompletion? Sorry in advance: Maybe this is already a duplicate (did look for it!), as it is not really a highly inventive idea.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8519/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8463
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8463/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8463/comments
https://api.github.com/repos/ollama/ollama/issues/8463/events
https://github.com/ollama/ollama/issues/8463
2,793,927,929
I_kwDOJ0Z1Ps6mh_T5
8,463
AMD Radeon RX6700XT unable to take input
{ "login": "bitfl0wer", "id": 39242991, "node_id": "MDQ6VXNlcjM5MjQyOTkx", "avatar_url": "https://avatars.githubusercontent.com/u/39242991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bitfl0wer", "html_url": "https://github.com/bitfl0wer", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2025-01-16T22:41:02
2025-01-18T10:09:36
2025-01-18T10:09:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When trying to use ollamas APIs, the llama server crashes when loading. ## Obligatory System Information CPU: AMD Ryzen 9 7900 RAM: 64GB DDR5 OS: Fedora Linux 41 (Workstation Edition) x86_64 Ollama host: Docker ### docker-compose.yml ```yml services: webui: image: ghcr.io/open-webui/op...
{ "login": "bitfl0wer", "id": 39242991, "node_id": "MDQ6VXNlcjM5MjQyOTkx", "avatar_url": "https://avatars.githubusercontent.com/u/39242991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bitfl0wer", "html_url": "https://github.com/bitfl0wer", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8463/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3927
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3927/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3927/comments
https://api.github.com/repos/ollama/ollama/issues/3927/events
https://github.com/ollama/ollama/issues/3927
2,264,807,387
I_kwDOJ0Z1Ps6G_jfb
3,927
function calling with autogen does not work
{ "login": "patrickwasp", "id": 70671760, "node_id": "MDQ6VXNlcjcwNjcxNzYw", "avatar_url": "https://avatars.githubusercontent.com/u/70671760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickwasp", "html_url": "https://github.com/patrickwasp", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-26T02:10:55
2024-07-30T02:50:42
2024-07-26T00:50:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ```python #!/usr/local/bin/python3.12 from typing import Literal from pydantic import BaseModel, Field from typing_extensions import Annotated import autogen from autogen.cache import Cache # MODEL_NAME = "gpt-3.5-turbo" # API_URL = "https://api.openai.com/v1/" # API_KEY = "sk-X...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3927/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7527
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7527/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7527/comments
https://api.github.com/repos/ollama/ollama/issues/7527/events
https://github.com/ollama/ollama/pull/7527
2,638,418,645
PR_kwDOJ0Z1Ps6BEmu1
7,527
Fix minor inconsistency
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/...
[]
closed
false
null
[]
null
1
2024-11-06T15:24:36
2024-11-08T17:36:17
2024-11-08T17:36:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7527", "html_url": "https://github.com/ollama/ollama/pull/7527", "diff_url": "https://github.com/ollama/ollama/pull/7527.diff", "patch_url": "https://github.com/ollama/ollama/pull/7527.patch", "merged_at": "2024-11-08T17:36:17" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7527/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4300
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4300/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4300/comments
https://api.github.com/repos/ollama/ollama/issues/4300/events
https://github.com/ollama/ollama/pull/4300
2,288,528,117
PR_kwDOJ0Z1Ps5vCP33
4,300
Add LlamaScript to Community Projects
{ "login": "zanderlewis", "id": 158775116, "node_id": "U_kgDOCXa3TA", "avatar_url": "https://avatars.githubusercontent.com/u/158775116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanderlewis", "html_url": "https://github.com/zanderlewis", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-05-09T21:58:54
2024-05-09T22:30:49
2024-05-09T22:30:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4300", "html_url": "https://github.com/ollama/ollama/pull/4300", "diff_url": "https://github.com/ollama/ollama/pull/4300.diff", "patch_url": "https://github.com/ollama/ollama/pull/4300.patch", "merged_at": "2024-05-09T22:30:49" }
Pull Request for #4061
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4300/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1557
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1557/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1557/comments
https://api.github.com/repos/ollama/ollama/issues/1557/events
https://github.com/ollama/ollama/issues/1557
2,044,513,369
I_kwDOJ0Z1Ps553MxZ
1,557
Increasing slow response - CPU only on Linux Azure
{ "login": "benmarinic", "id": 1210218, "node_id": "MDQ6VXNlcjEyMTAyMTg=", "avatar_url": "https://avatars.githubusercontent.com/u/1210218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benmarinic", "html_url": "https://github.com/benmarinic", "followers_url": "https://api.github.com/users...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
11
2023-12-16T00:14:30
2024-04-15T15:51:30
2024-03-13T00:22:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using the following VM in azure: Standard D8s v3 vCPUs 8, RAM 32 GiB Have tried Mistral 7b and Orca-mini. I've also tried 4 bit versions. Ollama is responding increasingly slowly. After the 4th simple query ("hi" or "what's the capital of ...") I'm waiting in excess of 60 seconds for it to begin to respond...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1557/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1557/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4656
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4656/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4656/comments
https://api.github.com/repos/ollama/ollama/issues/4656/events
https://github.com/ollama/ollama/pull/4656
2,318,262,026
PR_kwDOJ0Z1Ps5wnbT7
4,656
Add `OLLAMA_HOME` for setting `~/.ollama`
{ "login": "maaslalani", "id": 42545625, "node_id": "MDQ6VXNlcjQyNTQ1NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/42545625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maaslalani", "html_url": "https://github.com/maaslalani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
7
2024-05-27T05:26:18
2024-08-05T18:51:52
2024-08-05T18:51:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4656", "html_url": "https://github.com/ollama/ollama/pull/4656", "diff_url": "https://github.com/ollama/ollama/pull/4656.diff", "patch_url": "https://github.com/ollama/ollama/pull/4656.patch", "merged_at": null }
Fixes https://github.com/ollama/ollama/issues/228 This PR adds the optional configuration for `OLLAMA_HOME` to prevent cluttering the user's home directory. `OLLAMA_HOME` is optional and uses the current behavior if not provided. If `OLLAMA_MODELS` is not explicitly, the default value is `~/$OLLAMA_HOME/models`.
{ "login": "maaslalani", "id": 42545625, "node_id": "MDQ6VXNlcjQyNTQ1NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/42545625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maaslalani", "html_url": "https://github.com/maaslalani", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4656/reactions", "total_count": 4, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4656/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5054
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5054/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5054/comments
https://api.github.com/repos/ollama/ollama/issues/5054/events
https://github.com/ollama/ollama/issues/5054
2,354,412,931
I_kwDOJ0Z1Ps6MVX2D
5,054
Windows - `go generate` failing on build_cpu
{ "login": "JerrettDavis", "id": 2610199, "node_id": "MDQ6VXNlcjI2MTAxOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JerrettDavis", "html_url": "https://github.com/JerrettDavis", "followers_url": "https://api.github.com...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
6
2024-06-15T02:17:50
2024-11-04T19:15:44
2024-11-04T19:15:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I've been trying to get a Windows dev environment up and running following the [development](https://github.com/ollama/ollama/blob/main/docs/development.md) guide. I've attempted installing both MinGW-w64 and MSYS2, along with the latest Visual Studio build tools, but the existing Windows build ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5054/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7090
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7090/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7090/comments
https://api.github.com/repos/ollama/ollama/issues/7090/events
https://github.com/ollama/ollama/issues/7090
2,563,840,018
I_kwDOJ0Z1Ps6Y0RgS
7,090
ollama_models path not working any longer
{ "login": "Molnfront", "id": 935328, "node_id": "MDQ6VXNlcjkzNTMyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/935328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Molnfront", "html_url": "https://github.com/Molnfront", "followers_url": "https://api.github.com/users/Moln...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
4
2024-10-03T11:42:01
2024-12-02T23:04:45
2024-12-02T23:04:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Last week I added ollama_models path to my env file in my Mac. Olama picked up the settings and saved the models to my path (external SSD). Now yesterday when I picked gemma 2 and got it downloaded it ignored the path and downloaded it to .ollama. ### OS macOS ### GPU Apple ### CPU Apple...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7090/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4270
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4270/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4270/comments
https://api.github.com/repos/ollama/ollama/issues/4270/events
https://github.com/ollama/ollama/issues/4270
2,286,706,401
I_kwDOJ0Z1Ps6ITF7h
4,270
windows ollama 0.1.34 can not use GPU,with nvidia RTX 4060
{ "login": "zhafree", "id": 25758100, "node_id": "MDQ6VXNlcjI1NzU4MTAw", "avatar_url": "https://avatars.githubusercontent.com/u/25758100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhafree", "html_url": "https://github.com/zhafree", "followers_url": "https://api.github.com/users/zhafre...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-05-09T00:58:28
2024-06-02T00:16:21
2024-06-02T00:16:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` C:\Users\zh_af>nvidia-smi Thu May 9 08:53:43 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 537.70 Driver Version: 537.70 CUDA Version: 12.2 | |-----------------------------------------+-----------...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4270/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1461
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1461/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1461/comments
https://api.github.com/repos/ollama/ollama/issues/1461/events
https://github.com/ollama/ollama/issues/1461
2,034,926,824
I_kwDOJ0Z1Ps55SoTo
1,461
Mistral not providing license information
{ "login": "neural-loop", "id": 654993, "node_id": "MDQ6VXNlcjY1NDk5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/654993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neural-loop", "html_url": "https://github.com/neural-loop", "followers_url": "https://api.github.com/user...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2023-12-11T06:30:58
2024-01-25T22:56:35
2024-01-25T22:32:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![image](https://github.com/jmorganca/ollama/assets/654993/dfb4a673-8bff-4c40-95be-077014e6a55f) It is maybe because they don't include a license.txt in their repository. However, they do specify that it is Apache 2.0 ![image](https://github.com/jmorganca/ollama/assets/654993/0855ffb3-ad27-46fb-b326-0086243b2f39)...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1461/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4039
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4039/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4039/comments
https://api.github.com/repos/ollama/ollama/issues/4039/events
https://github.com/ollama/ollama/pull/4039
2,270,522,545
PR_kwDOJ0Z1Ps5uF_a6
4,039
types/model: reduce Name.Filepath allocs from 5 to 2
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[]
closed
false
null
[]
null
0
2024-04-30T05:14:34
2024-04-30T18:09:20
2024-04-30T18:09:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4039", "html_url": "https://github.com/ollama/ollama/pull/4039", "diff_url": "https://github.com/ollama/ollama/pull/4039.diff", "patch_url": "https://github.com/ollama/ollama/pull/4039.patch", "merged_at": "2024-04-30T18:09:19" }
null
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4039/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/560
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/560/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/560/comments
https://api.github.com/repos/ollama/ollama/issues/560/events
https://github.com/ollama/ollama/issues/560
1,905,749,865
I_kwDOJ0Z1Ps5xl29p
560
Is IPv6 supported?
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
3
2023-09-20T21:33:12
2023-09-21T16:28:17
2023-09-21T02:54:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
With the Ollama server running: ```bash > curl -X POST --header 'Content-Type: application/json' "http://[::1]:11434/api/generate" -d '{ "model": "llama2:13b", "prompt": "Your first prompt goes here" }' curl: (7) Failed to connect to ::1 port 11434 after 5 ms: Couldn't connect to server ``` I am wonderi...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/560/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1361
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1361/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1361/comments
https://api.github.com/repos/ollama/ollama/issues/1361/events
https://github.com/ollama/ollama/issues/1361
2,022,410,814
I_kwDOJ0Z1Ps54i4o-
1,361
Add support for gpt4-x-alpaca
{ "login": "priamai", "id": 57333254, "node_id": "MDQ6VXNlcjU3MzMzMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/57333254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priamai", "html_url": "https://github.com/priamai", "followers_url": "https://api.github.com/users/priama...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2023-12-03T07:50:31
2024-03-12T06:35:10
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi there, this is an amazing model: https://huggingface.co/chavinlo/gpt4-x-alpaca Cheers.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1361/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6714
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6714/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6714/comments
https://api.github.com/repos/ollama/ollama/issues/6714/events
https://github.com/ollama/ollama/pull/6714
2,514,941,013
PR_kwDOJ0Z1Ps5653f9
6,714
catch when model vocab size is set correctly
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-09-09T21:19:20
2024-09-10T00:18:57
2024-09-10T00:18:55
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6714", "html_url": "https://github.com/ollama/ollama/pull/6714", "diff_url": "https://github.com/ollama/ollama/pull/6714.diff", "patch_url": "https://github.com/ollama/ollama/pull/6714.patch", "merged_at": "2024-09-10T00:18:55" }
This check catches if there are too many tokens in the tokenizer vs. the expected number of tokens specified in the `vocab_size` field of `config.json`. This typically happens if the `added_tokens` array in `tokenizer.json` ends up has too many tokens. Right now this results in the back end barfing during inference ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6714/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/778
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/778/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/778/comments
https://api.github.com/repos/ollama/ollama/issues/778/events
https://github.com/ollama/ollama/pull/778
1,942,147,639
PR_kwDOJ0Z1Ps5cv_e7
778
show request to server rather than local check
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-10-13T15:14:41
2023-10-16T21:27:26
2023-10-16T21:27:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/778", "html_url": "https://github.com/ollama/ollama/pull/778", "diff_url": "https://github.com/ollama/ollama/pull/778.diff", "patch_url": "https://github.com/ollama/ollama/pull/778.patch", "merged_at": "2023-10-16T21:27:25" }
The show command should send a request to the server, rather than making a direct call to the function locally. resolces #776
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/778/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1604
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1604/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1604/comments
https://api.github.com/repos/ollama/ollama/issues/1604/events
https://github.com/ollama/ollama/pull/1604
2,048,514,225
PR_kwDOJ0Z1Ps5iXLIp
1,604
Updated syntax in client.py
{ "login": "omcodedthis", "id": 119602009, "node_id": "U_kgDOByD7WQ", "avatar_url": "https://avatars.githubusercontent.com/u/119602009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omcodedthis", "html_url": "https://github.com/omcodedthis", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2023-12-19T11:58:00
2024-01-18T22:27:41
2024-01-18T22:27:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1604", "html_url": "https://github.com/ollama/ollama/pull/1604", "diff_url": "https://github.com/ollama/ollama/pull/1604.diff", "patch_url": "https://github.com/ollama/ollama/pull/1604.patch", "merged_at": null }
* Updated the syntax for `heartbeat()` in `client.py`. * Functionality is maintained.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1604/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1874/comments
https://api.github.com/repos/ollama/ollama/issues/1874/events
https://github.com/ollama/ollama/pull/1874
2,073,027,262
PR_kwDOJ0Z1Ps5jnP__
1,874
Set corret CUDA minimum compute capability version
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-09T19:29:52
2024-01-09T19:37:22
2024-01-09T19:37:22
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1874", "html_url": "https://github.com/ollama/ollama/pull/1874", "diff_url": "https://github.com/ollama/ollama/pull/1874.diff", "patch_url": "https://github.com/ollama/ollama/pull/1874.patch", "merged_at": "2024-01-09T19:37:22" }
If you attempt to run the current CUDA build on compute capability 5.2 cards, you'll hit the following failure: cuBLAS error 15 at ggml-cuda.cu:7956: the requested functionality is not supported
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1874/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/567
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/567/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/567/comments
https://api.github.com/repos/ollama/ollama/issues/567/events
https://github.com/ollama/ollama/pull/567
1,907,689,851
PR_kwDOJ0Z1Ps5a7XZk
567
update submodule
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-21T20:13:28
2023-09-21T20:22:24
2023-09-21T20:22:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/567", "html_url": "https://github.com/ollama/ollama/pull/567", "diff_url": "https://github.com/ollama/ollama/pull/567.diff", "patch_url": "https://github.com/ollama/ollama/pull/567.patch", "merged_at": "2023-09-21T20:22:23" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/567/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4013
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4013/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4013/comments
https://api.github.com/repos/ollama/ollama/issues/4013/events
https://github.com/ollama/ollama/issues/4013
2,267,947,120
I_kwDOJ0Z1Ps6HLiBw
4,013
API Endpoint for Listing Loaded Running Models
{ "login": "strikeoncmputrz", "id": 648143, "node_id": "MDQ6VXNlcjY0ODE0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/648143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/strikeoncmputrz", "html_url": "https://github.com/strikeoncmputrz", "followers_url": "https://api.git...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-04-29T01:25:17
2024-05-14T00:17:37
2024-05-14T00:17:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be excellent to be able to interrogate the API to determine which models are running at any given time, rather than just seeing which checkpoints were pulled. I use a variety of clients to interact with Ollama's API. I sometimes run models with a long `keep_alive` and assume others have similar use cases...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4013/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2869
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2869/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2869/comments
https://api.github.com/repos/ollama/ollama/issues/2869/events
https://github.com/ollama/ollama/issues/2869
2,164,318,072
I_kwDOJ0Z1Ps6BAN94
2,869
Ollama doesn't use Radeon RX 6600
{ "login": "nameiwillforget", "id": 81373487, "node_id": "MDQ6VXNlcjgxMzczNDg3", "avatar_url": "https://avatars.githubusercontent.com/u/81373487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nameiwillforget", "html_url": "https://github.com/nameiwillforget", "followers_url": "https://api...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
22
2024-03-01T22:57:13
2024-09-06T20:08:20
2024-03-12T07:27:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using Arch Linux with the latest updates installed and ollama installed from its AUR package. When I use the Smaug model, it uses my CPU considerably but my GPU not at all: ![amdgpu](https://github.com/ollama/ollama/assets/81373487/be629472-a4eb-4f31-b8e9-726e2f9a8c21) I put the output of `ollama serve` and ollam...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2869/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2063
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2063/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2063/comments
https://api.github.com/repos/ollama/ollama/issues/2063/events
https://github.com/ollama/ollama/pull/2063
2,089,401,919
PR_kwDOJ0Z1Ps5kfBUs
2,063
Save and load sessions
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
1
2024-01-19T01:54:02
2024-02-12T20:10:33
2024-01-25T20:12:36
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2063", "html_url": "https://github.com/ollama/ollama/pull/2063", "diff_url": "https://github.com/ollama/ollama/pull/2063.diff", "patch_url": "https://github.com/ollama/ollama/pull/2063.patch", "merged_at": "2024-01-25T20:12:36" }
This change allows users to interactively save a session from the REPL, and then load it back up again later. It also adds a new `MESSAGE` command for Modelfiles so that users can build their own session which can be created with `ollama create`.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2063/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2063/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2286
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2286/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2286/comments
https://api.github.com/repos/ollama/ollama/issues/2286/events
https://github.com/ollama/ollama/issues/2286
2,109,272,858
I_kwDOJ0Z1Ps59uPMa
2,286
Codellama70b runs, but Codellama70b-Instruct spins forever after downloading
{ "login": "ewebgh33", "id": 123797054, "node_id": "U_kgDOB2D-Pg", "avatar_url": "https://avatars.githubusercontent.com/u/123797054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ewebgh33", "html_url": "https://github.com/ewebgh33", "followers_url": "https://api.github.com/users/ewebgh33/...
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": ...
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-01-31T04:44:58
2024-07-19T21:39:51
2024-07-19T21:39:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Wondering if this is a config issue or something else? IE are any of the additional model files that are downloaded alongside the 38gb main file, borked in any way? Ollama is via WSL in windows. `ollama run codellama:70b` works and gives me code `ollama run codellama:70b-instruct` downloads but has the spinn...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2286/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5703
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5703/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5703/comments
https://api.github.com/repos/ollama/ollama/issues/5703/events
https://github.com/ollama/ollama/issues/5703
2,408,951,725
I_kwDOJ0Z1Ps6Pla-t
5,703
Mixtral truncates output after year
{ "login": "alexander-fischer", "id": 7881637, "node_id": "MDQ6VXNlcjc4ODE2Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7881637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexander-fischer", "html_url": "https://github.com/alexander-fischer", "followers_url": "https:/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-07-15T14:56:29
2024-07-15T15:02:22
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Output from Mixtral stops after year of date. I could recreate the issue within ollama that appears in vLLM as well: https://github.com/vllm-project/vllm/issues/2464 The model I used was: `[mixtral:8x7b-instruct-v0.1-q8_0](https://ollama.com/library/mixtral:8x7b-instruct-v0.1-q8_0)` Y...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5703/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5703/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4612
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4612/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4612/comments
https://api.github.com/repos/ollama/ollama/issues/4612/events
https://github.com/ollama/ollama/pull/4612
2,315,484,894
PR_kwDOJ0Z1Ps5weDLL
4,612
added new community integration (headless-ollama)
{ "login": "nischalj10", "id": 55933460, "node_id": "MDQ6VXNlcjU1OTMzNDYw", "avatar_url": "https://avatars.githubusercontent.com/u/55933460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nischalj10", "html_url": "https://github.com/nischalj10", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
0
2024-05-24T13:58:18
2024-06-09T01:51:16
2024-06-09T01:51:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4612", "html_url": "https://github.com/ollama/ollama/pull/4612", "diff_url": "https://github.com/ollama/ollama/pull/4612.diff", "patch_url": "https://github.com/ollama/ollama/pull/4612.patch", "merged_at": "2024-06-09T01:51:16" }
ollama makes it wonderfully easy to build desktop apps that rely on local LLMs with its js and python libraries. > however, the user's system needs to have ollama already installed for the desktop app to use the libraries and make calls to the LLMs. Making users install ollama client separately isn't good UX tbh. th...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4612/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5364
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5364/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5364/comments
https://api.github.com/repos/ollama/ollama/issues/5364/events
https://github.com/ollama/ollama/pull/5364
2,381,118,549
PR_kwDOJ0Z1Ps5z7dYK
5,364
Document concurrent behavior and settings
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-06-28T20:16:56
2024-07-01T16:49:52
2024-07-01T16:49:49
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5364", "html_url": "https://github.com/ollama/ollama/pull/5364", "diff_url": "https://github.com/ollama/ollama/pull/5364.diff", "patch_url": "https://github.com/ollama/ollama/pull/5364.patch", "merged_at": "2024-07-01T16:49:49" }
Merge after #4218
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5364/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1662
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1662/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1662/comments
https://api.github.com/repos/ollama/ollama/issues/1662/events
https://github.com/ollama/ollama/pull/1662
2,052,940,782
PR_kwDOJ0Z1Ps5imXZC
1,662
Update README.md - Community Integrations - Obsidian Local GPT plugin
{ "login": "pfrankov", "id": 584632, "node_id": "MDQ6VXNlcjU4NDYzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/584632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfrankov", "html_url": "https://github.com/pfrankov", "followers_url": "https://api.github.com/users/pfranko...
[]
closed
false
null
[]
null
0
2023-12-21T19:13:07
2024-01-22T17:04:04
2024-01-22T17:04:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1662", "html_url": "https://github.com/ollama/ollama/pull/1662", "diff_url": "https://github.com/ollama/ollama/pull/1662.diff", "patch_url": "https://github.com/ollama/ollama/pull/1662.patch", "merged_at": null }
Local GPT plugin for Obsidian mainly relies on Ollama provider ![image](https://github.com/pfrankov/obsidian-local-gpt/assets/584632/724d4399-cb6c-4531-9f04-a1e5df2e3dad) ![image](https://github.com/jmorganca/ollama/assets/584632/199b11c2-dc2a-4168-8466-247af40b572c) But it's also possible to use OpenAI-like local s...
{ "login": "pfrankov", "id": 584632, "node_id": "MDQ6VXNlcjU4NDYzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/584632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfrankov", "html_url": "https://github.com/pfrankov", "followers_url": "https://api.github.com/users/pfranko...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1662/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6817
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6817/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6817/comments
https://api.github.com/repos/ollama/ollama/issues/6817/events
https://github.com/ollama/ollama/issues/6817
2,527,223,280
I_kwDOJ0Z1Ps6Wol3w
6,817
llama 3.1 8b params downloaded from huggingface, strange num_ctx behavior
{ "login": "akseg73", "id": 45887240, "node_id": "MDQ6VXNlcjQ1ODg3MjQw", "avatar_url": "https://avatars.githubusercontent.com/u/45887240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akseg73", "html_url": "https://github.com/akseg73", "followers_url": "https://api.github.com/users/akseg7...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-15T22:04:14
2024-12-02T22:51:10
2024-12-02T22:51:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I downloaded llama3.1 8b quantized to 8 bits from huggingface. It appears to have a default context size of 132k. Looking at numerous sources on the internet it seemed reasonable that in order to utilize the model i should reduce the context size with Parameter num_ctx 32k. However when i ut...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6817/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5729
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5729/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5729/comments
https://api.github.com/repos/ollama/ollama/issues/5729/events
https://github.com/ollama/ollama/pull/5729
2,411,984,656
PR_kwDOJ0Z1Ps51jus4
5,729
OpenAI: update message processing
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-07-16T20:25:11
2024-07-19T18:19:21
2024-07-19T18:19:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5729", "html_url": "https://github.com/ollama/ollama/pull/5729", "diff_url": "https://github.com/ollama/ollama/pull/5729.diff", "patch_url": "https://github.com/ollama/ollama/pull/5729.patch", "merged_at": "2024-07-19T18:19:20" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5729/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3386
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3386/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3386/comments
https://api.github.com/repos/ollama/ollama/issues/3386/events
https://github.com/ollama/ollama/issues/3386
2,213,170,172
I_kwDOJ0Z1Ps6D6kv8
3,386
Loading the model on VM from attached volumes is extremely slow
{ "login": "levy42", "id": 8012024, "node_id": "MDQ6VXNlcjgwMTIwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8012024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levy42", "html_url": "https://github.com/levy42", "followers_url": "https://api.github.com/users/levy42/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-03-28T12:52:29
2024-06-01T22:39:50
2024-06-01T22:39:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When pulling the model and running it the first time everything works fine. However, after deallocating the VM and starting it again (attaching a permanent disk with Ollama models downloaded) it takes more than 20 minutes to load any large model. It seems it's loading it to the CPU first with...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3386/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3386/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7672
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7672/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7672/comments
https://api.github.com/repos/ollama/ollama/issues/7672/events
https://github.com/ollama/ollama/issues/7672
2,660,218,043
I_kwDOJ0Z1Ps6ej7S7
7,672
Moondream v2 (CPU) crashes with images (post predict EOF error) on 0.4.1
{ "login": "rvkwi", "id": 122366820, "node_id": "U_kgDOB0srZA", "avatar_url": "https://avatars.githubusercontent.com/u/122366820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rvkwi", "html_url": "https://github.com/rvkwi", "followers_url": "https://api.github.com/users/rvkwi/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-14T22:31:06
2024-11-14T22:42:45
2024-11-14T22:42:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Moondream v2 seems to run into an issue with images on CPU with 0.4.1, resulting in `Error: POST predict: Post "http://127.0.0.1:33685/completion": EOF`. Does not seem to affect GPU. ``` ~ $ ollama run moondream:v2 "please describe this image /home/kwi/demo-2.png" --verbose Added imag...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7672/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1885
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1885/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1885/comments
https://api.github.com/repos/ollama/ollama/issues/1885/events
https://github.com/ollama/ollama/pull/1885
2,073,674,231
PR_kwDOJ0Z1Ps5jpcGg
1,885
Update submodule to `6efb8eb30e7025b168f3fda3ff83b9b386428ad6`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-10T06:19:01
2024-01-10T21:48:39
2024-01-10T21:48:38
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1885", "html_url": "https://github.com/ollama/ollama/pull/1885", "diff_url": "https://github.com/ollama/ollama/pull/1885.diff", "patch_url": "https://github.com/ollama/ollama/pull/1885.patch", "merged_at": "2024-01-10T21:48:38" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1885/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2996
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2996/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2996/comments
https://api.github.com/repos/ollama/ollama/issues/2996/events
https://github.com/ollama/ollama/issues/2996
2,175,154,079
I_kwDOJ0Z1Ps6Bpjef
2,996
ollama pull qwen:1.8b error:Error: Head "https://registry.ollama.ai/v2/library/qwen/blobs/sha256:1296b084ed6bc4c6eaee99255d73e9c715d38e0087b6467fd1c498b908180614": unexpected EOF
{ "login": "wuwenrui", "id": 20716568, "node_id": "MDQ6VXNlcjIwNzE2NTY4", "avatar_url": "https://avatars.githubusercontent.com/u/20716568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuwenrui", "html_url": "https://github.com/wuwenrui", "followers_url": "https://api.github.com/users/wuw...
[ { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
closed
false
null
[]
null
2
2024-03-08T02:24:22
2024-03-11T22:21:41
2024-03-11T22:21:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
ollama pull qwen:1.8b error: Error: Head "https://registry.ollama.ai/v2/library/qwen/blobs/sha256:1296b084ed6bc4c6eaee99255d73e9c715d38e0087b6467fd1c498b908180614": unexpected EOF
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2996/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/854
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/854/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/854/comments
https://api.github.com/repos/ollama/ollama/issues/854/events
https://github.com/ollama/ollama/issues/854
1,954,551,781
I_kwDOJ0Z1Ps50gBfl
854
Better Doc / Explanation and Examples of Template Syntax
{ "login": "redhermes", "id": 6583939, "node_id": "MDQ6VXNlcjY1ODM5Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/6583939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/redhermes", "html_url": "https://github.com/redhermes", "followers_url": "https://api.github.com/users/re...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
4
2023-10-20T15:41:20
2023-10-25T19:29:59
2023-10-25T19:29:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Really like ollama for is simple setup and usage with both a CLI and API. The only thing that has tripped me up is getting the modelfile template correct for an imported model. It could be my inexperience but the documentation seems very sparse. I have been unable to get the JackalopeAI (on HuggingFace) to run afte...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/854/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3615
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3615/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3615/comments
https://api.github.com/repos/ollama/ollama/issues/3615/events
https://github.com/ollama/ollama/pull/3615
2,239,963,708
PR_kwDOJ0Z1Ps5se07_
3,615
Install Ollama on OSTree systems
{ "login": "ericcurtin", "id": 1694275, "node_id": "MDQ6VXNlcjE2OTQyNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1694275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericcurtin", "html_url": "https://github.com/ericcurtin", "followers_url": "https://api.github.com/users...
[]
open
false
null
[]
null
3
2024-04-12T11:47:22
2024-04-14T09:26:49
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3615", "html_url": "https://github.com/ollama/ollama/pull/3615", "diff_url": "https://github.com/ollama/ollama/pull/3615.diff", "patch_url": "https://github.com/ollama/ollama/pull/3615.patch", "merged_at": null }
There's a large plethora of OSTree OSes in the Fedora family: Silverblue, Kinoite, CoreOS, IoT, Onyx, Sericea, Vauxite In the CentOS Stream family: Automotive Stream Distribution, CoreOS In the Red Hat family: Red Hat In-Vehicle Operating System, Red Hat Enterprise Linux CoreOS, RHEL for Edge Then the...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3615/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8030
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8030/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8030/comments
https://api.github.com/repos/ollama/ollama/issues/8030/events
https://github.com/ollama/ollama/pull/8030
2,730,861,255
PR_kwDOJ0Z1Ps6Evv7T
8,030
readme: include IBM Granite models
{ "login": "andresdanielmtz", "id": 103913163, "node_id": "U_kgDOBjGWyw", "avatar_url": "https://avatars.githubusercontent.com/u/103913163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andresdanielmtz", "html_url": "https://github.com/andresdanielmtz", "followers_url": "https://api.githu...
[]
open
false
null
[]
null
1
2024-12-10T18:17:15
2024-12-16T09:18:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8030", "html_url": "https://github.com/ollama/ollama/pull/8030", "diff_url": "https://github.com/ollama/ollama/pull/8030.diff", "patch_url": "https://github.com/ollama/ollama/pull/8030.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8030/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3109
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3109/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3109/comments
https://api.github.com/repos/ollama/ollama/issues/3109/events
https://github.com/ollama/ollama/issues/3109
2,184,258,885
I_kwDOJ0Z1Ps6CMSVF
3,109
OpenAI API and templates
{ "login": "pierreeliseeflory", "id": 46896737, "node_id": "MDQ6VXNlcjQ2ODk2NzM3", "avatar_url": "https://avatars.githubusercontent.com/u/46896737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierreeliseeflory", "html_url": "https://github.com/pierreeliseeflory", "followers_url": "https...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-03-13T15:11:40
2024-04-09T20:05:38
2024-03-15T11:17:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, Does the new OpenAI API compatible endpoint `/v1/chat/completions` uses the default templates defined in the Modefile ? Thank you
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3109/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7349
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7349/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7349/comments
https://api.github.com/repos/ollama/ollama/issues/7349/events
https://github.com/ollama/ollama/issues/7349
2,612,784,172
I_kwDOJ0Z1Ps6bu-ws
7,349
add termux compile instructions to web page
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7700262114, "node_id": ...
open
false
null
[]
null
1
2024-10-25T00:16:43
2024-11-04T19:18:44
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
pkg ugrade -y golang clang cmake libandroid-execinfo gzip git git clone https://github.com/ollama/ollama ollama cd ollama go generate ./... go build . cp ollama ~/../usr/bin this used to work to 0.3.13 then .14 the err came but i believe u change well , err will be gone , cmd working again .. greets ..
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7349/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7834
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7834/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7834/comments
https://api.github.com/repos/ollama/ollama/issues/7834/events
https://github.com/ollama/ollama/pull/7834
2,692,516,960
PR_kwDOJ0Z1Ps6DGwNn
7,834
server: fix Transport override
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[]
closed
false
null
[]
null
0
2024-11-25T22:48:36
2024-11-25T23:08:36
2024-11-25T23:08:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7834", "html_url": "https://github.com/ollama/ollama/pull/7834", "diff_url": "https://github.com/ollama/ollama/pull/7834.diff", "patch_url": "https://github.com/ollama/ollama/pull/7834.patch", "merged_at": "2024-11-25T23:08:34" }
This changes makeRequest to update the http client Transport if and only if testMakeRequestDialContext is set. This is to avoid overriding the default Transport when testMakeRequestDialContext is nil, which broke existing behavior, included proxies, timeouts, and other behaviors. Fixes #7829 Fixes #7788
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7834/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7834/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7627
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7627/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7627/comments
https://api.github.com/repos/ollama/ollama/issues/7627/events
https://github.com/ollama/ollama/issues/7627
2,651,692,740
I_kwDOJ0Z1Ps6eDZ7E
7,627
support multiple lora adapters
{ "login": "lyingbug", "id": 11257935, "node_id": "MDQ6VXNlcjExMjU3OTM1", "avatar_url": "https://avatars.githubusercontent.com/u/11257935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lyingbug", "html_url": "https://github.com/lyingbug", "followers_url": "https://api.github.com/users/lyi...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-11-12T10:08:50
2024-11-27T19:00:06
2024-11-27T19:00:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
llama.cpp support multiple adapters, see https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md why ollama support only one adapter? https://github.com/ollama/ollama/blob/65973ceb6417c2e2796fa59bd3225bc7bd79b403/llm/server.go#L203-L206
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7627/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/797/comments
https://api.github.com/repos/ollama/ollama/issues/797/events
https://github.com/ollama/ollama/issues/797
1,944,725,328
I_kwDOJ0Z1Ps5z6idQ
797
Support GPU on older NVIDIA GPU and CUDA drivers
{ "login": "Syulin7", "id": 37265556, "node_id": "MDQ6VXNlcjM3MjY1NTU2", "avatar_url": "https://avatars.githubusercontent.com/u/37265556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Syulin7", "html_url": "https://github.com/Syulin7", "followers_url": "https://api.github.com/users/Syulin...
[]
closed
false
null
[]
null
24
2023-10-16T09:01:24
2024-02-26T11:53:38
2023-11-28T21:26:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am testing using ollama on linux and docker, and its not using the GPU at all. it appears that ollma is not using the CUDA image. I resolved the issue by replacing the base image. https://github.com/jmorganca/ollama/blob/92578798bb1abcedd6bc99479d804f32d9ee2f6c/Dockerfile#L17-L23 change ubuntu:22.04 to nv...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/797/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/797/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8371
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8371/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8371/comments
https://api.github.com/repos/ollama/ollama/issues/8371/events
https://github.com/ollama/ollama/issues/8371
2,779,254,168
I_kwDOJ0Z1Ps6lqA2Y
8,371
ollama not working
{ "login": "Rachit199", "id": 141905808, "node_id": "U_kgDOCHVPkA", "avatar_url": "https://avatars.githubusercontent.com/u/141905808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rachit199", "html_url": "https://github.com/Rachit199", "followers_url": "https://api.github.com/users/Rachit...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
open
false
null
[]
null
1
2025-01-10T04:27:15
2025-01-10T23:57:31
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I installed ollama in ubuntu via curl command but not working when using ollama. So I check ollama version.>ollama -v ollama version is 0.0.0 Warning: client version is 0.5.0 ### OS Linux ### GPU _No response_ ### CPU AMD ### Ollama version _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8371/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5787
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5787/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5787/comments
https://api.github.com/repos/ollama/ollama/issues/5787/events
https://github.com/ollama/ollama/issues/5787
2,418,160,984
I_kwDOJ0Z1Ps6QIjVY
5,787
ollama run deepseek-coder-v2 creates gibberish output
{ "login": "flo-ivar", "id": 143725475, "node_id": "U_kgDOCJETow", "avatar_url": "https://avatars.githubusercontent.com/u/143725475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flo-ivar", "html_url": "https://github.com/flo-ivar", "followers_url": "https://api.github.com/users/flo-ivar/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-07-19T06:40:47
2024-09-17T01:39:45
2024-09-17T01:39:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I am trying to run the 16b ollama deepseek-coder-v2, which leads to a "gibberish" output. Strangely enough it works after a fresh download, but then after trying to run it in Aider it doesnt. ![image](https://github.com/user-attachments/assets/9e6df4f7-dc47-49bc-a306-2e73c73b4098) ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5787/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1247
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1247/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1247/comments
https://api.github.com/repos/ollama/ollama/issues/1247/events
https://github.com/ollama/ollama/issues/1247
2,007,161,530
I_kwDOJ0Z1Ps53otq6
1,247
Better validation for model names in `ollama create` and `ollama cp`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
closed
false
null
[]
null
1
2023-11-22T21:49:44
2023-11-29T20:54:30
2023-11-29T20:54:30
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Today creating `ollama create mymodel:my:tag` will work
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1247/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/490
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/490/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/490/comments
https://api.github.com/repos/ollama/ollama/issues/490/events
https://github.com/ollama/ollama/pull/490
1,886,657,684
PR_kwDOJ0Z1Ps5Z0rrI
490
Add OLLAMA_HOME environment variable support.
{ "login": "akhilcacharya", "id": 3621384, "node_id": "MDQ6VXNlcjM2MjEzODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3621384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akhilcacharya", "html_url": "https://github.com/akhilcacharya", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
2
2023-09-07T22:53:51
2023-11-03T16:57:16
2023-10-25T22:34:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/490", "html_url": "https://github.com/ollama/ollama/pull/490", "diff_url": "https://github.com/ollama/ollama/pull/490.diff", "patch_url": "https://github.com/ollama/ollama/pull/490.patch", "merged_at": null }
## Problem I'd like to run Ollama on my Linux server, but I have a small home directory disk. As a result, rather than changing the home directory to my mass storage pool, I propose adding the environment variable ```OLLAMA_HOME``` to set the top-level filepath for Ollama. ## Change Switch out os.UserHomeDir w...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/490/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5436
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5436/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5436/comments
https://api.github.com/repos/ollama/ollama/issues/5436/events
https://github.com/ollama/ollama/issues/5436
2,386,507,001
I_kwDOJ0Z1Ps6OPzT5
5,436
Updates to Phi-3 mini 4k/128k
{ "login": "Qualzz", "id": 35169816, "node_id": "MDQ6VXNlcjM1MTY5ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/35169816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qualzz", "html_url": "https://github.com/Qualzz", "followers_url": "https://api.github.com/users/Qualzz/fo...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-07-02T15:03:48
2024-07-02T20:34:30
2024-07-02T20:34:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Microsoft updated both checkpoints: [https://huggingface.co/microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) > Release Notes > This is an update over ...
{ "login": "Qualzz", "id": 35169816, "node_id": "MDQ6VXNlcjM1MTY5ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/35169816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qualzz", "html_url": "https://github.com/Qualzz", "followers_url": "https://api.github.com/users/Qualzz/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5436/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5436/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7777
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7777/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7777/comments
https://api.github.com/repos/ollama/ollama/issues/7777/events
https://github.com/ollama/ollama/pull/7777
2,679,045,060
PR_kwDOJ0Z1Ps6Cpn-0
7,777
ppc64le: corrected ioctls
{ "login": "stormljor", "id": 36227969, "node_id": "MDQ6VXNlcjM2MjI3OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/36227969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stormljor", "html_url": "https://github.com/stormljor", "followers_url": "https://api.github.com/users/...
[]
open
false
null
[]
null
4
2024-11-21T11:00:06
2025-01-28T00:51:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7777", "html_url": "https://github.com/ollama/ollama/pull/7777", "diff_url": "https://github.com/ollama/ollama/pull/7777.diff", "patch_url": "https://github.com/ollama/ollama/pull/7777.patch", "merged_at": null }
As described in #796 `ollama run` won't work on ppc64le out of the box, as the ioctl `TCSETS` is invalid. This PR changes the ioctl to `TCSETSF` while also moving it away from "magic numbers". According to man pages: ``` TCSETSF Equivalent to tcsetattr(fd, TCSAFLUSH, argp). ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7777/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7777/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/392
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/392/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/392/comments
https://api.github.com/repos/ollama/ollama/issues/392/events
https://github.com/ollama/ollama/pull/392
1,860,393,582
PR_kwDOJ0Z1Ps5YcQf_
392
add version
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-08-22T01:26:20
2023-08-22T16:50:28
2023-08-22T16:50:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/392", "html_url": "https://github.com/ollama/ollama/pull/392", "diff_url": "https://github.com/ollama/ollama/pull/392.diff", "patch_url": "https://github.com/ollama/ollama/pull/392.patch", "merged_at": "2023-08-22T16:50:25" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/392/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8274
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8274/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8274/comments
https://api.github.com/repos/ollama/ollama/issues/8274/events
https://github.com/ollama/ollama/issues/8274
2,764,372,182
I_kwDOJ0Z1Ps6kxPjW
8,274
Ollama hangs without timeout, Ollama model is consuming full CPU or GPU
{ "login": "ttww", "id": 3983391, "node_id": "MDQ6VXNlcjM5ODMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3983391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttww", "html_url": "https://github.com/ttww", "followers_url": "https://api.github.com/users/ttww/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-12-31T13:19:10
2025-01-01T17:01:09
2025-01-01T17:01:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Doing Ollama connections with langchain the API hangs, depending on the input. No information is loggend on the Ollama serve side with debug option 3. I have attached a test case (Python program, test image, and README) to reproduce it. [ollama_hang.tgz](https://github.com/user-attachment...
{ "login": "ttww", "id": 3983391, "node_id": "MDQ6VXNlcjM5ODMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3983391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttww", "html_url": "https://github.com/ttww", "followers_url": "https://api.github.com/users/ttww/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8274/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1892
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1892/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1892/comments
https://api.github.com/repos/ollama/ollama/issues/1892/events
https://github.com/ollama/ollama/issues/1892
2,074,082,789
I_kwDOJ0Z1Ps57n_3l
1,892
upgrade openchat
{ "login": "morandalex", "id": 9484568, "node_id": "MDQ6VXNlcjk0ODQ1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/9484568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morandalex", "html_url": "https://github.com/morandalex", "followers_url": "https://api.github.com/users...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-01-10T10:40:27
2024-01-11T16:52:21
2024-01-11T00:09:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
hello a new release of openchat was released : https://huggingface.co/openchat/openchat-3.5-0106#benchmarks
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1892/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6706
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6706/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6706/comments
https://api.github.com/repos/ollama/ollama/issues/6706/events
https://github.com/ollama/ollama/issues/6706
2,512,960,755
I_kwDOJ0Z1Ps6VyLzz
6,706
Reflection 70B has significant issue with the weights
{ "login": "gileneusz", "id": 34601970, "node_id": "MDQ6VXNlcjM0NjAxOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gileneusz", "html_url": "https://github.com/gileneusz", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-09-09T05:43:27
2024-09-12T01:18:15
2024-09-12T01:18:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The whole drama is described here: https://x.com/shinboson/status/1832933753837982024 sorry for recommending the model, I was unaware of that and easily got into the hype it's still possible that's just technical issue, but I'm suspicious: https://x.com/rohanpaul_ai/status/1833094994929897862/photo/1
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6706/reactions", "total_count": 9, "+1": 8, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6706/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8497
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8497/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8497/comments
https://api.github.com/repos/ollama/ollama/issues/8497/events
https://github.com/ollama/ollama/issues/8497
2,798,255,347
I_kwDOJ0Z1Ps6myfzz
8,497
Repository for tyllama/kevin?
{ "login": "Dim-Tim-1963", "id": 42923977, "node_id": "MDQ6VXNlcjQyOTIzOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/42923977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dim-Tim-1963", "html_url": "https://github.com/Dim-Tim-1963", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
0
2025-01-20T05:51:59
2025-01-20T07:49:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The ollama library has the tyllama/kevin model: https://ollama.com/tyllama/kevin The description says that it can be installed from the repository, with the ability to remember previous dialogs and learn from them. But I didn't find that repository. Does it still exist? Removed, renamed?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8497/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4412
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4412/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4412/comments
https://api.github.com/repos/ollama/ollama/issues/4412/events
https://github.com/ollama/ollama/pull/4412
2,293,931,601
PR_kwDOJ0Z1Ps5vUcKA
4,412
Document older win10 terminal problems
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
1
2024-05-13T22:10:03
2024-07-05T15:18:25
2024-07-05T15:18:22
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4412", "html_url": "https://github.com/ollama/ollama/pull/4412", "diff_url": "https://github.com/ollama/ollama/pull/4412.diff", "patch_url": "https://github.com/ollama/ollama/pull/4412.patch", "merged_at": "2024-07-05T15:18:22" }
We haven't found a workaround, so for now recommend updating. Fixes #3916
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4412/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4898
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4898/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4898/comments
https://api.github.com/repos/ollama/ollama/issues/4898/events
https://github.com/ollama/ollama/issues/4898
2,339,661,962
I_kwDOJ0Z1Ps6LdGiK
4,898
Error removing model
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
3
2024-06-07T06:11:34
2024-06-10T18:40:04
2024-06-10T18:40:04
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` ollama run wizardcoder:34b-python ollama rm wizardcoder:34b-python Error: remove /usr/share/ollama/.ollama/models/blobs/sha256-a168bedb9a09640289c5174690a6221adae48b75dc431a219923f052ef20d0af: no such file or directory ``` ### OS Linux ### GPU _No response_ ### CPU _No response_ #...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4898/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4898/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6321
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6321/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6321/comments
https://api.github.com/repos/ollama/ollama/issues/6321/events
https://github.com/ollama/ollama/issues/6321
2,461,338,329
I_kwDOJ0Z1Ps6StQrZ
6,321
Feature request : get probability distribution
{ "login": "Alireza3242", "id": 77293766, "node_id": "MDQ6VXNlcjc3MjkzNzY2", "avatar_url": "https://avatars.githubusercontent.com/u/77293766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alireza3242", "html_url": "https://github.com/Alireza3242", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-08-12T15:42:02
2024-09-02T23:00:00
2024-09-02T22:59:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a prompt and then i get an answer. In some part of answer is a JSON, something like this: ``` { res:"yes" } ``` or this: ``` { res:"no" } ``` I want to know the probability of token "yes" and "no". and use these probabilities in some algorithm.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6321/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2681
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2681/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2681/comments
https://api.github.com/repos/ollama/ollama/issues/2681/events
https://github.com/ollama/ollama/issues/2681
2,149,346,646
I_kwDOJ0Z1Ps6AHG1W
2,681
ollama 运行 orca-迷你模型完成后的问题
{ "login": "wxerada", "id": 160884705, "node_id": "U_kgDOCZbn4Q", "avatar_url": "https://avatars.githubusercontent.com/u/160884705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxerada", "html_url": "https://github.com/wxerada", "followers_url": "https://api.github.com/users/wxerada/foll...
[]
closed
false
null
[]
null
1
2024-02-22T15:35:33
2024-03-03T22:41:54
2024-02-22T16:03:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Error: Unable to load dynamic library: Unable to load dynamic server library: �Ҳ���ָ����ģ�顣
{ "login": "wxerada", "id": 160884705, "node_id": "U_kgDOCZbn4Q", "avatar_url": "https://avatars.githubusercontent.com/u/160884705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxerada", "html_url": "https://github.com/wxerada", "followers_url": "https://api.github.com/users/wxerada/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2681/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1997
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1997/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1997/comments
https://api.github.com/repos/ollama/ollama/issues/1997/events
https://github.com/ollama/ollama/issues/1997
2,081,151,447
I_kwDOJ0Z1Ps58C9nX
1,997
:back: Some kind of regression while running on some LlamaIndex versions (Kaggle & Killercoda)
{ "login": "adriens", "id": 5235127, "node_id": "MDQ6VXNlcjUyMzUxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adriens", "html_url": "https://github.com/adriens", "followers_url": "https://api.github.com/users/adriens/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
34
2024-01-15T02:52:36
2024-11-18T21:08:50
2024-05-10T01:03:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
# :grey_question: About While working on a `ollama` tutorial on Kaggle, since a few days, I faced a regression while working with LlamaIndex. Here is the output I could get on any model (worked everytime) ![image](https://github.com/langchain-ai/langchainjs/assets/5235127/89ebe9c2-55d4-41da-8b32-74d243759f2e) ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1997/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1015
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1015/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1015/comments
https://api.github.com/repos/ollama/ollama/issues/1015/events
https://github.com/ollama/ollama/pull/1015
1,978,789,553
PR_kwDOJ0Z1Ps5eq1mp
1,015
Update api.md
{ "login": "vmellgre", "id": 46565663, "node_id": "MDQ6VXNlcjQ2NTY1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/46565663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vmellgre", "html_url": "https://github.com/vmellgre", "followers_url": "https://api.github.com/users/vme...
[]
closed
false
null
[]
null
3
2023-11-06T10:24:18
2023-11-29T21:21:58
2023-11-29T21:21:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1015", "html_url": "https://github.com/ollama/ollama/pull/1015", "diff_url": "https://github.com/ollama/ollama/pull/1015.diff", "patch_url": "https://github.com/ollama/ollama/pull/1015.patch", "merged_at": null }
Fixed documentation, responds one token for streamed results
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1015/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5535/comments
https://api.github.com/repos/ollama/ollama/issues/5535/events
https://github.com/ollama/ollama/pull/5535
2,394,163,520
PR_kwDOJ0Z1Ps50nkSy
5,535
llm: remove ambiguous log message when placing an upper limit on predictions
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-07T18:21:26
2024-07-07T18:32:07
2024-07-07T18:32:05
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5535", "html_url": "https://github.com/ollama/ollama/pull/5535", "diff_url": "https://github.com/ollama/ollama/pull/5535.diff", "patch_url": "https://github.com/ollama/ollama/pull/5535.patch", "merged_at": "2024-07-07T18:32:05" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5535/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2619
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2619/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2619/comments
https://api.github.com/repos/ollama/ollama/issues/2619/events
https://github.com/ollama/ollama/pull/2619
2,145,336,350
PR_kwDOJ0Z1Ps5ncl29
2,619
API doc formatting updates
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2024-02-20T21:40:43
2024-05-07T17:49:02
2024-05-07T17:49:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2619", "html_url": "https://github.com/ollama/ollama/pull/2619", "diff_url": "https://github.com/ollama/ollama/pull/2619.diff", "patch_url": "https://github.com/ollama/ollama/pull/2619.patch", "merged_at": null }
- in preparation for rendering on ollama.com
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2619/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6902
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6902/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6902/comments
https://api.github.com/repos/ollama/ollama/issues/6902/events
https://github.com/ollama/ollama/issues/6902
2,540,292,373
I_kwDOJ0Z1Ps6XackV
6,902
No ollama model can recognize the referenced information.
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-09-21T14:05:25
2024-09-25T07:11:56
2024-09-25T07:11:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Scene One By calling a public cloud-based LLM model through an AI Agent, two documents exceeding 2000 words each are uploaded, and the input question is: Analyze the differences between the two documents. In this manner, the model can normally analyze the differences between the two documents...
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6902/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1294
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1294/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1294/comments
https://api.github.com/repos/ollama/ollama/issues/1294/events
https://github.com/ollama/ollama/pull/1294
2,013,342,661
PR_kwDOJ0Z1Ps5gfvnh
1,294
Allow setting parameters in the REPL
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-11-28T00:02:40
2023-11-29T17:56:43
2023-11-29T17:56:42
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1294", "html_url": "https://github.com/ollama/ollama/pull/1294", "diff_url": "https://github.com/ollama/ollama/pull/1294.diff", "patch_url": "https://github.com/ollama/ollama/pull/1294.patch", "merged_at": "2023-11-29T17:56:42" }
This change adds a new `/set parameter` command inside the repl so that you can change parameters without having to recreate a modelfile. I have changed the `/show parameters` command to also reflect any parameters that have been set, however I haven't yet changed `/show modelfile` which should spit out a new modelf...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1294/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3024
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3024/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3024/comments
https://api.github.com/repos/ollama/ollama/issues/3024/events
https://github.com/ollama/ollama/issues/3024
2,177,284,627
I_kwDOJ0Z1Ps6BxroT
3,024
Ollama not using GPU, falling back to CPU
{ "login": "kopigeek-labs", "id": 128293648, "node_id": "U_kgDOB6WbEA", "avatar_url": "https://avatars.githubusercontent.com/u/128293648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopigeek-labs", "html_url": "https://github.com/kopigeek-labs", "followers_url": "https://api.github.com/...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" }, { "id": 6677677816, "node_id": "LA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2024-03-09T15:59:20
2024-04-29T22:43:52
2024-04-12T22:18:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama via a docker container on Debian. For a llama2 model, my CPU utilization is at 100% while GPU remains at 0%. Here is my output from `docker logs ollama`: ``` time=2024-03-09T14:52:42.622Z level=INFO source=images.go:800 msg="total blobs: 0" time=2024-03-09T14:52:42.623Z level=INFO source=imag...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3024/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/155
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/155/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/155/comments
https://api.github.com/repos/ollama/ollama/issues/155/events
https://github.com/ollama/ollama/issues/155
1,815,125,416
I_kwDOJ0Z1Ps5sMJ2o
155
Where are the models pulled to?
{ "login": "m3kwong", "id": 888841, "node_id": "MDQ6VXNlcjg4ODg0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/888841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3kwong", "html_url": "https://github.com/m3kwong", "followers_url": "https://api.github.com/users/m3kwong/fo...
[]
closed
false
null
[]
null
8
2023-07-21T04:15:37
2024-07-27T10:25:17
2023-08-23T17:47:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It downloaded 7 gigs of stuff and i can't seem to find where it went. I want to download it. Any ideas?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/155/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/155/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5886
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5886/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5886/comments
https://api.github.com/repos/ollama/ollama/issues/5886/events
https://github.com/ollama/ollama/pull/5886
2,425,972,130
PR_kwDOJ0Z1Ps52QcxE
5,886
OpenAI: Add Usage to `v1/embeddings`
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-07-23T19:34:33
2024-08-01T22:49:39
2024-08-01T22:49:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5886", "html_url": "https://github.com/ollama/ollama/pull/5886", "diff_url": "https://github.com/ollama/ollama/pull/5886.diff", "patch_url": "https://github.com/ollama/ollama/pull/5886.patch", "merged_at": "2024-08-01T22:49:37" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5886/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1644
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1644/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1644/comments
https://api.github.com/repos/ollama/ollama/issues/1644/events
https://github.com/ollama/ollama/pull/1644
2,051,392,116
PR_kwDOJ0Z1Ps5ihEHz
1,644
Use cuda base image for final docker image
{ "login": "djmaze", "id": 7229, "node_id": "MDQ6VXNlcjcyMjk=", "avatar_url": "https://avatars.githubusercontent.com/u/7229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djmaze", "html_url": "https://github.com/djmaze", "followers_url": "https://api.github.com/users/djmaze/followers", ...
[]
closed
false
null
[]
null
8
2023-12-20T22:34:01
2024-01-27T01:26:02
2024-01-27T01:26:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1644", "html_url": "https://github.com/ollama/ollama/pull/1644", "diff_url": "https://github.com/ollama/ollama/pull/1644.diff", "patch_url": "https://github.com/ollama/ollama/pull/1644.patch", "merged_at": null }
This is necessary so cuda works at all.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1644/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1644/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1642
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1642/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1642/comments
https://api.github.com/repos/ollama/ollama/issues/1642/events
https://github.com/ollama/ollama/pull/1642
2,051,211,722
PR_kwDOJ0Z1Ps5igboL
1,642
Add Cache option #1573
{ "login": "K0IN", "id": 19688162, "node_id": "MDQ6VXNlcjE5Njg4MTYy", "avatar_url": "https://avatars.githubusercontent.com/u/19688162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/K0IN", "html_url": "https://github.com/K0IN", "followers_url": "https://api.github.com/users/K0IN/followers"...
[]
closed
false
null
[]
null
11
2023-12-20T20:24:14
2024-08-18T12:01:14
2023-12-22T22:16:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1642", "html_url": "https://github.com/ollama/ollama/pull/1642", "diff_url": "https://github.com/ollama/ollama/pull/1642.diff", "patch_url": "https://github.com/ollama/ollama/pull/1642.patch", "merged_at": "2023-12-22T22:16:20" }
This PR, adds the API option "cache", that allows the llama.cpp server to cache our prompt Eval and the response. This speed-up follow-up calls immensely for some models, if you use it over the API, with the same prompt (or even partial ones), it will speed up subsequent calls, since it skips the evaluation of the pro...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1642/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1642/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2107
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2107/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2107/comments
https://api.github.com/repos/ollama/ollama/issues/2107/events
https://github.com/ollama/ollama/issues/2107
2,091,949,934
I_kwDOJ0Z1Ps58sJ9u
2,107
Crash upon loading any model with the ROCm GPU
{ "login": "ThatOneCalculator", "id": 44733677, "node_id": "MDQ6VXNlcjQ0NzMzNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/44733677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThatOneCalculator", "html_url": "https://github.com/ThatOneCalculator", "followers_url": "https...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
11
2024-01-20T07:40:46
2024-01-29T23:50:08
2024-01-29T23:47:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Stacktrace: ``` llm_load_vocab: special tokens definition check successful ( 259/32000 ). llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merge...
{ "login": "ThatOneCalculator", "id": 44733677, "node_id": "MDQ6VXNlcjQ0NzMzNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/44733677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThatOneCalculator", "html_url": "https://github.com/ThatOneCalculator", "followers_url": "https...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2107/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/556
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/556/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/556/comments
https://api.github.com/repos/ollama/ollama/issues/556/events
https://github.com/ollama/ollama/pull/556
1,905,373,816
PR_kwDOJ0Z1Ps5azgGR
556
pack in cuda libs
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-09-20T16:42:58
2023-09-20T22:02:38
2023-09-20T22:02:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/556", "html_url": "https://github.com/ollama/ollama/pull/556", "diff_url": "https://github.com/ollama/ollama/pull/556.diff", "patch_url": "https://github.com/ollama/ollama/pull/556.patch", "merged_at": "2023-09-20T22:02:37" }
This change packs CUDA libs into the llama runner and tells the runner to use those libs. Here is the example generate in my case. ``` go generate ./... ```
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/556/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8653
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8653/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8653/comments
https://api.github.com/repos/ollama/ollama/issues/8653/events
https://github.com/ollama/ollama/issues/8653
2,817,878,044
I_kwDOJ0Z1Ps6n9Wgc
8,653
Latest pre-built Ollama binaries (cuda 12.x) do not come with "oob" support for 5.x architecture
{ "login": "RKouchoo", "id": 19159026, "node_id": "MDQ6VXNlcjE5MTU5MDI2", "avatar_url": "https://avatars.githubusercontent.com/u/19159026?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RKouchoo", "html_url": "https://github.com/RKouchoo", "followers_url": "https://api.github.com/users/RKo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
1
2025-01-29T11:00:37
2025-01-29T23:55:30
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### "oob" support for 5.x architecture is missing on prebuilt binaries Hello, I ended up needing some more power so I threw a spare Quadro M5000 into my AI rig only to find it was not being utilsed at all. I did the usual checks and the card has compute capability 5.2 (confirmed compatible in the support matrix [here...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8653/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1483
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1483/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1483/comments
https://api.github.com/repos/ollama/ollama/issues/1483/events
https://github.com/ollama/ollama/pull/1483
2,038,214,542
PR_kwDOJ0Z1Ps5h0Vos
1,483
retry on concurrent request failure
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-12-12T17:10:54
2023-12-12T17:14:36
2023-12-12T17:14:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1483", "html_url": "https://github.com/ollama/ollama/pull/1483", "diff_url": "https://github.com/ollama/ollama/pull/1483.diff", "patch_url": "https://github.com/ollama/ollama/pull/1483.patch", "merged_at": "2023-12-12T17:14:35" }
- remove parallel - retry concurrent requests on failure
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1483/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6927
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6927/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6927/comments
https://api.github.com/repos/ollama/ollama/issues/6927/events
https://github.com/ollama/ollama/issues/6927
2,544,357,555
I_kwDOJ0Z1Ps6Xp9Cz
6,927
Why Is n_ctx in log Always Four Times the num_ctx Value in ModelFIle When Building qwen2.5-coder-7b-instruct-q5_k_m.gguf?
{ "login": "XiongDaowen", "id": 87518017, "node_id": "MDQ6VXNlcjg3NTE4MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/87518017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiongDaowen", "html_url": "https://github.com/XiongDaowen", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-09-24T05:26:34
2024-09-24T07:12:01
2024-09-24T07:12:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I built qwen2.5-coder-7b-instruct-q5_k_m.gguf using the modelfile and set PARAMETER num_ctx 4096, the log output showed llama_new_context_with_model: n_ctx = 16384. After setting num_ctx to different values, I noticed that n_ctx is always 4 times the value of num_ctx. Why is this happening?...
{ "login": "XiongDaowen", "id": 87518017, "node_id": "MDQ6VXNlcjg3NTE4MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/87518017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiongDaowen", "html_url": "https://github.com/XiongDaowen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6927/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5140
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5140/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5140/comments
https://api.github.com/repos/ollama/ollama/issues/5140/events
https://github.com/ollama/ollama/issues/5140
2,362,279,633
I_kwDOJ0Z1Ps6MzYbR
5,140
Chat template not yet supported for Deepseek-Coder-V2 lite
{ "login": "Joly0", "id": 13993216, "node_id": "MDQ6VXNlcjEzOTkzMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Joly0", "html_url": "https://github.com/Joly0", "followers_url": "https://api.github.com/users/Joly0/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA...
closed
false
null
[]
null
1
2024-06-19T12:37:36
2024-06-19T18:46:11
2024-06-19T18:46:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Whenever i try to chat with the llm through open-webui and ollama, i get this in the logs of ollama: `ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses | tid="2240191...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5140/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/942
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/942/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/942/comments
https://api.github.com/repos/ollama/ollama/issues/942/events
https://github.com/ollama/ollama/issues/942
1,966,688,244
I_kwDOJ0Z1Ps51OUf0
942
A question on memory
{ "login": "pexus", "id": 1809523, "node_id": "MDQ6VXNlcjE4MDk1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pexus", "html_url": "https://github.com/pexus", "followers_url": "https://api.github.com/users/pexus/follower...
[]
closed
false
null
[]
null
2
2023-10-28T18:01:38
2023-10-28T20:43:34
2023-10-28T20:43:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello this is a question - with regards to the memory spec on running the OSS LLM - see below: _Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models._ Is the reference to Memory requirement for GPU or the Main Memory (CPU) ? or a combination ...
{ "login": "pexus", "id": 1809523, "node_id": "MDQ6VXNlcjE4MDk1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pexus", "html_url": "https://github.com/pexus", "followers_url": "https://api.github.com/users/pexus/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/942/timeline
null
completed
false