Dataset columns (name: type, observed range):
url: string, length 51-54
repository_url: string, 1 distinct value
labels_url: string, length 65-68
comments_url: string, length 60-63
events_url: string, length 58-61
html_url: string, length 39-44
id: int64, 1.78B-2.82B
node_id: string, length 18-19
number: int64, 1-8.69k
title: string, length 1-382
user: dict
labels: list, length 0-5
state: string, 2 distinct values
locked: bool, 1 class
assignee: dict
assignees: list, length 0-2
milestone: null
comments: int64, 0-323
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string, 4 distinct values
sub_issues_summary: dict
active_lock_reason: null
draft: bool, 2 classes
pull_request: dict
body: string, length 2-118k
closed_by: dict
reactions: dict
timeline_url: string, length 60-63
performed_via_github_app: null
state_reason: string, 4 distinct values
is_pull_request: bool, 2 classes
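The preview rows that follow match the JSON the public GitHub REST API returns for ollama/ollama issues, so a minimal sketch of how one such record could be fetched and reduced to a few of the columns above is shown here. This is an assumption about the data's origin, not part of the dataset itself; it relies on the `requests` library and network access, and it treats `is_pull_request` as a column derived from the presence of the API's `pull_request` key rather than a field the API returns directly.

```python
import requests

# Sketch only: fetch one of the records listed below from the public
# GitHub REST API and map it onto a subset of the schema columns.
API_URL = "https://api.github.com/repos/ollama/ollama/issues/4714"

resp = requests.get(
    API_URL,
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
issue = resp.json()

record = {
    "number": issue["number"],
    "title": issue["title"],
    "state": issue["state"],
    "labels": [label["name"] for label in issue["labels"]],
    "author_association": issue["author_association"],
    "created_at": issue["created_at"],
    "closed_at": issue["closed_at"],
    # Plain issues carry no "pull_request" key, while issues opened as pull
    # requests do; the boolean column is assumed to be derived that way.
    "is_pull_request": "pull_request" in issue,
}
print(record)
```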
https://api.github.com/repos/ollama/ollama/issues/4714
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4714/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4714/comments
https://api.github.com/repos/ollama/ollama/issues/4714/events
https://github.com/ollama/ollama/issues/4714
2,324,669,902
I_kwDOJ0Z1Ps6Kj6XO
4,714
In macOS Terminal.app, single Japanese character at the end of ongoing line disappears.
{ "login": "tokyohandsome", "id": 34906599, "node_id": "MDQ6VXNlcjM0OTA2NTk5", "avatar_url": "https://avatars.githubusercontent.com/u/34906599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tokyohandsome", "html_url": "https://github.com/tokyohandsome", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[ { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api....
null
0
2024-05-30T04:26:22
2024-05-30T23:25:13
2024-05-30T23:25:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? First, I'd like to thank you for fixing the Japanese and other multi-byte (double-width) character issues. Output of Japanese is much better than before. However, while I'm testing a couple of Japanese LLM's I found another issue which does not seem to be related to LLM model. When a sentenc...
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4714/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5717
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5717/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5717/comments
https://api.github.com/repos/ollama/ollama/issues/5717/events
https://github.com/ollama/ollama/pull/5717
2,410,170,666
PR_kwDOJ0Z1Ps51dl4b
5,717
server: omit model system prompt if empty
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-16T04:14:45
2024-07-16T18:09:02
2024-07-16T18:09:00
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5717", "html_url": "https://github.com/ollama/ollama/pull/5717", "diff_url": "https://github.com/ollama/ollama/pull/5717.diff", "patch_url": "https://github.com/ollama/ollama/pull/5717.patch", "merged_at": "2024-07-16T18:09:00" }
The model's system prompt (defined by the `SYSTEM` Modelfile command) will be templated out even if empty currently. This fixes the issue so that it is only templated if not empty.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5717/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4023
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4023/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4023/comments
https://api.github.com/repos/ollama/ollama/issues/4023/events
https://github.com/ollama/ollama/pull/4023
2,268,617,892
PR_kwDOJ0Z1Ps5t_agt
4,023
fix(cli): unable to use CLI within the container
{ "login": "BlackHole1", "id": 8198408, "node_id": "MDQ6VXNlcjgxOTg0MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8198408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BlackHole1", "html_url": "https://github.com/BlackHole1", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
5
2024-04-29T10:03:47
2024-05-07T01:43:55
2024-05-06T21:53:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4023", "html_url": "https://github.com/ollama/ollama/pull/4023", "diff_url": "https://github.com/ollama/ollama/pull/4023.diff", "patch_url": "https://github.com/ollama/ollama/pull/4023.patch", "merged_at": null }
In the container, `OLLAMA_HOST` is set by default to `0.0.0.0` (ref: [Dockerfile#L137]), which is fine when starting the server. However, as a client, it is must to use `127.0.0.1` or `localhost` for requests. fix: #3521 #1337 maybe fix: #3526 [Dockerfile#L137]: https://github.com/ollama/ollama/blob/7e432cdfac51...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4023/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1321
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1321/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1321/comments
https://api.github.com/repos/ollama/ollama/issues/1321/events
https://github.com/ollama/ollama/pull/1321
2,017,280,524
PR_kwDOJ0Z1Ps5gtNWe
1,321
Fixed cuda repo location for rhel os
{ "login": "jeremiahbuckley", "id": 17296746, "node_id": "MDQ6VXNlcjE3Mjk2NzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/17296746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeremiahbuckley", "html_url": "https://github.com/jeremiahbuckley", "followers_url": "https://api...
[]
closed
false
null
[]
null
0
2023-11-29T19:28:42
2023-11-29T19:55:15
2023-11-29T19:55:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1321", "html_url": "https://github.com/ollama/ollama/pull/1321", "diff_url": "https://github.com/ollama/ollama/pull/1321.diff", "patch_url": "https://github.com/ollama/ollama/pull/1321.patch", "merged_at": "2023-11-29T19:55:15" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1321/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3450
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3450/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3450/comments
https://api.github.com/repos/ollama/ollama/issues/3450/events
https://github.com/ollama/ollama/issues/3450
2,220,003,514
I_kwDOJ0Z1Ps6EUpC6
3,450
I want to make a opensource prompt and response database .
{ "login": "hemangjoshi37a", "id": 12392345, "node_id": "MDQ6VXNlcjEyMzkyMzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hemangjoshi37a", "html_url": "https://github.com/hemangjoshi37a", "followers_url": "https://api.gi...
[]
open
false
null
[]
null
0
2024-04-02T09:24:47
2024-04-19T15:41:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? In this I want users to give their consent about open sourcing their prompt-response pairs to a centralized database from which everyone can train their model and using this we can infinitely improve our models. ### How should we solve this? adding a consent tick check box in the...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3450/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6921
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6921/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6921/comments
https://api.github.com/repos/ollama/ollama/issues/6921/events
https://github.com/ollama/ollama/issues/6921
2,543,021,522
I_kwDOJ0Z1Ps6Xk23S
6,921
Ollam build error wih CUDA on Jetson Orin (CUDA v12.6)
{ "login": "jarek7777", "id": 72649794, "node_id": "MDQ6VXNlcjcyNjQ5Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/72649794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarek7777", "html_url": "https://github.com/jarek7777", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-09-23T15:34:13
2024-09-25T00:17:35
2024-09-25T00:17:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Error: llama runner process has terminated: CUDA error: the provided PTX was compiled with an unsupported toolchain. current device: 0, in function ggml_cuda_compute_forward at /ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2326 err /ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:102: CUDA error...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6921/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6453
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6453/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6453/comments
https://api.github.com/repos/ollama/ollama/issues/6453/events
https://github.com/ollama/ollama/issues/6453
2,478,130,586
I_kwDOJ0Z1Ps6TtUWa
6,453
Inconsistent GPU Usage
{ "login": "gru3zi", "id": 44057919, "node_id": "MDQ6VXNlcjQ0MDU3OTE5", "avatar_url": "https://avatars.githubusercontent.com/u/44057919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gru3zi", "html_url": "https://github.com/gru3zi", "followers_url": "https://api.github.com/users/gru3zi/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-08-21T14:03:09
2024-08-21T20:13:34
2024-08-21T19:53:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have been happily using Ollama for sometime with my Dual RTX 3090's with an NV-Link Adaptor. Recently ive been finding the output to be quite slow. After checking both the outputs of 'ollama ps' and nvidia-smi I found that my GPUs are not fully being utilised anymore. It seems to split between...
{ "login": "gru3zi", "id": 44057919, "node_id": "MDQ6VXNlcjQ0MDU3OTE5", "avatar_url": "https://avatars.githubusercontent.com/u/44057919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gru3zi", "html_url": "https://github.com/gru3zi", "followers_url": "https://api.github.com/users/gru3zi/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6453/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7871
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7871/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7871/comments
https://api.github.com/repos/ollama/ollama/issues/7871/events
https://github.com/ollama/ollama/issues/7871
2,701,943,335
I_kwDOJ0Z1Ps6hDGIn
7,871
pydantic issue with converted PNG images
{ "login": "ibagur", "id": 2979615, "node_id": "MDQ6VXNlcjI5Nzk2MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2979615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibagur", "html_url": "https://github.com/ibagur", "followers_url": "https://api.github.com/users/ibagur/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-11-28T11:58:42
2024-11-28T16:32:24
2024-11-28T12:12:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Directly feeding a PNG image does not work (`failed to decode image: image: unknown format`), so until recently, I was using the code bellow in order to encode the image file and it used to work fine: ``` import base64 import io from PIL import Image import ollama def encode_image_to...
{ "login": "ibagur", "id": 2979615, "node_id": "MDQ6VXNlcjI5Nzk2MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2979615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibagur", "html_url": "https://github.com/ibagur", "followers_url": "https://api.github.com/users/ibagur/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7871/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2669
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2669/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2669/comments
https://api.github.com/repos/ollama/ollama/issues/2669/events
https://github.com/ollama/ollama/issues/2669
2,148,447,138
I_kwDOJ0Z1Ps6ADrOi
2,669
Is it possible to add Orion model into downloadable model list
{ "login": "renillhuang", "id": 24711416, "node_id": "MDQ6VXNlcjI0NzExNDE2", "avatar_url": "https://avatars.githubusercontent.com/u/24711416?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renillhuang", "html_url": "https://github.com/renillhuang", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2024-02-22T07:56:24
2024-03-12T02:06:16
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After create Orion14B-chat model, is it possible to upload to ollama project? And let other users could choose and download/run it locally? Looking forward to reply, thanks.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2669/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1296
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1296/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1296/comments
https://api.github.com/repos/ollama/ollama/issues/1296/events
https://github.com/ollama/ollama/issues/1296
2,013,551,136
I_kwDOJ0Z1Ps54BFog
1,296
All models gone?
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
[]
closed
false
null
[]
null
2
2023-11-28T03:33:49
2023-11-28T14:56:07
2023-11-28T14:56:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have no idea what happened. Started working ran ollama run alfred Error: could not connect to ollama server, run 'ollama serve' to start it (alfred was previously installed) ollama serve & ollama run alfred started downloading it! Olama list all the models are gone. in /usr/share/ollama/.ollama/models/b...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1296/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4618
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4618/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4618/comments
https://api.github.com/repos/ollama/ollama/issues/4618/events
https://github.com/ollama/ollama/issues/4618
2,315,979,851
I_kwDOJ0Z1Ps6KCwxL
4,618
Extended lora support
{ "login": "AncientMystic", "id": 62780271, "node_id": "MDQ6VXNlcjYyNzgwMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AncientMystic", "html_url": "https://github.com/AncientMystic", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
3
2024-05-24T18:14:25
2024-07-10T19:37:38
2024-07-10T19:37:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would it be possible to extend lora support so that they might be pulled like models and loaded with more ease such as with command Ollama run model adapter lora Or something similar for ease of use and easier mixing and matching using different loras with different models instead of having to hard code it into the...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4618/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4618/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3576
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3576/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3576/comments
https://api.github.com/repos/ollama/ollama/issues/3576/events
https://github.com/ollama/ollama/issues/3576
2,235,575,326
I_kwDOJ0Z1Ps6FQCwe
3,576
Support command r plus
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-04-10T13:14:59
2024-04-17T00:50:31
2024-04-17T00:50:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? https://huggingface.co/CohereForAI/c4ai-command-r-plus
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3576/reactions", "total_count": 16, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/3576/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7271
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7271/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7271/comments
https://api.github.com/repos/ollama/ollama/issues/7271/events
https://github.com/ollama/ollama/pull/7271
2,599,236,565
PR_kwDOJ0Z1Ps5_LcRJ
7,271
Refactor context shift flag for infinite text generation comment
{ "login": "YassineOsip", "id": 44472826, "node_id": "MDQ6VXNlcjQ0NDcyODI2", "avatar_url": "https://avatars.githubusercontent.com/u/44472826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YassineOsip", "html_url": "https://github.com/YassineOsip", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
1
2024-10-19T14:24:40
2024-10-21T20:50:04
2024-10-21T20:50:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7271", "html_url": "https://github.com/ollama/ollama/pull/7271", "diff_url": "https://github.com/ollama/ollama/pull/7271.diff", "patch_url": "https://github.com/ollama/ollama/pull/7271.patch", "merged_at": null }
(should be This pull request includes a minor correction to a comment in the `llama/common.h` file. The change fixes a typo in the comment for the `ctx_shift` parameter. * [`llama/common.h`](diffhunk://#diff-670d1015c5d0908848f1f635691ebcc8372dc9a337ca0e93ad02abb72df998e3L275-R275): Corrected a typo in the comment ...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7271/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5460
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5460/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5460/comments
https://api.github.com/repos/ollama/ollama/issues/5460/events
https://github.com/ollama/ollama/issues/5460
2,388,518,933
I_kwDOJ0Z1Ps6OXegV
5,460
custom model: error loading model: check_tensor_dims: tensor 'blk.0.ffn_norm.weight' not found
{ "login": "finnbusse", "id": 110921874, "node_id": "U_kgDOBpyIkg", "avatar_url": "https://avatars.githubusercontent.com/u/110921874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finnbusse", "html_url": "https://github.com/finnbusse", "followers_url": "https://api.github.com/users/finnbu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-07-03T12:35:45
2024-07-26T18:18:50
2024-07-26T18:18:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I recently trained a custom AI model using Google Colab with Alpaca and Unsloth. The training process was successful, but when attempting to run the model using Ollama, I encountered an error. `C:\Users\Finn\Downloads>ollama run test2 Error: llama runner process has terminated: exit status ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5460/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/5460/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1164
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1164/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1164/comments
https://api.github.com/repos/ollama/ollama/issues/1164/events
https://github.com/ollama/ollama/pull/1164
1,998,064,614
PR_kwDOJ0Z1Ps5fseVd
1,164
update faq
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-11-17T01:10:37
2023-11-17T01:20:20
2023-11-17T01:20:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1164", "html_url": "https://github.com/ollama/ollama/pull/1164", "diff_url": "https://github.com/ollama/ollama/pull/1164.diff", "patch_url": "https://github.com/ollama/ollama/pull/1164.patch", "merged_at": "2023-11-17T01:20:19" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1164/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/531
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/531/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/531/comments
https://api.github.com/repos/ollama/ollama/issues/531/events
https://github.com/ollama/ollama/pull/531
1,897,072,760
PR_kwDOJ0Z1Ps5aXrGo
531
set request.ContentLength
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-14T18:11:06
2023-09-14T20:33:12
2023-09-14T20:33:11
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/531", "html_url": "https://github.com/ollama/ollama/pull/531", "diff_url": "https://github.com/ollama/ollama/pull/531.diff", "patch_url": "https://github.com/ollama/ollama/pull/531.patch", "merged_at": "2023-09-14T20:33:11" }
This informs the HTTP client the content length is known and disables chunked Transfer-Encoding
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/531/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7052
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7052/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7052/comments
https://api.github.com/repos/ollama/ollama/issues/7052/events
https://github.com/ollama/ollama/issues/7052
2,557,928,701
I_kwDOJ0Z1Ps6YduT9
7,052
Capability checking does not consider custom templates
{ "login": "kyRobot", "id": 9490543, "node_id": "MDQ6VXNlcjk0OTA1NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/9490543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyRobot", "html_url": "https://github.com/kyRobot", "followers_url": "https://api.github.com/users/kyRobot/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-10-01T00:41:31
2024-10-01T00:44:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When sending a completion request to a model that supports FIM tasks - e.g Qwen 2.5 Coder 7B base - Ollama rejects the request because the "model does not support insert". Passing `raw` with the necessary prompt format to the model does work for completion, so the issue is not with the model,...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7052/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7052/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7178
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7178/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7178/comments
https://api.github.com/repos/ollama/ollama/issues/7178/events
https://github.com/ollama/ollama/issues/7178
2,582,403,527
I_kwDOJ0Z1Ps6Z7FnH
7,178
Qwen2.5-Math support
{ "login": "fzyzcjy", "id": 5236035, "node_id": "MDQ6VXNlcjUyMzYwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fzyzcjy", "html_url": "https://github.com/fzyzcjy", "followers_url": "https://api.github.com/users/fzyzcjy/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-10-12T01:46:09
2024-10-13T05:07:57
2024-10-13T05:07:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi thanks for the library! It seems that ollama supports qwen2.5 and qwen2.5-coder, but not qwen2.5-math (a quick search only gives qwen2-math which is older model https://ollama.com/search?q=qwen2.5-math). Related: https://github.com/ollama/ollama/issues/6916 Related: https://github.com/ollama/ollama/issues/6889
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7178/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5752
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5752/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5752/comments
https://api.github.com/repos/ollama/ollama/issues/5752/events
https://github.com/ollama/ollama/pull/5752
2,414,269,197
PR_kwDOJ0Z1Ps51rXij
5,752
OpenAI: Function Based Testing
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-07-17T18:17:00
2024-07-21T04:39:36
2024-07-19T18:37:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5752", "html_url": "https://github.com/ollama/ollama/pull/5752", "diff_url": "https://github.com/ollama/ollama/pull/5752.diff", "patch_url": "https://github.com/ollama/ollama/pull/5752.patch", "merged_at": "2024-07-19T18:37:13" }
Distinguish tests by function, testing requests and error forwarding captureRequestMiddleware catches the request after it has been converted by the functionality middleware, before hitting a mock endpoint returning 200 ResponseRecorder catches any errors that are returned in the response body immediately by the ...
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5752/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6073
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6073/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6073/comments
https://api.github.com/repos/ollama/ollama/issues/6073/events
https://github.com/ollama/ollama/issues/6073
2,437,787,400
I_kwDOJ0Z1Ps6RTa8I
6,073
Model request: Llama3-Athene-70B
{ "login": "joliss", "id": 524783, "node_id": "MDQ6VXNlcjUyNDc4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joliss", "html_url": "https://github.com/joliss", "followers_url": "https://api.github.com/users/joliss/follow...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2024-07-30T13:01:04
2024-08-17T21:18:41
2024-08-17T21:18:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be lovely to have the Llama3-based post-trained Athene-70b model available on Ollama! It is currently the highest-ranked open 70b model on the [LMSYS leaderboard](https://chat.lmsys.org/?leaderboard). https://nexusflow.ai/blogs/athene https://huggingface.co/Nexusflow/Athene-70B Somebody also published a...
{ "login": "joliss", "id": 524783, "node_id": "MDQ6VXNlcjUyNDc4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joliss", "html_url": "https://github.com/joliss", "followers_url": "https://api.github.com/users/joliss/follow...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6073/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6073/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4050
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4050/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4050/comments
https://api.github.com/repos/ollama/ollama/issues/4050/events
https://github.com/ollama/ollama/issues/4050
2,271,318,889
I_kwDOJ0Z1Ps6HYZNp
4,050
Ollama after 30 minutes start to be very very slow to answer the questions
{ "login": "nunostiles", "id": 168548263, "node_id": "U_kgDOCgvXpw", "avatar_url": "https://avatars.githubusercontent.com/u/168548263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nunostiles", "html_url": "https://github.com/nunostiles", "followers_url": "https://api.github.com/users/nun...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng...
closed
false
null
[]
null
10
2024-04-30T12:20:22
2024-12-19T23:46:07
2024-12-19T23:46:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I've already tried with several different models, but the issue is always persisting, after ~30 minutes it keeps taking ages to answer to questions, even with saved models it happens. Is there anything that I can do? it's in fact a bug? On the first 30 minutes it runs normally without any slow...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4050/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3682
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3682/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3682/comments
https://api.github.com/repos/ollama/ollama/issues/3682/events
https://github.com/ollama/ollama/pull/3682
2,246,845,425
PR_kwDOJ0Z1Ps5s2LF1
3,682
quantize any fp16/fp32 model
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-04-16T20:37:34
2024-05-07T22:20:51
2024-05-07T22:20:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3682", "html_url": "https://github.com/ollama/ollama/pull/3682", "diff_url": "https://github.com/ollama/ollama/pull/3682.diff", "patch_url": "https://github.com/ollama/ollama/pull/3682.patch", "merged_at": "2024-05-07T22:20:49" }
- FROM /path/to/{safetensors,pytorch} - FROM /path/to/fp{16,32}.bin - FROM model:fp{16,32}
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3682/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8241
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8241/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8241/comments
https://api.github.com/repos/ollama/ollama/issues/8241/events
https://github.com/ollama/ollama/issues/8241
2,758,807,838
I_kwDOJ0Z1Ps6kcBEe
8,241
Option to show all models available from registry/library
{ "login": "t18n", "id": 14198542, "node_id": "MDQ6VXNlcjE0MTk4NTQy", "avatar_url": "https://avatars.githubusercontent.com/u/14198542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/t18n", "html_url": "https://github.com/t18n", "followers_url": "https://api.github.com/users/t18n/followers"...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-12-25T13:25:52
2024-12-26T00:26:10
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Currently, `ollama list` shows all the installed models. It would be very useful to be able to show all the available models from the registry/[model library](https://github.com/ollama/ollama?tab=readme-ov-file#model-library), which allow us to make model management app to install a model with GUI with one click.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8241/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4452
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4452/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4452/comments
https://api.github.com/repos/ollama/ollama/issues/4452/events
https://github.com/ollama/ollama/pull/4452
2,297,736,217
PR_kwDOJ0Z1Ps5vhfvd
4,452
follow naming convenstions
{ "login": "Tyrell04", "id": 43107913, "node_id": "MDQ6VXNlcjQzMTA3OTEz", "avatar_url": "https://avatars.githubusercontent.com/u/43107913?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tyrell04", "html_url": "https://github.com/Tyrell04", "followers_url": "https://api.github.com/users/Tyr...
[]
closed
false
null
[]
null
0
2024-05-15T12:09:38
2024-10-26T17:41:47
2024-10-26T17:41:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4452", "html_url": "https://github.com/ollama/ollama/pull/4452", "diff_url": "https://github.com/ollama/ollama/pull/4452.diff", "patch_url": "https://github.com/ollama/ollama/pull/4452.patch", "merged_at": null }
null
{ "login": "Tyrell04", "id": 43107913, "node_id": "MDQ6VXNlcjQzMTA3OTEz", "avatar_url": "https://avatars.githubusercontent.com/u/43107913?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tyrell04", "html_url": "https://github.com/Tyrell04", "followers_url": "https://api.github.com/users/Tyr...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4452/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7350
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7350/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7350/comments
https://api.github.com/repos/ollama/ollama/issues/7350/events
https://github.com/ollama/ollama/issues/7350
2,613,001,236
I_kwDOJ0Z1Ps6bvzwU
7,350
Ollama keeps reloading the same model repeatedly
{ "login": "cray1031", "id": 69585934, "node_id": "MDQ6VXNlcjY5NTg1OTM0", "avatar_url": "https://avatars.githubusercontent.com/u/69585934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cray1031", "html_url": "https://github.com/cray1031", "followers_url": "https://api.github.com/users/cra...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-10-25T03:47:33
2024-11-17T14:22:06
2024-11-17T14:22:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? `docker run -d --gpus=all -v /data/ollama:/root/.ollama -p 9112:11434 -e OLLAMA_ORIGINS="*" -e OLLAMA_NUM_PARALLEL=15 -e OLLAMA_KEEP_ALIVE=2h -e OLLAMA_DEBUG=1 --name ollama_v0314 ollama/ollama:latest` ``` eleasing cuda driver library time=2024-10-25T03:39:46.460Z level=DEBUG source=ser...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7350/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7771
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7771/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7771/comments
https://api.github.com/repos/ollama/ollama/issues/7771/events
https://github.com/ollama/ollama/issues/7771
2,677,691,308
I_kwDOJ0Z1Ps6fmlOs
7,771
CUDA error: unspecified launch failure: current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
{ "login": "daocoder2", "id": 19505806, "node_id": "MDQ6VXNlcjE5NTA1ODA2", "avatar_url": "https://avatars.githubusercontent.com/u/19505806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daocoder2", "html_url": "https://github.com/daocoder2", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-11-21T01:41:16
2024-11-21T16:50:25
2024-11-21T16:50:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` 2024/11/21 01:22:08 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7771/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8335
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8335/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8335/comments
https://api.github.com/repos/ollama/ollama/issues/8335/events
https://github.com/ollama/ollama/issues/8335
2,772,565,463
I_kwDOJ0Z1Ps6lQf3X
8,335
Make flash attention configurable via UI or enable by default
{ "login": "HDembinski", "id": 2631586, "node_id": "MDQ6VXNlcjI2MzE1ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/2631586?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HDembinski", "html_url": "https://github.com/HDembinski", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-07T11:10:17
2025-01-07T11:10:17
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I love Ollama, excellent work. It makes using LLMs really beginner friendly, but does impose any limits on power usage. I recently learned about flash attention and found out from reading the FAQ that Ollama supports this. As flash attention is important to support large contexts and can speed up models consider...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8335/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/472
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/472/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/472/comments
https://api.github.com/repos/ollama/ollama/issues/472/events
https://github.com/ollama/ollama/pull/472
1,882,888,686
PR_kwDOJ0Z1Ps5Zn3qn
472
Added missing options params to the embeddings docs
{ "login": "yackermann", "id": 1636116, "node_id": "MDQ6VXNlcjE2MzYxMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1636116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yackermann", "html_url": "https://github.com/yackermann", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2023-09-05T23:59:24
2023-09-06T00:19:01
2023-09-06T00:18:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/472", "html_url": "https://github.com/ollama/ollama/pull/472", "diff_url": "https://github.com/ollama/ollama/pull/472.diff", "patch_url": "https://github.com/ollama/ollama/pull/472.patch", "merged_at": "2023-09-06T00:18:49" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/472/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5975
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5975/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5975/comments
https://api.github.com/repos/ollama/ollama/issues/5975/events
https://github.com/ollama/ollama/issues/5975
2,431,736,450
I_kwDOJ0Z1Ps6Q8VqC
5,975
Deepseek2 with large context crashes with "Deepseek2 does not support K-shift"
{ "login": "balckwilliam", "id": 32457598, "node_id": "MDQ6VXNlcjMyNDU3NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/32457598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/balckwilliam", "html_url": "https://github.com/balckwilliam", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[ { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://...
null
12
2024-07-26T08:44:26
2024-12-17T16:34:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:15147: false && "Deepseek2 does not support K-shift" ### OS Linux, Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.0
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5975/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5975/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1250
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1250/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1250/comments
https://api.github.com/repos/ollama/ollama/issues/1250/events
https://github.com/ollama/ollama/pull/1250
2,007,218,435
PR_kwDOJ0Z1Ps5gLaE2
1,250
refactor layer creation
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-11-22T22:56:02
2023-12-05T22:32:54
2023-12-05T22:32:52
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1250", "html_url": "https://github.com/ollama/ollama/pull/1250", "diff_url": "https://github.com/ollama/ollama/pull/1250.diff", "patch_url": "https://github.com/ollama/ollama/pull/1250.patch", "merged_at": "2023-12-05T22:32:52" }
Refactor layer creation. The previous layer creation was not ideal because: 1. it required reading the input file multiple times: once to calculate the SHA-256 checksum, again to write it to disk, and potentially once more to decode the underlying GGUF; 2. it used io.ReadSeeker, which is prone to user error. If the file i...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1250/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/872
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/872/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/872/comments
https://api.github.com/repos/ollama/ollama/issues/872/events
https://github.com/ollama/ollama/pull/872
1,955,638,029
PR_kwDOJ0Z1Ps5dc8hb
872
fix readme for linux : port address already in use
{ "login": "Yadheedhya06", "id": 79125868, "node_id": "MDQ6VXNlcjc5MTI1ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/79125868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yadheedhya06", "html_url": "https://github.com/Yadheedhya06", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
1
2023-10-21T19:32:37
2023-10-26T17:47:43
2023-10-26T17:47:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/872", "html_url": "https://github.com/ollama/ollama/pull/872", "diff_url": "https://github.com/ollama/ollama/pull/872.diff", "patch_url": "https://github.com/ollama/ollama/pull/872.patch", "merged_at": null }
If a user is installing Ollama for the first time (a fresh install), the Ollama server is started automatically. So when you then try ``` ollama serve ``` it throws the error `127.0.0.1:11434: bind: address already in use`. Instead of running this command, the user can skip straight to running a model. This PR patches the correspondi...
{ "login": "Yadheedhya06", "id": 79125868, "node_id": "MDQ6VXNlcjc5MTI1ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/79125868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yadheedhya06", "html_url": "https://github.com/Yadheedhya06", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/872/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1732
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1732/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1732/comments
https://api.github.com/repos/ollama/ollama/issues/1732/events
https://github.com/ollama/ollama/pull/1732
2,057,903,932
PR_kwDOJ0Z1Ps5i22xd
1,732
Add list-remote command line option
{ "login": "kris-hansen", "id": 8484582, "node_id": "MDQ6VXNlcjg0ODQ1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/8484582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kris-hansen", "html_url": "https://github.com/kris-hansen", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
2
2023-12-28T02:00:00
2024-05-09T16:07:25
2024-05-09T16:07:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1732", "html_url": "https://github.com/ollama/ollama/pull/1732", "diff_url": "https://github.com/ollama/ollama/pull/1732.diff", "patch_url": "https://github.com/ollama/ollama/pull/1732.patch", "merged_at": null }
- Added a feature to be able to fetch the model library from ollama.ai/library - This makes it easier to determine which models are available to pull without leaving the command line world - using goquery to make the HTML parsing a bit more manageable, added error handling to improve the error reporting in case the h...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1732/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1732/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6671
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6671/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6671/comments
https://api.github.com/repos/ollama/ollama/issues/6671/events
https://github.com/ollama/ollama/issues/6671
2,509,895,725
I_kwDOJ0Z1Ps6Vmfgt
6,671
Reflection 70B NEED Tools
{ "login": "xiaoyu9982", "id": 179811153, "node_id": "U_kgDOCrezUQ", "avatar_url": "https://avatars.githubusercontent.com/u/179811153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaoyu9982", "html_url": "https://github.com/xiaoyu9982", "followers_url": "https://api.github.com/users/xia...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-09-06T08:57:36
2024-09-06T21:19:49
2024-09-06T21:19:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Reflection 70B NEED Tools
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6671/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/667
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/667/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/667/comments
https://api.github.com/repos/ollama/ollama/issues/667/events
https://github.com/ollama/ollama/pull/667
1,920,894,343
PR_kwDOJ0Z1Ps5bnrd2
667
Use build tags to generate accelerated binaries for CUDA and ROCm on …
{ "login": "65a", "id": 10104049, "node_id": "MDQ6VXNlcjEwMTA0MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/65a", "html_url": "https://github.com/65a", "followers_url": "https://api.github.com/users/65a/followers", ...
[]
closed
false
null
[]
null
20
2023-10-01T18:05:34
2023-10-17T00:45:22
2023-10-17T00:31:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/667", "html_url": "https://github.com/ollama/ollama/pull/667", "diff_url": "https://github.com/ollama/ollama/pull/667.diff", "patch_url": "https://github.com/ollama/ollama/pull/667.patch", "merged_at": null }
…Linux. The binary will detect and use the accelerated runtimes embedded in it. The build tags rocm or cuda must be specified to both go generate and go build. ROCm builds should have both ROCM_PATH set (and the ROCM SDK present) as well as CLBlast installed (for GGML) and CLBlast_DIR set in the environment to the CLBl...
{ "login": "65a", "id": 10104049, "node_id": "MDQ6VXNlcjEwMTA0MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10104049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/65a", "html_url": "https://github.com/65a", "followers_url": "https://api.github.com/users/65a/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/667/reactions", "total_count": 9, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/667/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/338
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/338/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/338/comments
https://api.github.com/repos/ollama/ollama/issues/338/events
https://github.com/ollama/ollama/issues/338
1,848,830,251
I_kwDOJ0Z1Ps5uMukr
338
More reliable model pull
{ "login": "bohdyone", "id": 13161793, "node_id": "MDQ6VXNlcjEzMTYxNzkz", "avatar_url": "https://avatars.githubusercontent.com/u/13161793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bohdyone", "html_url": "https://github.com/bohdyone", "followers_url": "https://api.github.com/users/boh...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2023-08-14T01:13:17
2023-08-22T01:06:31
2023-08-22T01:06:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi guys, I'm on macOS 13.4.1 and have been having some trouble downloading the larger models. I get occasional "unexpected EOF" issues, and sometimes when the model is fully downloaded it is detected as corrupted and must be downloaded again. Some of this seems to be due to system sleep interrupting the download...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/338/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/338/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2245
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2245/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2245/comments
https://api.github.com/repos/ollama/ollama/issues/2245/events
https://github.com/ollama/ollama/pull/2245
2,104,391,348
PR_kwDOJ0Z1Ps5lRF_U
2,245
Log prompt when running `ollama serve` with `OLLAMA_DEBUG=1`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-28T23:12:57
2024-01-28T23:22:35
2024-01-28T23:22:35
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2245", "html_url": "https://github.com/ollama/ollama/pull/2245", "diff_url": "https://github.com/ollama/ollama/pull/2245.diff", "patch_url": "https://github.com/ollama/ollama/pull/2245.patch", "merged_at": "2024-01-28T23:22:35" }
Fixes https://github.com/ollama/ollama/issues/1533 Fixes https://github.com/ollama/ollama/issues/1118
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2245/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7636
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7636/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7636/comments
https://api.github.com/repos/ollama/ollama/issues/7636/events
https://github.com/ollama/ollama/issues/7636
2,653,413,472
I_kwDOJ0Z1Ps6eJ-Bg
7,636
missing uninstall instructions or script
{ "login": "adbenitez", "id": 24558636, "node_id": "MDQ6VXNlcjI0NTU4NjM2", "avatar_url": "https://avatars.githubusercontent.com/u/24558636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adbenitez", "html_url": "https://github.com/adbenitez", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-11-12T21:45:41
2024-11-12T22:16:59
2024-11-12T22:16:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I filled my `/` (root) partition while running the installer; it would be nice if the installer allowed installing without `sudo` as a local user. I canceled the installation at the step `Downloading Linux ROCm amd64 bundle` due to the mentioned disk space issue, and now there are no uninstall instructions; I had to figure it out wher...
{ "login": "adbenitez", "id": 24558636, "node_id": "MDQ6VXNlcjI0NTU4NjM2", "avatar_url": "https://avatars.githubusercontent.com/u/24558636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adbenitez", "html_url": "https://github.com/adbenitez", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7636/timeline
null
not_planned
false
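For context on the missing-uninstall-instructions report above, here is a rough sketch of manually removing a scripted Linux install (assuming the default systemd service; exact paths can differ per system):
```
# Hedged sketch: manual uninstall of a scripted Linux install
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm $(which ollama)
sudo rm -r /usr/share/ollama        # downloaded models and runtime files
sudo userdel ollama && sudo groupdel ollama
```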
https://api.github.com/repos/ollama/ollama/issues/3517
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3517/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3517/comments
https://api.github.com/repos/ollama/ollama/issues/3517/events
https://github.com/ollama/ollama/issues/3517
2,229,467,913
I_kwDOJ0Z1Ps6E4vsJ
3,517
MACOS M2 Docker Compose Failing with GPU Selection Step
{ "login": "akramIOT", "id": 21118209, "node_id": "MDQ6VXNlcjIxMTE4MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/21118209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akramIOT", "html_url": "https://github.com/akramIOT", "followers_url": "https://api.github.com/users/akr...
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-04-07T00:01:29
2024-04-15T23:24:35
2024-04-15T23:24:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? MACOS M2 Docker Compose Failing with GPU Selection Step (LLAMA_CPP_ENV) akram_personal@AKRAMs-MacBook-Pro packet_raptor % docker-compose up Attaching to packet_raptor, ollama-1, ollama-webui-1 Gracefully stopping... (press Ctrl+C again to force) Error response from ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3517/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7362
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7362/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7362/comments
https://api.github.com/repos/ollama/ollama/issues/7362/events
https://github.com/ollama/ollama/issues/7362
2,614,931,844
I_kwDOJ0Z1Ps6b3LGE
7,362
Llama3.2-vision image processing not implemented for /generate
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
7
2024-10-25T19:21:47
2024-10-28T23:31:57
2024-10-28T20:51:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Reported by @oderwat: https://github.com/ollama/ollama/issues/6972#issuecomment-2437586368 ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.4.0
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7362/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1431
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1431/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1431/comments
https://api.github.com/repos/ollama/ollama/issues/1431/events
https://github.com/ollama/ollama/issues/1431
2,031,934,468
I_kwDOJ0Z1Ps55HNwE
1,431
[WSL 2] Exposing ollama via 0.0.0.0 on local network
{ "login": "bocklucas", "id": 22528729, "node_id": "MDQ6VXNlcjIyNTI4NzI5", "avatar_url": "https://avatars.githubusercontent.com/u/22528729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bocklucas", "html_url": "https://github.com/bocklucas", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
18
2023-12-08T05:04:09
2024-12-19T15:19:36
2023-12-12T15:56:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello! Just spent the last 3 or so hours struggling to figure this out and thought I'd leave my solution here to spare the next person who tries this out as well. Basically, I was trying to run `ollama serve` in WSL 2 (setup was insanely quick and easy) and then access it on my local network. However, when I trie...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1431/reactions", "total_count": 63, "+1": 39, "-1": 0, "laugh": 0, "hooray": 5, "confused": 5, "heart": 14, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1431/timeline
null
completed
false
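The WSL 2 record above describes exposing `ollama serve` on a local network; the author's actual write-up is truncated. One commonly used approach (an illustrative sketch, not necessarily the author's solution) is to bind Ollama to all interfaces inside WSL and forward the port from the Windows host, where `<WSL_IP>` is a placeholder for the WSL instance's address:
```
# Inside WSL: listen on all interfaces instead of 127.0.0.1
OLLAMA_HOST=0.0.0.0 ollama serve

# On the Windows host (admin shell): forward port 11434 to the WSL IP
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=11434 connectaddress=<WSL_IP> connectport=11434
```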
https://api.github.com/repos/ollama/ollama/issues/2758
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2758/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2758/comments
https://api.github.com/repos/ollama/ollama/issues/2758/events
https://github.com/ollama/ollama/issues/2758
2,153,312,111
I_kwDOJ0Z1Ps6AWO9v
2,758
Switching back and forth between models will gradually reduce the available GPU memory.
{ "login": "mofanke", "id": 54242816, "node_id": "MDQ6VXNlcjU0MjQyODE2", "avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mofanke", "html_url": "https://github.com/mofanke", "followers_url": "https://api.github.com/users/mofank...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-26T05:51:15
2024-02-27T19:29:54
2024-02-27T19:29:54
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Operating System: Windows GPU: NVIDIA with 6GB memory Description: While switching between Mistral 7B and Codellama 7B, I noticed a decrease in GPU available memory for layers offloaded to the GPU. Upon investigation, I captured the following debug log: ```plaintext time=2024-02-26T11:18:53.800+08:00 level=D...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2758/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4562
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4562/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4562/comments
https://api.github.com/repos/ollama/ollama/issues/4562/events
https://github.com/ollama/ollama/issues/4562
2,308,723,955
I_kwDOJ0Z1Ps6JnFTz
4,562
Where can I see the full list of embedding models supported by ollama?
{ "login": "heiheiheibj", "id": 6910198, "node_id": "MDQ6VXNlcjY5MTAxOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6910198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/heiheiheibj", "html_url": "https://github.com/heiheiheibj", "followers_url": "https://api.github.com/us...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
1
2024-05-21T16:56:47
2024-05-21T17:12:30
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
thx
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4562/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5704
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5704/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5704/comments
https://api.github.com/repos/ollama/ollama/issues/5704/events
https://github.com/ollama/ollama/pull/5704
2,408,997,782
PR_kwDOJ0Z1Ps51ZqL9
5,704
Add TensorSplit option to runners and API
{ "login": "NormalFishDev", "id": 174545571, "node_id": "U_kgDOCmdaow", "avatar_url": "https://avatars.githubusercontent.com/u/174545571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NormalFishDev", "html_url": "https://github.com/NormalFishDev", "followers_url": "https://api.github.com/...
[]
open
false
null
[]
null
0
2024-07-15T15:16:27
2024-07-15T15:16:27
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5704", "html_url": "https://github.com/ollama/ollama/pull/5704", "diff_url": "https://github.com/ollama/ollama/pull/5704.diff", "patch_url": "https://github.com/ollama/ollama/pull/5704.patch", "merged_at": null }
This pull request adds non-breaking functionality to the Ollama function `NewLlamaServer` and adds a `TensorSplit` field to the `Runner` struct in `api/types.go`. - Add an option to pass `tensor_split` in the "options" object of the generate API to manually define how tensors should be split with llama.cpp. - Add conditional t...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5704/timeline
null
null
true
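For illustration only: the PR above proposes a `tensor_split` entry in the request's "options" object. If that change were merged, a request might look roughly like the following (the model name and split proportions are made-up placeholders):
```
# Hypothetical request using the proposed (unmerged) tensor_split option
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "tensor_split": [0.5, 0.5] }
}'
```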
https://api.github.com/repos/ollama/ollama/issues/1019
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1019/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1019/comments
https://api.github.com/repos/ollama/ollama/issues/1019/events
https://github.com/ollama/ollama/issues/1019
1,979,682,840
I_kwDOJ0Z1Ps51_5AY
1,019
Error: llama runner exited
{ "login": "krenax", "id": 127540387, "node_id": "U_kgDOB5ocow", "avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krenax", "html_url": "https://github.com/krenax", "followers_url": "https://api.github.com/users/krenax/follower...
[]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
11
2023-11-06T17:32:51
2024-01-08T17:28:36
2023-11-23T14:53:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Using mistral and llama2 with ollama, I received the following error message: `Error: llama runner exited, you may not have enough available memory to run this model?`. The `README.md` states that at least 16GB of RAM is required to run 7B models, which is met by my workstation specifications.
{ "login": "krenax", "id": 127540387, "node_id": "U_kgDOB5ocow", "avatar_url": "https://avatars.githubusercontent.com/u/127540387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krenax", "html_url": "https://github.com/krenax", "followers_url": "https://api.github.com/users/krenax/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1019/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6518
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6518/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6518/comments
https://api.github.com/repos/ollama/ollama/issues/6518/events
https://github.com/ollama/ollama/issues/6518
2,487,164,819
I_kwDOJ0Z1Ps6UPx-T
6,518
Unable to run on tcp4/ipv4 on Lambda Labs instance
{ "login": "bayadyne", "id": 179503668, "node_id": "U_kgDOCrMCNA", "avatar_url": "https://avatars.githubusercontent.com/u/179503668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayadyne", "html_url": "https://github.com/bayadyne", "followers_url": "https://api.github.com/users/bayadyne/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
5
2024-08-26T15:40:29
2024-12-02T21:56:12
2024-12-02T21:56:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I used OLLAMA_HOST=0.0.0.0:8080 (and attempted with other ports) and when I checked, the service was only on tcp6 which I'm currently unable to use. ### OS Linux ### GPU Nvidia ### CPU Intel, AMD ### Ollama version 0.3.6
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6518/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/335
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/335/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/335/comments
https://api.github.com/repos/ollama/ollama/issues/335/events
https://github.com/ollama/ollama/issues/335
1,847,319,467
I_kwDOJ0Z1Ps5uG9ur
335
Model import/export
{ "login": "mikeroySoft", "id": 1791194, "node_id": "MDQ6VXNlcjE3OTExOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1791194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikeroySoft", "html_url": "https://github.com/mikeroySoft", "followers_url": "https://api.github.com/us...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
25
2023-08-11T19:26:46
2024-10-24T15:21:41
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When using large models like Llama2:70b, the download files are quite big. As a user with multiple local systems, having to `ollama pull` on every device means that much more bandwidth and time spent. It would be great if we could download the model once and then export/import it to other ollama clients in the office...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/335/reactions", "total_count": 18, "+1": 18, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/335/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/419
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/419/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/419/comments
https://api.github.com/repos/ollama/ollama/issues/419/events
https://github.com/ollama/ollama/issues/419
1,868,085,325
I_kwDOJ0Z1Ps5vWLhN
419
?allow for model files to be located in a different location than ~/.ollama?
{ "login": "vegabook", "id": 3780883, "node_id": "MDQ6VXNlcjM3ODA4ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegabook", "html_url": "https://github.com/vegabook", "followers_url": "https://api.github.com/users/vegab...
[]
closed
false
null
[]
null
3
2023-08-26T12:44:56
2023-08-30T16:52:55
2023-08-30T16:52:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
On my M2 mac, Ollama stores pulled models in `~/.ollama/models` and its security keys in `~/.ollama`. Is it possible to specify an alternative directory? My interest is in compartmentalizing ollama as much as possible into a single directory (happen to be using nix where [ollama is available in the unstable channel](ht...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/419/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/419/timeline
null
completed
false
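The record above asks whether the model directory can live somewhere other than ~/.ollama. As a minimal sketch (assuming a current Ollama release where the documented `OLLAMA_MODELS` environment variable is available), the storage path can be overridden like so:
```
# Hedged example: store pulled models in a custom directory
OLLAMA_MODELS=/some/other/path/models ollama serve
```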
https://api.github.com/repos/ollama/ollama/issues/6253
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6253/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6253/comments
https://api.github.com/repos/ollama/ollama/issues/6253/events
https://github.com/ollama/ollama/issues/6253
2,454,875,827
I_kwDOJ0Z1Ps6SUm6z
6,253
When systemMessage exceeds a certain length, ollama is unable to process it.
{ "login": "billrenhero", "id": 46013777, "node_id": "MDQ6VXNlcjQ2MDEzNzc3", "avatar_url": "https://avatars.githubusercontent.com/u/46013777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/billrenhero", "html_url": "https://github.com/billrenhero", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
5
2024-08-08T05:02:55
2024-09-02T23:14:38
2024-09-02T23:14:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? <img width="1250" alt="Screenshot 2024-08-08 11 28 37" src="https://github.com/user-attachments/assets/49e8c26e-ef09-4f4a-b06b-7e24801c2f69"> When the system message exceeds a certain length (likely 4096), Ollama returns "It seems like you're sharing some information, but it's not in a readable format. Could...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6253/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/438
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/438/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/438/comments
https://api.github.com/repos/ollama/ollama/issues/438/events
https://github.com/ollama/ollama/issues/438
1,870,669,000
I_kwDOJ0Z1Ps5vgCTI
438
Document Wolfi package?
{ "login": "dlorenc", "id": 1714486, "node_id": "MDQ6VXNlcjE3MTQ0ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1714486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlorenc", "html_url": "https://github.com/dlorenc", "followers_url": "https://api.github.com/users/dlorenc/...
[]
closed
false
null
[]
null
1
2023-08-29T00:00:25
2023-08-30T20:56:56
2023-08-30T20:56:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hey! I noticed it says there are no official downloads for Linux yet - would you be open to documenting the official Wolfi Linux package? You can see how it's packaged here: https://github.com/wolfi-dev/os/blob/main/ollama.yaml You can install it with: ``` docker run -it cgr.dev/chainguard/wolfi-base sh apk ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/438/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2118
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2118/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2118/comments
https://api.github.com/repos/ollama/ollama/issues/2118/events
https://github.com/ollama/ollama/issues/2118
2,092,399,782
I_kwDOJ0Z1Ps58t3ym
2,118
Unable to Download Models Due to Malformed Manifests
{ "login": "SpiralCut", "id": 21312296, "node_id": "MDQ6VXNlcjIxMzEyMjk2", "avatar_url": "https://avatars.githubusercontent.com/u/21312296?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SpiralCut", "html_url": "https://github.com/SpiralCut", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-01-21T03:10:26
2024-01-22T02:14:59
2024-01-22T02:14:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama 0.1.20 in WSL2/Ubuntu. In the past I was able to download new models fine but now when I try to download them I get something similar to the following error messages and am prevented from downloading: ``` pulling manifest Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/co...
{ "login": "SpiralCut", "id": 21312296, "node_id": "MDQ6VXNlcjIxMzEyMjk2", "avatar_url": "https://avatars.githubusercontent.com/u/21312296?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SpiralCut", "html_url": "https://github.com/SpiralCut", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2118/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8460
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8460/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8460/comments
https://api.github.com/repos/ollama/ollama/issues/8460/events
https://github.com/ollama/ollama/issues/8460
2,793,789,463
I_kwDOJ0Z1Ps6mhdgX
8,460
Llama-3_1-Nemotron-51B-Instruct
{ "login": "Tanote650", "id": 60698483, "node_id": "MDQ6VXNlcjYwNjk4NDgz", "avatar_url": "https://avatars.githubusercontent.com/u/60698483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanote650", "html_url": "https://github.com/Tanote650", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2025-01-16T21:21:32
2025-01-25T09:12:05
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add the Nvidia Model. https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8460/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8607
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8607/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8607/comments
https://api.github.com/repos/ollama/ollama/issues/8607/events
https://github.com/ollama/ollama/issues/8607
2,812,661,670
I_kwDOJ0Z1Ps6npc-m
8,607
Add an ability to inject env variables to modelfile system message.
{ "login": "BotVasya", "id": 10455417, "node_id": "MDQ6VXNlcjEwNDU1NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/10455417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BotVasya", "html_url": "https://github.com/BotVasya", "followers_url": "https://api.github.com/users/Bot...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-27T10:43:33
2025-01-27T10:43:33
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi all. I've realized that there is no way to make Ollama models aware of the current date and time when running on MS Windows. So I believe it would be useful if there were a way to use OS variables in the modelfile. Especially for date and time, it would be better if the model could obtain that data dynamically,...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8607/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4884
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4884/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4884/comments
https://api.github.com/repos/ollama/ollama/issues/4884/events
https://github.com/ollama/ollama/issues/4884
2,339,277,556
I_kwDOJ0Z1Ps6Lbor0
4,884
No proper response when IPEX-LLM setup with Ollama for intel cpu/gpu
{ "login": "filip-777", "id": 44314861, "node_id": "MDQ6VXNlcjQ0MzE0ODYx", "avatar_url": "https://avatars.githubusercontent.com/u/44314861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/filip-777", "html_url": "https://github.com/filip-777", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-06-06T23:04:51
2024-10-07T03:06:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After setup of IPEX-LLM to work with ollama I see that output is wrong. Example: ``` ❯ ./ollama run phi3 >>> hi <s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s> ``` Maybe I had setup something wrong... When I serve ollama I have such log...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4884/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3351
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3351/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3351/comments
https://api.github.com/repos/ollama/ollama/issues/3351/events
https://github.com/ollama/ollama/pull/3351
2,206,799,914
PR_kwDOJ0Z1Ps5qtsps
3,351
Add license in file header for vendored llama.cpp code
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-03-25T22:02:20
2024-03-26T20:23:23
2024-03-26T20:23:23
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3351", "html_url": "https://github.com/ollama/ollama/pull/3351", "diff_url": "https://github.com/ollama/ollama/pull/3351.diff", "patch_url": "https://github.com/ollama/ollama/pull/3351.patch", "merged_at": "2024-03-26T20:23:23" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3351/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7892
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7892/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7892/comments
https://api.github.com/repos/ollama/ollama/issues/7892/events
https://github.com/ollama/ollama/issues/7892
2,707,110,889
I_kwDOJ0Z1Ps6hWzvp
7,892
After the deployment of ollama, it can only be accessed through 127.0.0.1 and cannot be accessed through IP
{ "login": "2277509846", "id": 52586868, "node_id": "MDQ6VXNlcjUyNTg2ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/52586868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2277509846", "html_url": "https://github.com/2277509846", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
15
2024-11-30T10:06:33
2024-11-30T13:14:47
2024-11-30T11:45:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Version: 0.4.6 OS: Ubuntu Download and install curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.4.6 sh Edit service file sudo vim /etc/systemd/system/ollama.service [Unit] Description=Ollama Service After=network-online.target [Service] ExecStart=/usr/local/bin/ollama serve User=ollama Grou...
{ "login": "2277509846", "id": 52586868, "node_id": "MDQ6VXNlcjUyNTg2ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/52586868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2277509846", "html_url": "https://github.com/2277509846", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7892/timeline
null
completed
false
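The record above is about reaching Ollama from other machines rather than only 127.0.0.1. A minimal sketch of the usual systemd approach on Linux (assuming the standard `ollama.service` unit; it may simply mirror what the truncated body above already shows):
```
# Hedged sketch: make the service listen on all interfaces
sudo systemctl edit ollama.service
# add in the override file:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama
```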
https://api.github.com/repos/ollama/ollama/issues/906
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/906/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/906/comments
https://api.github.com/repos/ollama/ollama/issues/906/events
https://github.com/ollama/ollama/pull/906
1,962,163,935
PR_kwDOJ0Z1Ps5dyzrb
906
Documenting OpenAI compatibility (and other docs tweaks)
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
2
2023-10-25T20:18:41
2023-10-27T07:29:21
2023-10-27T07:10:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/906", "html_url": "https://github.com/ollama/ollama/pull/906", "diff_url": "https://github.com/ollama/ollama/pull/906.diff", "patch_url": "https://github.com/ollama/ollama/pull/906.patch", "merged_at": "2023-10-27T07:10:23" }
Modernization of https://github.com/jmorganca/ollama/pull/661 - ~Closes https://github.com/jmorganca/ollama/issues/538~ - Upstreams more knowledge from https://github.com/jmorganca/ollama/issues/546 - Simplifies `brew install` to one line
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/906/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2706
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2706/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2706/comments
https://api.github.com/repos/ollama/ollama/issues/2706/events
https://github.com/ollama/ollama/issues/2706
2,150,702,091
I_kwDOJ0Z1Ps6AMRwL
2,706
CUDA error: out of memory with llava:7b-v1.6 when providing an image
{ "login": "lucaboulard", "id": 25926274, "node_id": "MDQ6VXNlcjI1OTI2Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/25926274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucaboulard", "html_url": "https://github.com/lucaboulard", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
2
2024-02-23T09:29:53
2024-06-01T20:37:51
2024-06-01T20:37:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm using ollama 0.1.26 to run llava:7b-v1.6 on WSL on Windows (Ubuntu 22.04.3 LTS). It works just fine as long as I just use textual prompts, but as soon as I go multimodal and pass an image as well, ollama crashes with this message: ``` time=2024-02-23T09:49:45.496+01:00 level=INFO source=dyn_ext_server.go:1...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2706/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1194
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1194/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1194/comments
https://api.github.com/repos/ollama/ollama/issues/1194/events
https://github.com/ollama/ollama/issues/1194
2,000,634,109
I_kwDOJ0Z1Ps53P0D9
1,194
Add open assistant
{ "login": "mak448a", "id": 94062293, "node_id": "U_kgDOBZtG1Q", "avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mak448a", "html_url": "https://github.com/mak448a", "followers_url": "https://api.github.com/users/mak448a/follow...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
0
2023-11-19T00:20:34
2024-03-11T18:16:51
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Could you add Open Assistant? Thank you!
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1194/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3471
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3471/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3471/comments
https://api.github.com/repos/ollama/ollama/issues/3471/events
https://github.com/ollama/ollama/issues/3471
2,222,014,071
I_kwDOJ0Z1Ps6EcT53
3,471
Please add Qwen-audio
{ "login": "zimuoo", "id": 29696639, "node_id": "MDQ6VXNlcjI5Njk2NjM5", "avatar_url": "https://avatars.githubusercontent.com/u/29696639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zimuoo", "html_url": "https://github.com/zimuoo", "followers_url": "https://api.github.com/users/zimuoo/fo...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
5
2024-04-03T06:24:34
2024-09-02T03:06:57
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3471/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3471/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8159
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8159/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8159/comments
https://api.github.com/repos/ollama/ollama/issues/8159/events
https://github.com/ollama/ollama/issues/8159
2,748,255,686
I_kwDOJ0Z1Ps6jzw3G
8,159
phi4
{ "login": "sinxyz", "id": 32287704, "node_id": "MDQ6VXNlcjMyMjg3NzA0", "avatar_url": "https://avatars.githubusercontent.com/u/32287704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sinxyz", "html_url": "https://github.com/sinxyz", "followers_url": "https://api.github.com/users/sinxyz/fo...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
5
2024-12-18T16:25:36
2025-01-14T08:54:16
2024-12-19T19:53:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
please add phi4
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8159/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8159/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4302
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4302/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4302/comments
https://api.github.com/repos/ollama/ollama/issues/4302/events
https://github.com/ollama/ollama/pull/4302
2,288,543,597
PR_kwDOJ0Z1Ps5vCTQ1
4,302
only forward some env vars
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-05-09T22:13:01
2024-05-09T23:21:06
2024-05-09T23:21:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4302", "html_url": "https://github.com/ollama/ollama/pull/4302", "diff_url": "https://github.com/ollama/ollama/pull/4302.diff", "patch_url": "https://github.com/ollama/ollama/pull/4302.patch", "merged_at": "2024-05-09T23:21:05" }
Only forward select env vars, which prevents 1) logging them and 2) the subprocess inheriting irrelevant, possibly sensitive, vars.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4302/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3591
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3591/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3591/comments
https://api.github.com/repos/ollama/ollama/issues/3591/events
https://github.com/ollama/ollama/pull/3591
2,237,320,424
PR_kwDOJ0Z1Ps5sVqs_
3,591
examples: Update langchain-python-simple
{ "login": "erikos", "id": 3714785, "node_id": "MDQ6VXNlcjM3MTQ3ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/3714785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erikos", "html_url": "https://github.com/erikos", "followers_url": "https://api.github.com/users/erikos/foll...
[]
closed
false
null
[]
null
0
2024-04-11T09:39:46
2024-11-25T00:06:22
2024-11-25T00:06:22
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3591", "html_url": "https://github.com/ollama/ollama/pull/3591", "diff_url": "https://github.com/ollama/ollama/pull/3591.diff", "patch_url": "https://github.com/ollama/ollama/pull/3591.patch", "merged_at": "2024-11-25T00:06:22" }
* remove deprecated predict command, use invoke instead * improve input handling
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3591/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7032
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7032/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7032/comments
https://api.github.com/repos/ollama/ollama/issues/7032/events
https://github.com/ollama/ollama/issues/7032
2,554,811,791
I_kwDOJ0Z1Ps6YR1WP
7,032
Persistent context
{ "login": "tomstdenis", "id": 11875109, "node_id": "MDQ6VXNlcjExODc1MTA5", "avatar_url": "https://avatars.githubusercontent.com/u/11875109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomstdenis", "html_url": "https://github.com/tomstdenis", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-09-29T08:37:38
2024-10-01T23:03:35
2024-10-01T23:03:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When using an LLM for, say, a lesson, it'd be nice to prime the LLM with a persistent initial "basic instruction" that never falls out of the window, e.g. "You're a German language instructor, I'm an Anglophone, help me learn German." With most LLM drivers (ChatGPT/Ollama/etc.) these instructions will fall out of th...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7032/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8513
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8513/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8513/comments
https://api.github.com/repos/ollama/ollama/issues/8513/events
https://github.com/ollama/ollama/issues/8513
2,801,188,323
I_kwDOJ0Z1Ps6m9r3j
8,513
Support for Multiple Images in /chat Endpoint
{ "login": "pmedina-42", "id": 68591323, "node_id": "MDQ6VXNlcjY4NTkxMzIz", "avatar_url": "https://avatars.githubusercontent.com/u/68591323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pmedina-42", "html_url": "https://github.com/pmedina-42", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2025-01-21T09:16:37
2025-01-21T17:34:35
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Currently, the /chat endpoint includes the images field, but it only supports a single image. While this is functional, it introduces an additional layer of complexity when performing RAG with images embedded in base64. For instance, if the content retriever returns multiple embeddings with the highest scores referenc...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8513/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4885
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4885/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4885/comments
https://api.github.com/repos/ollama/ollama/issues/4885/events
https://github.com/ollama/ollama/issues/4885
2,339,327,186
I_kwDOJ0Z1Ps6Lb0zS
4,885
Support Dragonfly
{ "login": "kylemclaren", "id": 3727384, "node_id": "MDQ6VXNlcjM3MjczODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3727384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylemclaren", "html_url": "https://github.com/kylemclaren", "followers_url": "https://api.github.com/us...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-06-06T23:54:59
2024-06-06T23:54:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Together.ai recently announced the Dragonfly vision-language model based on Llama3: https://huggingface.co/togethercomputer/Llama-3-8B-Dragonfly-v1
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4885/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4885/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2880
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2880/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2880/comments
https://api.github.com/repos/ollama/ollama/issues/2880/events
https://github.com/ollama/ollama/pull/2880
2,164,838,723
PR_kwDOJ0Z1Ps5ofDyN
2,880
update go module path
{ "login": "icholy", "id": 943597, "node_id": "MDQ6VXNlcjk0MzU5Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/943597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/icholy", "html_url": "https://github.com/icholy", "followers_url": "https://api.github.com/users/icholy/follow...
[]
closed
false
null
[]
null
1
2024-03-02T14:41:33
2024-03-31T17:16:06
2024-03-31T17:16:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2880", "html_url": "https://github.com/ollama/ollama/pull/2880", "diff_url": "https://github.com/ollama/ollama/pull/2880.diff", "patch_url": "https://github.com/ollama/ollama/pull/2880.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2880/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6608
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6608/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6608/comments
https://api.github.com/repos/ollama/ollama/issues/6608/events
https://github.com/ollama/ollama/pull/6608
2,503,378,382
PR_kwDOJ0Z1Ps56SnAa
6,608
Updated Ollama4j link
{ "login": "amithkoujalgi", "id": 1876165, "node_id": "MDQ6VXNlcjE4NzYxNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1876165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amithkoujalgi", "html_url": "https://github.com/amithkoujalgi", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
1
2024-09-03T17:12:20
2024-09-03T20:08:50
2024-09-03T20:08:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6608", "html_url": "https://github.com/ollama/ollama/pull/6608", "diff_url": "https://github.com/ollama/ollama/pull/6608.diff", "patch_url": "https://github.com/ollama/ollama/pull/6608.patch", "merged_at": "2024-09-03T20:08:50" }
Updated Ollama4j link and added link to Ollama4j Web UI tool.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6608/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2563
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2563/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2563/comments
https://api.github.com/repos/ollama/ollama/issues/2563/events
https://github.com/ollama/ollama/pull/2563
2,140,231,004
PR_kwDOJ0Z1Ps5nLM2M
2,563
Update Web UI link to new project name
{ "login": "justinh-rahb", "id": 52832301, "node_id": "MDQ6VXNlcjUyODMyMzAx", "avatar_url": "https://avatars.githubusercontent.com/u/52832301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justinh-rahb", "html_url": "https://github.com/justinh-rahb", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
0
2024-02-17T16:07:29
2024-02-18T05:02:48
2024-02-18T04:05:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2563", "html_url": "https://github.com/ollama/ollama/pull/2563", "diff_url": "https://github.com/ollama/ollama/pull/2563.diff", "patch_url": "https://github.com/ollama/ollama/pull/2563.patch", "merged_at": "2024-02-18T04:05:20" }
Ollama WebUI is now known as Open WebUI: https://openwebui.com https://github.com/open-webui/open-webui
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2563/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4698
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4698/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4698/comments
https://api.github.com/repos/ollama/ollama/issues/4698/events
https://github.com/ollama/ollama/issues/4698
2,322,810,880
I_kwDOJ0Z1Ps6Kc0gA
4,698
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
{ "login": "uzumakinaruto19", "id": 99479748, "node_id": "U_kgDOBe3wxA", "avatar_url": "https://avatars.githubusercontent.com/u/99479748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uzumakinaruto19", "html_url": "https://github.com/uzumakinaruto19", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
4
2024-05-29T09:20:09
2024-11-10T13:00:51
2024-09-13T00:14:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"} I'm still getting this with the latest ollama docker image > Hi folks this should be fixed now - please let me know if that's not the case @jmorganca only with the llama/ollama:0.1.32 version it works, d...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4698/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4691
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4691/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4691/comments
https://api.github.com/repos/ollama/ollama/issues/4691/events
https://github.com/ollama/ollama/issues/4691
2,322,041,964
I_kwDOJ0Z1Ps6KZ4xs
4,691
linux installation
{ "login": "wi-wi", "id": 53225089, "node_id": "MDQ6VXNlcjUzMjI1MDg5", "avatar_url": "https://avatars.githubusercontent.com/u/53225089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wi-wi", "html_url": "https://github.com/wi-wi", "followers_url": "https://api.github.com/users/wi-wi/follow...
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-05-28T22:54:05
2024-08-09T23:23:00
2024-08-09T23:23:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` curl -fsSL https://ollama.com/install.sh | sh bash: ./shell.sh: No such file or directory curl: (23) Failed writing body (1349 != 1378) ``` ======== The curl command is from ollama's download page. I solved it by downloading /install.sh, making it executable, and running it. ### OS ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4691/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4691/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/265
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/265/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/265/comments
https://api.github.com/repos/ollama/ollama/issues/265/events
https://github.com/ollama/ollama/pull/265
1,834,068,042
PR_kwDOJ0Z1Ps5XDrCg
265
Update README.md
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-03T00:21:46
2023-08-03T02:38:33
2023-08-03T02:38:32
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/265", "html_url": "https://github.com/ollama/ollama/pull/265", "diff_url": "https://github.com/ollama/ollama/pull/265.diff", "patch_url": "https://github.com/ollama/ollama/pull/265.patch", "merged_at": "2023-08-03T02:38:32" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/265/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4045
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4045/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4045/comments
https://api.github.com/repos/ollama/ollama/issues/4045/events
https://github.com/ollama/ollama/pull/4045
2,270,995,055
PR_kwDOJ0Z1Ps5uHnTK
4,045
docs: add ollama-operator in example
{ "login": "panpan0000", "id": 14049268, "node_id": "MDQ6VXNlcjE0MDQ5MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/14049268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/panpan0000", "html_url": "https://github.com/panpan0000", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
3
2024-04-30T09:39:49
2024-11-21T09:36:57
2024-11-21T09:36:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4045", "html_url": "https://github.com/ollama/ollama/pull/4045", "diff_url": "https://github.com/ollama/ollama/pull/4045.diff", "patch_url": "https://github.com/ollama/ollama/pull/4045.patch", "merged_at": null }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4045/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4045/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7868
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7868/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7868/comments
https://api.github.com/repos/ollama/ollama/issues/7868/events
https://github.com/ollama/ollama/pull/7868
2,700,161,640
PR_kwDOJ0Z1Ps6DZXih
7,868
server: automatically open browser to connect ollama key
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
4
2024-11-27T23:21:43
2024-12-19T03:42:36
2024-12-19T01:41:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7868", "html_url": "https://github.com/ollama/ollama/pull/7868", "diff_url": "https://github.com/ollama/ollama/pull/7868.diff", "patch_url": "https://github.com/ollama/ollama/pull/7868.patch", "merged_at": null }
When an ollama key is not registered with any account on ollama.com this is not obvious. In the current CLI an error message that the user is not authorized is displayed. This change brings back previous behavior to show the user their key and where they should add it. It protects against adding unexpected keys by chec...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7868/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/152
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/152/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/152/comments
https://api.github.com/repos/ollama/ollama/issues/152/events
https://github.com/ollama/ollama/pull/152
1,814,879,032
PR_kwDOJ0Z1Ps5WDMBF
152
add ls alias
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-20T22:27:09
2023-07-20T22:28:28
2023-07-20T22:28:28
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/152", "html_url": "https://github.com/ollama/ollama/pull/152", "diff_url": "https://github.com/ollama/ollama/pull/152.diff", "patch_url": "https://github.com/ollama/ollama/pull/152.patch", "merged_at": "2023-07-20T22:28:28" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/152/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2878
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2878/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2878/comments
https://api.github.com/repos/ollama/ollama/issues/2878/events
https://github.com/ollama/ollama/pull/2878
2,164,826,251
PR_kwDOJ0Z1Ps5ofBUF
2,878
api: start adding documentation to package api
{ "login": "eliben", "id": 1130906, "node_id": "MDQ6VXNlcjExMzA5MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliben", "html_url": "https://github.com/eliben", "followers_url": "https://api.github.com/users/eliben/foll...
[]
closed
false
null
[]
null
1
2024-03-02T14:08:51
2024-04-10T17:31:55
2024-04-10T17:31:55
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2878", "html_url": "https://github.com/ollama/ollama/pull/2878", "diff_url": "https://github.com/ollama/ollama/pull/2878.diff", "patch_url": "https://github.com/ollama/ollama/pull/2878.patch", "merged_at": "2024-04-10T17:31:55" }
Updates #2840 This is an initial PR just to double check that I'm heading in the right direction. If it looks good, I can update it (or send separate ones) to fill up the whole documentation for the `api` package.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2878/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2878/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7600
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7600/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7600/comments
https://api.github.com/repos/ollama/ollama/issues/7600/events
https://github.com/ollama/ollama/issues/7600
2,647,312,984
I_kwDOJ0Z1Ps6dyspY
7,600
`/save` overwrites everything including system and template and previous messages
{ "login": "belfie13", "id": 39270867, "node_id": "MDQ6VXNlcjM5MjcwODY3", "avatar_url": "https://avatars.githubusercontent.com/u/39270867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/belfie13", "html_url": "https://github.com/belfie13", "followers_url": "https://api.github.com/users/bel...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-11-10T14:44:12
2024-11-10T22:21:48
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The `/save` command overwrites everything and only includes the current context; any previously saved data is lost, including the system and template needed to recreate it. ```shell ollama create model -f customtemplate.modelfile ollama run model >>> /set system you are an assistant >>> how are ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7600/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/306
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/306/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/306/comments
https://api.github.com/repos/ollama/ollama/issues/306/events
https://github.com/ollama/ollama/pull/306
1,840,328,464
PR_kwDOJ0Z1Ps5XYby9
306
automatically set num_keep if num_keep < 0
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-07T23:17:42
2023-08-08T16:25:36
2023-08-08T16:25:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/306", "html_url": "https://github.com/ollama/ollama/pull/306", "diff_url": "https://github.com/ollama/ollama/pull/306.diff", "patch_url": "https://github.com/ollama/ollama/pull/306.patch", "merged_at": "2023-08-08T16:25:35" }
num_keep defines how many tokens to keep in the context when truncating inputs. if left to its default value of -1, the server will calculate num_keep to be the left of the system instructions resolves #299
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/306/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5637
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5637/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5637/comments
https://api.github.com/repos/ollama/ollama/issues/5637/events
https://github.com/ollama/ollama/pull/5637
2,404,030,923
PR_kwDOJ0Z1Ps51JEVc
5,637
llm: avoid loading model if system memory is too small
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-11T20:17:35
2024-07-11T23:42:58
2024-07-11T23:42:57
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5637", "html_url": "https://github.com/ollama/ollama/pull/5637", "diff_url": "https://github.com/ollama/ollama/pull/5637.diff", "patch_url": "https://github.com/ollama/ollama/pull/5637.patch", "merged_at": "2024-07-11T23:42:57" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5637/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5132
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5132/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5132/comments
https://api.github.com/repos/ollama/ollama/issues/5132/events
https://github.com/ollama/ollama/issues/5132
2,361,175,255
I_kwDOJ0Z1Ps6MvKzX
5,132
CANNOT DOWNLOAD MODELS
{ "login": "Udacv", "id": 126667614, "node_id": "U_kgDOB4zLXg", "avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Udacv", "html_url": "https://github.com/Udacv", "followers_url": "https://api.github.com/users/Udacv/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-06-19T03:16:38
2024-06-19T06:07:21
2024-06-19T06:07:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Recently, when I use 'ollama run' to download models, I cannot download anything and get the following error. ![QQ截图20240619111403](https://github.com/ollama/ollama/assets/126667614/a4465567-74aa-4869-b12d-6b6d7d5701ea) I'm from China; I cannot download either with the local Internet or with a VPN...
{ "login": "Udacv", "id": 126667614, "node_id": "U_kgDOB4zLXg", "avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Udacv", "html_url": "https://github.com/Udacv", "followers_url": "https://api.github.com/users/Udacv/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5132/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3527
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3527/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3527/comments
https://api.github.com/repos/ollama/ollama/issues/3527/events
https://github.com/ollama/ollama/issues/3527
2,229,893,144
I_kwDOJ0Z1Ps6E6XgY
3,527
Ollama conflict with amdgpu driver on Debian
{ "login": "hpsaturn", "id": 423856, "node_id": "MDQ6VXNlcjQyMzg1Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/423856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hpsaturn", "html_url": "https://github.com/hpsaturn", "followers_url": "https://api.github.com/users/hpsatur...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-04-07T18:42:54
2024-05-21T18:25:32
2024-05-21T18:24:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I noticed that my Debian system fails after the first suspend; it can't suspend again because the amdgpu driver hits a kernel exception. Researching that, I found the Ollama service can't stop and also produces this behavior. My current workaround is to disable the systemd ollama service at boot, w...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3527/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3710
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3710/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3710/comments
https://api.github.com/repos/ollama/ollama/issues/3710/events
https://github.com/ollama/ollama/pull/3710
2,249,201,612
PR_kwDOJ0Z1Ps5s-NFn
3,710
update jetson tutorial
{ "login": "remy415", "id": 105550370, "node_id": "U_kgDOBkqSIg", "avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remy415", "html_url": "https://github.com/remy415", "followers_url": "https://api.github.com/users/remy415/foll...
[]
closed
false
null
[]
null
0
2024-04-17T20:18:07
2024-04-18T23:02:09
2024-04-18T23:02:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3710", "html_url": "https://github.com/ollama/ollama/pull/3710", "diff_url": "https://github.com/ollama/ollama/pull/3710.diff", "patch_url": "https://github.com/ollama/ollama/pull/3710.patch", "merged_at": "2024-04-18T23:02:09" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3710/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4390
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4390/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4390/comments
https://api.github.com/repos/ollama/ollama/issues/4390/events
https://github.com/ollama/ollama/issues/4390
2,291,969,108
I_kwDOJ0Z1Ps6InKxU
4,390
Feature Request: Customizable JSON Encoder/Decoder Configuration for REST API Endpoints or Others That Might Need It
{ "login": "H0llyW00dzZ", "id": 17626300, "node_id": "MDQ6VXNlcjE3NjI2MzAw", "avatar_url": "https://avatars.githubusercontent.com/u/17626300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/H0llyW00dzZ", "html_url": "https://github.com/H0llyW00dzZ", "followers_url": "https://api.github.com/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
1
2024-05-13T06:49:23
2024-11-06T17:37:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
## Description Since this repository is written in Go, it is possible to customize the `JSON encoding/decoding` configuration for `REST API endpoints`. However, it may require refactoring from scratch since there are many `hard-coded` instances of `JSON` using the `standard library`. ## Proposed Feature The...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4390/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2752
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2752/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2752/comments
https://api.github.com/repos/ollama/ollama/issues/2752/events
https://github.com/ollama/ollama/issues/2752
2,152,976,820
I_kwDOJ0Z1Ps6AU9G0
2,752
CUDA error: out of memory
{ "login": "kennethwork101", "id": 147571330, "node_id": "U_kgDOCMvCgg", "avatar_url": "https://avatars.githubusercontent.com/u/147571330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kennethwork101", "html_url": "https://github.com/kennethwork101", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
3
2024-02-25T22:47:20
2024-02-25T23:06:51
2024-02-25T23:01:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
CUDA error: out of memory. ollama version is 0.1.27, Windows 11, WSL2 Ubuntu 22.04, RTX 4070 TI. Running a set of tests with each test loading a different model using ollama. It takes some time; during testing we ran into the CUDA error: out of memory 3 times. Note each of the models being loaded is less than 10 GB...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2752/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4769
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4769/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4769/comments
https://api.github.com/repos/ollama/ollama/issues/4769/events
https://github.com/ollama/ollama/issues/4769
2,329,291,838
I_kwDOJ0Z1Ps6K1iw-
4,769
Infinitely generating irrelevant responses when running phi3-mini in Linux Terminal
{ "login": "MomenAbdelwadoud", "id": 66366532, "node_id": "MDQ6VXNlcjY2MzY2NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/66366532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MomenAbdelwadoud", "html_url": "https://github.com/MomenAbdelwadoud", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-06-01T19:00:46
2024-06-01T19:01:37
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I set up a Modelfile that loads Phi-3-mini-instruct, and whatever input I give it, it starts to generate an infinite response related to coding, as shown in the screenshot. Here is the content of the model file: ``` FROM ./Phi-3-mini-4k-instruct.Q4_0.gguf PARAMETER temperature 0.1 PARAMETER n...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4769/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6279
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6279/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6279/comments
https://api.github.com/repos/ollama/ollama/issues/6279/events
https://github.com/ollama/ollama/pull/6279
2,457,284,038
PR_kwDOJ0Z1Ps536jVv
6,279
feat: Introduce K/V Context Quantisation (vRAM improvements)
{ "login": "sammcj", "id": 862951, "node_id": "MDQ6VXNlcjg2Mjk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sammcj", "html_url": "https://github.com/sammcj", "followers_url": "https://api.github.com/users/sammcj/follow...
[]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
181
2024-08-09T07:22:10
2024-12-07T05:14:34
2024-12-03T23:57:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6279", "html_url": "https://github.com/ollama/ollama/pull/6279", "diff_url": "https://github.com/ollama/ollama/pull/6279.diff", "patch_url": "https://github.com/ollama/ollama/pull/6279.patch", "merged_at": "2024-12-03T23:57:19" }
This PR introduces optional K/V (context) cache quantisation. > TLDR; Set your k/v cache to Q8_0 and use 50% less vRAM for no noticeable quality impact. Ollama is arguably the only remaining popular model server to not support this. This PR brings Ollama's K/V memory usage in line with the likes of ExLlamav2, M...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6279/reactions", "total_count": 118, "+1": 30, "-1": 0, "laugh": 0, "hooray": 27, "confused": 0, "heart": 40, "rocket": 15, "eyes": 6 }
https://api.github.com/repos/ollama/ollama/issues/6279/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2942
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2942/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2942/comments
https://api.github.com/repos/ollama/ollama/issues/2942/events
https://github.com/ollama/ollama/pull/2942
2,170,198,208
PR_kwDOJ0Z1Ps5oxQ-k
2,942
[FEAT] Add `init` command
{ "login": "m4tt72", "id": 20604769, "node_id": "MDQ6VXNlcjIwNjA0NzY5", "avatar_url": "https://avatars.githubusercontent.com/u/20604769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m4tt72", "html_url": "https://github.com/m4tt72", "followers_url": "https://api.github.com/users/m4tt72/fo...
[]
closed
false
null
[]
null
2
2024-03-05T21:47:35
2024-05-10T11:18:06
2024-05-07T16:59:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2942", "html_url": "https://github.com/ollama/ollama/pull/2942", "diff_url": "https://github.com/ollama/ollama/pull/2942.diff", "patch_url": "https://github.com/ollama/ollama/pull/2942.patch", "merged_at": null }
Inspired by `docker init`, this command will create a new `Modelfile` in the current directory. If the file already exists, it will ask for confirmation before overwriting it.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2942/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4914
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4914/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4914/comments
https://api.github.com/repos/ollama/ollama/issues/4914/events
https://github.com/ollama/ollama/issues/4914
2,340,874,859
I_kwDOJ0Z1Ps6Lhupr
4,914
Request Ollama Web API to fetch all models data in the remote
{ "login": "edwinjhlee", "id": 4426319, "node_id": "MDQ6VXNlcjQ0MjYzMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4426319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edwinjhlee", "html_url": "https://github.com/edwinjhlee", "followers_url": "https://api.github.com/users...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-06-07T17:20:26
2024-06-09T17:51:32
2024-06-09T17:25:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Is it possible for Ollama to officially provide an API to fetch remote model data? --- I am developing an Ollama CLI. I currently use data scraped from the Ollama website. If this API were available, I could simply request the data through the official API. This is the demo: https://www.x-cmd.com/mod/ollama <img width="1494" al...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4914/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6560
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6560/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6560/comments
https://api.github.com/repos/ollama/ollama/issues/6560/events
https://github.com/ollama/ollama/issues/6560
2,495,102,637
I_kwDOJ0Z1Ps6UuD6t
6,560
Logging final input after prompting specified in model file as a debug flag
{ "login": "adela185", "id": 77362834, "node_id": "MDQ6VXNlcjc3MzYyODM0", "avatar_url": "https://avatars.githubusercontent.com/u/77362834?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adela185", "html_url": "https://github.com/adela185", "followers_url": "https://api.github.com/users/ade...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-08-29T17:09:11
2024-08-29T18:12:27
2024-08-29T18:12:27
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Just as the title says, it would be useful to log the final input text given to the model after it undergoes the prompting specified in the model file. This is for the Windows preview. The logging that is already included only prints the parameters and the API request, but not the final input.
{ "login": "adela185", "id": 77362834, "node_id": "MDQ6VXNlcjc3MzYyODM0", "avatar_url": "https://avatars.githubusercontent.com/u/77362834?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adela185", "html_url": "https://github.com/adela185", "followers_url": "https://api.github.com/users/ade...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6560/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/400
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/400/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/400/comments
https://api.github.com/repos/ollama/ollama/issues/400/events
https://github.com/ollama/ollama/pull/400
1,862,304,627
PR_kwDOJ0Z1Ps5Yi0A6
400
wip: decode gguf
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
2
2023-08-22T22:52:05
2023-09-14T20:33:49
2023-08-30T14:21:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/400", "html_url": "https://github.com/ollama/ollama/pull/400", "diff_url": "https://github.com/ollama/ollama/pull/400.diff", "patch_url": "https://github.com/ollama/ollama/pull/400.patch", "merged_at": null }
null
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/400/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/615
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/615/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/615/comments
https://api.github.com/repos/ollama/ollama/issues/615/events
https://github.com/ollama/ollama/pull/615
1,914,581,798
PR_kwDOJ0Z1Ps5bSd8Q
615
add `ollama run` flags: template, context, stop
{ "login": "sqs", "id": 1976, "node_id": "MDQ6VXNlcjE5NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sqs", "html_url": "https://github.com/sqs", "followers_url": "https://api.github.com/users/sqs/followers", "following_u...
[]
closed
false
null
[]
null
2
2023-09-27T02:35:30
2024-11-21T09:17:13
2024-11-21T09:17:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/615", "html_url": "https://github.com/ollama/ollama/pull/615", "diff_url": "https://github.com/ollama/ollama/pull/615.diff", "patch_url": "https://github.com/ollama/ollama/pull/615.patch", "merged_at": null }
These new `ollama run` flags make `ollama run` useful for debugging more advanced invocations of the Ollama generate API. For example, the following command generates completions with context tokens for `const primes=[1,2,3,5,7`, a stop sequence (`;`), and a custom template: ``` ollama run --verbose --context 3075,54...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/615/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/615/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4148
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4148/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4148/comments
https://api.github.com/repos/ollama/ollama/issues/4148/events
https://github.com/ollama/ollama/issues/4148
2,278,721,519
I_kwDOJ0Z1Ps6H0ofv
4,148
Importing a Mistral finetune into Ollama fails with `invalid file magic`
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-04T04:17:02
2024-05-04T05:27:41
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Importing a custom Mistral-7B finetune into Ollama from safetensors fails with `invalid file magic`. Converting the same safetensors to gguf with llama.cpp works on import. Steps to reproduce: - Finetune Mistral with MLX, fuse the lora to the model to get the resulting safetensors. - Create...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4148/timeline
null
null
false
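For background on the error in this report: a GGUF file begins with the four ASCII bytes `GGUF`, and `invalid file magic` indicates the importer did not find a header it recognises (safetensors files start differently). The standalone Go sketch below checks that magic; it is illustrative only and is not Ollama's actual validation code.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// isGGUF reports whether the file at path begins with the GGUF magic bytes.
// Illustrative only; this is not the check Ollama itself performs on import.
func isGGUF(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	magic := make([]byte, 4)
	if _, err := io.ReadFull(f, magic); err != nil {
		return false, err
	}
	return string(magic) == "GGUF", nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: ggufcheck <file>")
		os.Exit(1)
	}
	ok, err := isGGUF(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("GGUF magic present:", ok)
}
```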
https://api.github.com/repos/ollama/ollama/issues/2151
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2151/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2151/comments
https://api.github.com/repos/ollama/ollama/issues/2151/events
https://github.com/ollama/ollama/issues/2151
2,095,079,810
I_kwDOJ0Z1Ps584GGC
2,151
Layer splitting on macOS
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-01-23T01:39:49
2024-05-10T01:07:59
2024-05-10T01:07:59
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Llama.cpp now supports splitting layers across Metal and CPU; we should implement this once we fix #1952
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2151/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2151/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7458
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7458/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7458/comments
https://api.github.com/repos/ollama/ollama/issues/7458/events
https://github.com/ollama/ollama/issues/7458
2,627,875,374
I_kwDOJ0Z1Ps6cojIu
7,458
mistake: ollama run llama3_8b_chat_uncensored_q4_0
{ "login": "1015g", "id": 185006875, "node_id": "U_kgDOCwb7Gw", "avatar_url": "https://avatars.githubusercontent.com/u/185006875?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1015g", "html_url": "https://github.com/1015g", "followers_url": "https://api.github.com/users/1015g/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-10-31T21:35:52
2024-11-13T22:05:27
2024-11-13T22:05:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Why is this happening? I've successfully converted it to Ollama format using the ollama create llama3_8b_chat_uncensored_q4_0 -f Modelfile command and successfully loaded and ran it with the ollama run command, but why is it answering with such unreasonable nonsense? Is this GGUF model incompatibl...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7458/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2463
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2463/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2463/comments
https://api.github.com/repos/ollama/ollama/issues/2463/events
https://github.com/ollama/ollama/issues/2463
2,130,246,460
I_kwDOJ0Z1Ps5--Ps8
2,463
Resume does not seem to work
{ "login": "da-z", "id": 3681019, "node_id": "MDQ6VXNlcjM2ODEwMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/da-z", "html_url": "https://github.com/da-z", "followers_url": "https://api.github.com/users/da-z/followers", ...
[]
closed
false
null
[]
null
2
2024-02-12T14:23:08
2024-02-12T15:27:33
2024-02-12T15:27:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I had about 4.5GB out of 49GB already downloaded, but on a retry it restarted from scratch (same layer - edb02981b596...). `ollama pull nous-hermes2-mixtral:8x7b-dpo-q8_0`
{ "login": "da-z", "id": 3681019, "node_id": "MDQ6VXNlcjM2ODEwMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3681019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/da-z", "html_url": "https://github.com/da-z", "followers_url": "https://api.github.com/users/da-z/followers", ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2463/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/886
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/886/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/886/comments
https://api.github.com/repos/ollama/ollama/issues/886/events
https://github.com/ollama/ollama/pull/886
1,958,046,331
PR_kwDOJ0Z1Ps5dk8k-
886
during linux install add the ollama service user to the current resolved user's group
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-10-23T21:23:35
2023-10-24T17:52:05
2023-10-24T17:52:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/886", "html_url": "https://github.com/ollama/ollama/pull/886", "diff_url": "https://github.com/ollama/ollama/pull/886.diff", "patch_url": "https://github.com/ollama/ollama/pull/886.patch", "merged_at": null }
null
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/886/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1632
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1632/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1632/comments
https://api.github.com/repos/ollama/ollama/issues/1632/events
https://github.com/ollama/ollama/issues/1632
2,050,719,207
I_kwDOJ0Z1Ps56O33n
1,632
Only utilizing one thread - Unraid
{ "login": "evanrodgers", "id": 36175609, "node_id": "MDQ6VXNlcjM2MTc1NjA5", "avatar_url": "https://avatars.githubusercontent.com/u/36175609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/evanrodgers", "html_url": "https://github.com/evanrodgers", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
1
2023-12-20T14:56:36
2024-03-11T17:44:40
2024-03-11T17:44:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi all, Right now I am using Ollama 0.1.17 in an Unraid environment. The app works great, but it's only utilizing one thread. I did some light troubleshooting by adding the "num_thread" parameter to the modelfile as shown below, but it still only utilizes one thread. I've checked this by looking at the dashboard in ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1632/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/371
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/371/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/371/comments
https://api.github.com/repos/ollama/ollama/issues/371/events
https://github.com/ollama/ollama/issues/371
1,855,464,365
I_kwDOJ0Z1Ps5umCOt
371
Strip `https://` from model in `ollama run <model>`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg...
closed
false
null
[]
null
1
2023-08-17T17:59:48
2023-08-23T17:52:23
2023-08-23T17:52:23
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Users may prefix the model name with `https://` and we should accept it and strip it out. For example: ``` ollama run https://ollama.ai/m/wb ``` Should be equivalent to ``` ollama run ollama.ai/m/wb ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/371/timeline
null
completed
false
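A minimal sketch of the behaviour requested in this issue, assuming nothing about how it was ultimately implemented in Ollama: strip an optional URL scheme before resolving the model reference, so both forms shown in the report map to the same name.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeModelRef strips an optional URL scheme from a model reference,
// so "https://ollama.ai/m/wb" and "ollama.ai/m/wb" resolve to the same name.
func normalizeModelRef(ref string) string {
	ref = strings.TrimPrefix(ref, "https://")
	ref = strings.TrimPrefix(ref, "http://")
	return ref
}

func main() {
	for _, ref := range []string{"https://ollama.ai/m/wb", "ollama.ai/m/wb"} {
		fmt.Println(normalizeModelRef(ref))
	}
}
```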
https://api.github.com/repos/ollama/ollama/issues/3878
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3878/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3878/comments
https://api.github.com/repos/ollama/ollama/issues/3878/events
https://github.com/ollama/ollama/issues/3878
2,261,502,860
I_kwDOJ0Z1Ps6Gy8uM
3,878
Simple guide for using/uploading custom models from Windows onto Ollama.
{ "login": "Avroboros", "id": 146421595, "node_id": "U_kgDOCLo3Ww", "avatar_url": "https://avatars.githubusercontent.com/u/146421595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Avroboros", "html_url": "https://github.com/Avroboros", "followers_url": "https://api.github.com/users/Avrobo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-04-24T14:55:15
2024-04-24T14:55:15
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ollama is fantastic; however, until now I've been dependent on models that are already on the website. There are many models from Hugging Face that I want to use with Ollama (since Ollama is highly optimized and larger models run better on my computer using it). However, there are literally zero guides online about how t...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3878/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3878/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7273
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7273/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7273/comments
https://api.github.com/repos/ollama/ollama/issues/7273/events
https://github.com/ollama/ollama/pull/7273
2,599,542,799
PR_kwDOJ0Z1Ps5_MIoP
7,273
server: allow vscode-webview origins
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-10-19T19:53:36
2024-10-19T21:06:42
2024-10-19T21:06:41
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7273", "html_url": "https://github.com/ollama/ollama/pull/7273", "diff_url": "https://github.com/ollama/ollama/pull/7273.diff", "patch_url": "https://github.com/ollama/ollama/pull/7273.patch", "merged_at": "2024-10-19T21:06:41" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7273/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7273/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6997
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6997/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6997/comments
https://api.github.com/repos/ollama/ollama/issues/6997/events
https://github.com/ollama/ollama/issues/6997
2,552,121,384
I_kwDOJ0Z1Ps6YHkgo
6,997
CUDA error: device kernel image is invalid - CC 7.5
{ "login": "nikita228gym", "id": 66132104, "node_id": "MDQ6VXNlcjY2MTMyMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/66132104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikita228gym", "html_url": "https://github.com/nikita228gym", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-09-27T06:36:10
2024-11-05T23:25:47
2024-11-05T23:25:47
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello, I would like to apologize for my poor English (I am using a translator from another language). Could you please help me? I had a problem with Ollama. As always, I tried to run it with the command "Ollama ru llama3.1:8b," but then an error occurred: "Error: llama runner process has term...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6997/timeline
null
completed
false