Schema (33 fields; string/list ranges are observed min–max):

url: string (length 51–54)
repository_url: string (1 distinct value)
labels_url: string (length 65–68)
comments_url: string (length 60–63)
events_url: string (length 58–61)
html_url: string (length 39–44)
id: int64 (1.78B–2.82B)
node_id: string (length 18–19)
number: int64 (1–8.69k)
title: string (length 1–382)
user: dict
labels: list (length 0–5)
state: string (2 distinct values)
locked: bool (1 class)
assignee: dict
assignees: list (length 0–2)
milestone: null
comments: int64 (0–323)
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string (4 distinct values)
sub_issues_summary: dict
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: string (length 2–118k)
closed_by: dict
reactions: dict
timeline_url: string (length 60–63)
performed_via_github_app: null
state_reason: string (4 distinct values)
is_pull_request: bool (2 classes)
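The schema above can be modeled as a plain record type. Below is a minimal sketch in Python: a dataclass covering a subset of the fields, populated from the first row of the dump. The `Issue` class itself is illustrative and not part of any published loader; dict-valued columns are omitted for brevity.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Issue:
    # Subset of the 33 schema columns; types follow the schema's dtypes.
    url: str
    number: int
    title: str
    state: str                 # one of the 2 distinct values: "open" / "closed"
    comments: int
    created_at: datetime       # timestamp[s]
    is_pull_request: bool
    state_reason: Optional[str] = None  # null while an issue is open

# First row of the dump, re-typed:
row = Issue(
    url="https://api.github.com/repos/ollama/ollama/issues/5297",
    number=5297,
    title="How to get same length of response from CLI and API?",
    state="closed",
    comments=10,
    created_at=datetime.fromisoformat("2024-06-26T12:17:04"),
    is_pull_request=False,
    state_reason="completed",
)
```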
[Record 1]
url: https://api.github.com/repos/ollama/ollama/issues/5297
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5297/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5297/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5297/events
html_url: https://github.com/ollama/ollama/issues/5297
id: 2,375,176,567
node_id: I_kwDOJ0Z1Ps6NklF3
number: 5,297
title: How to get same length of response from CLI and API?
user: { "login": "dsbyprateekg", "id": 30830541, "node_id": "MDQ6VXNlcjMwODMwNTQx", "avatar_url": "https://avatars.githubusercontent.com/u/30830541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dsbyprateekg", "html_url": "https://github.com/dsbyprateekg", "followers_url": "https://api.github.c...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 10
created_at: 2024-06-26T12:17:04
updated_at: 2024-06-28T04:02:27
closed_at: 2024-06-28T04:02:26
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Hi, I checked the prompt 'why is the sky blue?' with CLI and with the API through postman. The response generated in CLI is longer than the response generated with the API- ![api_response_gemma2b](https://github.com/ollama/ollama/assets/30830541/a142983d-9c57-466f-a920-e3b4966870a1) ![...
closed_by: { "login": "dsbyprateekg", "id": 30830541, "node_id": "MDQ6VXNlcjMwODMwNTQx", "avatar_url": "https://avatars.githubusercontent.com/u/30830541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dsbyprateekg", "html_url": "https://github.com/dsbyprateekg", "followers_url": "https://api.github.c...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5297/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 2]
url: https://api.github.com/repos/ollama/ollama/issues/5966
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5966/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5966/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5966/events
html_url: https://github.com/ollama/ollama/issues/5966
id: 2,431,299,032
node_id: I_kwDOJ0Z1Ps6Q6q3Y
number: 5,966
title: Add "Mistral large v2" , thanks
user: { "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
labels: [ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-26T02:39:12
updated_at: 2024-07-26T10:53:55
closed_at: 2024-07-26T10:53:55
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Mistral has made another major open-source release; July is a good month for open source. Mistral Large v2 supports Chinese, and its standout feature is strong optimization for coding, agent capabilities, and reasoning; the 110B model can hold its own against llama 3.1 405B! HF model page: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407 Demo: https://chat.mistral.ai/chat/e56844bf-f6c1-46be-8f17-9072766fec10
closed_by: { "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5966/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 3]
url: https://api.github.com/repos/ollama/ollama/issues/327
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/327/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/327/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/327/events
html_url: https://github.com/ollama/ollama/issues/327
id: 1,846,157,406
node_id: I_kwDOJ0Z1Ps5uCiBe
number: 327
title: Embedding model support
user: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5789807732, "node_id": ...
state: closed
locked: false
assignee: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
assignees: [ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
milestone: null
comments: 18
created_at: 2023-08-11T03:53:45
updated_at: 2024-02-21T02:37:30
closed_at: 2024-02-21T02:37:30
author_association: MEMBER
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: Add embedding models to use primarily with `/api/embeddings` * `instructor-xl` * `bge-large` * `all-MiniLM-L6-v2` See the full [leaderboard](https://huggingface.co/spaces/mteb/leaderboard)
closed_by: { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/327/reactions", "total_count": 40, "+1": 40, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/327/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 4]
url: https://api.github.com/repos/ollama/ollama/issues/5550
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5550/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5550/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5550/events
html_url: https://github.com/ollama/ollama/issues/5550
id: 2,396,475,077
node_id: I_kwDOJ0Z1Ps6O107F
number: 5,550
title: Support For TPU's
user: { "login": "Moonlight1220", "id": 172665223, "node_id": "U_kgDOCkqphw", "avatar_url": "https://avatars.githubusercontent.com/u/172665223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moonlight1220", "html_url": "https://github.com/Moonlight1220", "followers_url": "https://api.github.com/...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-07-08T20:10:38
updated_at: 2024-07-09T22:01:49
closed_at: 2024-07-08T22:24:53
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: **Hello Ollama Community,** Single board computers such as the Raspberry Pi have limitless possibilities when it comes to expansion, this can be very useful for AI. With products such as the Raspberry Pi AI Kit from seed studio and the Coral AI family of TPU's from Google this can accelerate LLM's and may prove usef...
closed_by: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5550/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5550/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 5]
url: https://api.github.com/repos/ollama/ollama/issues/5773
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5773/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5773/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5773/events
html_url: https://github.com/ollama/ollama/issues/5773
id: 2,416,790,360
node_id: I_kwDOJ0Z1Ps6QDUtY
number: 5,773
title: Favor idle GPUs that fit over largest free memory GPUs when scheduling
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: [ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
state: open
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 0
created_at: 2024-07-18T15:56:14
updated_at: 2024-07-18T15:56:15
closed_at: null
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: The scheduler today tries to find a single GPU to run a model based on the [largest amount of free VRAM](https://github.com/ollama/ollama/blob/main/server/sched.go#L690-L693), but on multi-GPU setups where one GPU is significantly larger than others, this can lead to smaller models clumping on the largest GPU. The alg...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5773/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
[Record 6]
url: https://api.github.com/repos/ollama/ollama/issues/3300
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3300/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3300/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3300/events
html_url: https://github.com/ollama/ollama/issues/3300
id: 2,203,219,291
node_id: I_kwDOJ0Z1Ps6DUnVb
number: 3,300
title: docker container only listens on ipv6 by default
user: { "login": "nopoz", "id": 460545, "node_id": "MDQ6VXNlcjQ2MDU0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/460545?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nopoz", "html_url": "https://github.com/nopoz", "followers_url": "https://api.github.com/users/nopoz/followers"...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 6
created_at: 2024-03-22T19:59:49
updated_at: 2024-09-19T18:07:09
closed_at: null
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? The docker container only listens on ipv6 by default. This causes connection failures for other containers in the same stack that are trying to communicate with ollama via ipv4. ### What did you expect to see? For the container to listen on both ipv4 and ipv6. ### Steps to reproduce...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3300/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
[Record 7]
url: https://api.github.com/repos/ollama/ollama/issues/5303
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5303/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5303/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5303/events
html_url: https://github.com/ollama/ollama/issues/5303
id: 2,375,490,048
node_id: I_kwDOJ0Z1Ps6NlxoA
number: 5,303
title: Ollama keeps to randomly re-evaluate whole prompt, making chats impossible
user: { "login": "drazdra", "id": 133811709, "node_id": "U_kgDOB_nN_Q", "avatar_url": "https://avatars.githubusercontent.com/u/133811709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drazdra", "html_url": "https://github.com/drazdra", "followers_url": "https://api.github.com/users/drazdra/foll...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng...
state: open
locked: false
assignee: { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
assignees: [ { "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://...
milestone: null
comments: 18
created_at: 2024-06-26T14:14:53
updated_at: 2024-11-06T01:01:34
closed_at: null
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Ollama randomly starts whole prompt re-evaluation ignoring the cache. Normally next message on my system starts in 1-2 seconds, but when it happens i've to wait 7-20 minutes. Another proof is that in last message stats it shows the whole size of the prompt in prompt eval, instead of just last ...
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5303/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5303/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: false
[Record 8]
url: https://api.github.com/repos/ollama/ollama/issues/4437
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/4437/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/4437/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/4437/events
html_url: https://github.com/ollama/ollama/issues/4437
id: 2,296,281,072
node_id: I_kwDOJ0Z1Ps6I3nfw
number: 4,437
title: Ollama vs Llama-cpp-python : Slow response time as compared to llama-cpp-python
user: { "login": "utility-aagrawal", "id": 140737044, "node_id": "U_kgDOCGN6FA", "avatar_url": "https://avatars.githubusercontent.com/u/140737044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utility-aagrawal", "html_url": "https://github.com/utility-aagrawal", "followers_url": "https://api.gi...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 6
created_at: 2024-05-14T19:43:03
updated_at: 2024-07-03T23:16:48
closed_at: 2024-07-03T23:16:48
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Hi, I built a RAG Q&A pipeline using LlamaIndex and Llama-cpp-python in the past. I want to switch from llama-cpp to ollama because ollama is more stable and easier to install. When I made the switch, I noticed a significant increase in response time. Would you know what might cause this slow...
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/4437/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/4437/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 9]
url: https://api.github.com/repos/ollama/ollama/issues/5504
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5504/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5504/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5504/events
html_url: https://github.com/ollama/ollama/issues/5504
id: 2,393,076,906
node_id: I_kwDOJ0Z1Ps6Oo3Sq
number: 5,504
title: 0xc0000409 CUDA error | was working fine before - OOM crash
user: { "login": "gaduffl", "id": 100528925, "node_id": "U_kgDOBf3zHQ", "avatar_url": "https://avatars.githubusercontent.com/u/100528925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaduffl", "html_url": "https://github.com/gaduffl", "followers_url": "https://api.github.com/users/gaduffl/foll...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
state: closed
locked: false
assignee: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
assignees: [ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
milestone: null
comments: 1
created_at: 2024-07-05T19:54:29
updated_at: 2024-07-10T19:47:32
closed_at: 2024-07-10T19:47:32
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Ollama was working fine with all small models I tested so far (4GB VRAM). After upgrading to 0.1.48, I get a CUDA error with all models, e.g. Llama3 8B: _Error: llama runner process has terminated: exit status 0xc0000409 CUDA error"_ This model was running perfectly fine before. [server....
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5504/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 10]
url: https://api.github.com/repos/ollama/ollama/issues/8197
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/8197/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/8197/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/8197/events
html_url: https://github.com/ollama/ollama/pull/8197
id: 2,753,836,424
node_id: PR_kwDOJ0Z1Ps6F-O2S
number: 8,197
title: fix: only add to history if different
user: { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
labels: []
state: open
locked: false
assignee: { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
assignees: [ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
milestone: null
comments: 0
created_at: 2024-12-21T08:09:11
updated_at: 2025-01-10T21:50:13
closed_at: null
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/8197", "html_url": "https://github.com/ollama/ollama/pull/8197", "diff_url": "https://github.com/ollama/ollama/pull/8197.diff", "patch_url": "https://github.com/ollama/ollama/pull/8197.patch", "merged_at": null }
body: if the last item in history is the same as the one being added, skip it. this reduces the number of history entries. the behaviour is similar to how most shells maintain history
closed_by: null
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/8197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8197/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
[Record 11]
url: https://api.github.com/repos/ollama/ollama/issues/6687
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/6687/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/6687/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/6687/events
html_url: https://github.com/ollama/ollama/pull/6687
id: 2,511,664,082
node_id: PR_kwDOJ0Z1Ps56uzL6
number: 6,687
title: Align OpenAI Chat option processing with Completion option processing
user: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-09-07T14:11:03
updated_at: 2024-09-07T14:14:05
closed_at: 2024-09-07T14:13:35
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/6687", "html_url": "https://github.com/ollama/ollama/pull/6687", "diff_url": "https://github.com/ollama/ollama/pull/6687.diff", "patch_url": "https://github.com/ollama/ollama/pull/6687.patch", "merged_at": null }
body: https://github.com/ollama/ollama/pull/6514 removed the scaling of option values for OpenAI Completion requests. Do the same for Chat requests.
closed_by: { "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/6687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6687/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
[Record 12]
url: https://api.github.com/repos/ollama/ollama/issues/5962
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5962/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5962/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5962/events
html_url: https://github.com/ollama/ollama/pull/5962
id: 2,430,931,935
node_id: PR_kwDOJ0Z1Ps52gx8k
number: 5,962
title: server: reuse original download URL for images
user: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 2024-07-25T20:32:06
updated_at: 2024-07-25T22:58:32
closed_at: 2024-07-25T22:58:30
author_association: CONTRIBUTOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/5962", "html_url": "https://github.com/ollama/ollama/pull/5962", "diff_url": "https://github.com/ollama/ollama/pull/5962.diff", "patch_url": "https://github.com/ollama/ollama/pull/5962.patch", "merged_at": "2024-07-25T22:58:30" }
body: This changes the registry client to reuse the original download URL it gets on the first redirect response for all subsequent requests, preventing thundering herd issues when hot new LLMs are released.
closed_by: { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5962/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
[Record 13]
url: https://api.github.com/repos/ollama/ollama/issues/5383
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/5383/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/5383/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/5383/events
html_url: https://github.com/ollama/ollama/issues/5383
id: 2,381,793,449
node_id: I_kwDOJ0Z1Ps6N90ip
number: 5,383
title: Referring offline downloaded models in code
user: { "login": "RaoPisay", "id": 8242864, "node_id": "MDQ6VXNlcjgyNDI4NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8242864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RaoPisay", "html_url": "https://github.com/RaoPisay", "followers_url": "https://api.github.com/users/RaoPi...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 2024-06-29T14:56:12
updated_at: 2024-07-01T15:43:06
closed_at: 2024-07-01T15:43:06
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? Need help: I trying to refer model downloaded from ollama. I know the path where it is downloaded `~/.ollama/models/*` In the python code given Python code -- `tokenizer = AutoTokenizer.from_pretrained(model)` -- Python code Here I want to mentioned the `model` variable with the path for ...
closed_by: { "login": "RaoPisay", "id": 8242864, "node_id": "MDQ6VXNlcjgyNDI4NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8242864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RaoPisay", "html_url": "https://api.github.com/users/RaoPi...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/5383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/5383/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 14]
url: https://api.github.com/repos/ollama/ollama/issues/7807
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/7807/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/7807/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/7807/events
html_url: https://github.com/ollama/ollama/issues/7807
id: 2,685,232,210
node_id: I_kwDOJ0Z1Ps6gDWRS
number: 7,807
title: newer version ollama chat more slower
user: { "login": "krmao", "id": 7344437, "node_id": "MDQ6VXNlcjczNDQ0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7344437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krmao", "html_url": "https://github.com/krmao", "followers_url": "https://api.github.com/users/krmao/follower...
labels: [ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 7
created_at: 2024-11-23T03:39:20
updated_at: 2024-11-25T04:33:44
closed_at: 2024-11-25T04:33:43
author_association: NONE
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: null
pull_request: null
body: ### What is the issue? with the same code on the same machine Apple M2 Pro macos 15.1.1 (24B91) ```python import time import ollama start_time = time.perf_counter() #len(final_chat_messages)= 6107 ai_response = ollama.chat(model=model, messages=final_chat_messages, tools=TOOLS) print(f'time after ollama...
closed_by: { "login": "krmao", "id": 7344437, "node_id": "MDQ6VXNlcjczNDQ0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7344437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krmao", "html_url": "https://github.com/krmao", "followers_url": "https://api.github.com/users/krmao/follower...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/7807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7807/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
[Record 15]
url: https://api.github.com/repos/ollama/ollama/issues/3377
repository_url: https://api.github.com/repos/ollama/ollama
labels_url: https://api.github.com/repos/ollama/ollama/issues/3377/labels{/name}
comments_url: https://api.github.com/repos/ollama/ollama/issues/3377/comments
events_url: https://api.github.com/repos/ollama/ollama/issues/3377/events
html_url: https://github.com/ollama/ollama/pull/3377
id: 2,211,943,739
node_id: PR_kwDOJ0Z1Ps5q_HTD
number: 3,377
title: Bump ROCm to 6.0.2 patch release
user: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 2024-03-27T21:32:27
updated_at: 2024-03-28T23:07:57
closed_at: 2024-03-28T23:07:54
author_association: COLLABORATOR
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/ollama/ollama/pulls/3377", "html_url": "https://github.com/ollama/ollama/pull/3377", "diff_url": "https://github.com/ollama/ollama/pull/3377.diff", "patch_url": "https://github.com/ollama/ollama/pull/3377.patch", "merged_at": "2024-03-28T23:07:54" }
body: Fixes #2455
closed_by: { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
reactions: { "url": "https://api.github.com/repos/ollama/ollama/issues/3377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3377/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
https://api.github.com/repos/ollama/ollama/issues/1229
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1229/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1229/comments
https://api.github.com/repos/ollama/ollama/issues/1229/events
https://github.com/ollama/ollama/pull/1229
2,005,053,670
PR_kwDOJ0Z1Ps5gEAFP
1,229
revert checksum calculation to calculate-as-you-go
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
1
2023-11-21T20:12:48
2023-11-30T18:54:39
2023-11-30T18:54:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1229", "html_url": "https://github.com/ollama/ollama/pull/1229", "diff_url": "https://github.com/ollama/ollama/pull/1229.diff", "patch_url": "https://github.com/ollama/ollama/pull/1229.patch", "merged_at": "2023-11-30T18:54:38" }
calculating the checksum as it's being transferred is faster overall since the file doesn't need to be reread
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1229/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6477
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6477/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6477/comments
https://api.github.com/repos/ollama/ollama/issues/6477/events
https://github.com/ollama/ollama/issues/6477
2,483,382,575
I_kwDOJ0Z1Ps6UBWkv
6,477
Llama3.1 template doesn't work well with multi function calling as well as Environment: ipython mode
{ "login": "martinkozle", "id": 48385621, "node_id": "MDQ6VXNlcjQ4Mzg1NjIx", "avatar_url": "https://avatars.githubusercontent.com/u/48385621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/martinkozle", "html_url": "https://github.com/martinkozle", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-08-23T15:26:07
2024-08-26T11:40:13
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## Tool descriptions The current template checks if the final message is of Role "user" to decide whether to add the tool descriptions to it: ```go {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 }} {{- if eq .Role "user" }}<|start_header_id|>user<|end_h...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6477/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6477/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2472
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2472/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2472/comments
https://api.github.com/repos/ollama/ollama/issues/2472/events
https://github.com/ollama/ollama/issues/2472
2,131,828,829
I_kwDOJ0Z1Ps5_ESBd
2,472
Ollama floods /tmp with unnecessary libraries
{ "login": "knoopx", "id": 100993, "node_id": "MDQ6VXNlcjEwMDk5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/100993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/knoopx", "html_url": "https://github.com/knoopx", "followers_url": "https://api.github.com/users/knoopx/follow...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-02-13T09:05:57
2024-09-26T18:14:00
2024-03-20T15:28:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is what my `/tmp` dir looks after a few hours. I have no idea why ollama does this and why no cleanup is in place. ollama version is 0.1.24. haven't noticed this before this release. ![image](https://github.com/ollama/ollama/assets/100993/e48031ef-fcc3-4617-a005-0ff7f5b7d4d6) ![image](https://github.com/ollama...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2472/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3129
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3129/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3129/comments
https://api.github.com/repos/ollama/ollama/issues/3129/events
https://github.com/ollama/ollama/pull/3129
2,185,151,611
PR_kwDOJ0Z1Ps5pkViA
3,129
docs: pbcopy on mac
{ "login": "adrienbrault", "id": 611271, "node_id": "MDQ6VXNlcjYxMTI3MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/611271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adrienbrault", "html_url": "https://github.com/adrienbrault", "followers_url": "https://api.github.com/u...
[]
closed
false
null
[]
null
0
2024-03-14T00:29:06
2024-05-07T20:19:27
2024-05-06T20:47:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3129", "html_url": "https://github.com/ollama/ollama/pull/3129", "diff_url": "https://github.com/ollama/ollama/pull/3129.diff", "patch_url": "https://github.com/ollama/ollama/pull/3129.patch", "merged_at": "2024-05-06T20:47:00" }
Hey!
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3129/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3145
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3145/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3145/comments
https://api.github.com/repos/ollama/ollama/issues/3145/events
https://github.com/ollama/ollama/pull/3145
2,186,928,883
PR_kwDOJ0Z1Ps5pqVkA
3,145
docs: Add AnythingLLM to README as integration option
{ "login": "timothycarambat", "id": 16845892, "node_id": "MDQ6VXNlcjE2ODQ1ODky", "avatar_url": "https://avatars.githubusercontent.com/u/16845892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothycarambat", "html_url": "https://github.com/timothycarambat", "followers_url": "https://api...
[]
closed
false
null
[]
null
0
2024-03-14T17:43:37
2024-03-25T20:10:03
2024-03-25T18:54:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3145", "html_url": "https://github.com/ollama/ollama/pull/3145", "diff_url": "https://github.com/ollama/ollama/pull/3145.diff", "patch_url": "https://github.com/ollama/ollama/pull/3145.patch", "merged_at": "2024-03-25T18:54:48" }
Adding [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm) by Mintplex Labs (YCS22) as an integration option for Ollama. Supports all models with full RAG and on-device vector database and embedding. Supports Docker and has a native MacOS, Windows, and Linux application that can be used alongside Ollama.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3145/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/927
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/927/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/927/comments
https://api.github.com/repos/ollama/ollama/issues/927/events
https://github.com/ollama/ollama/issues/927
1,964,761,108
I_kwDOJ0Z1Ps51G-AU
927
error on push due to uppercase model name
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
3
2023-10-27T04:49:00
2024-02-20T00:56:04
2024-02-20T00:56:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It may be obvious to us but unless you know it's not obvious that a model needs to be named namespace/model. Can we make the error a bit more helpful? This is related to the uppercase issue with model names
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/927/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6316
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6316/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6316/comments
https://api.github.com/repos/ollama/ollama/issues/6316/events
https://github.com/ollama/ollama/issues/6316
2,459,914,511
I_kwDOJ0Z1Ps6Sn1EP
6,316
ollama create will use a large amount of disk space in the /tmp
{ "login": "garyyang85", "id": 20335728, "node_id": "MDQ6VXNlcjIwMzM1NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/20335728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garyyang85", "html_url": "https://github.com/garyyang85", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-08-12T02:22:03
2024-08-13T08:14:16
2024-08-12T02:26:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama create cmd will use a large amount of disk space in the /tmp directory by default. Is there a way to change the /tmp to other directory? ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version latest
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6316/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/453
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/453/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/453/comments
https://api.github.com/repos/ollama/ollama/issues/453/events
https://github.com/ollama/ollama/issues/453
1,877,942,749
I_kwDOJ0Z1Ps5v7yHd
453
Add some way to keep the model in memory
{ "login": "spott", "id": 53284, "node_id": "MDQ6VXNlcjUzMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/53284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spott", "html_url": "https://github.com/spott", "followers_url": "https://api.github.com/users/spott/followers", "f...
[]
closed
false
null
[]
null
1
2023-09-01T19:19:22
2023-09-01T19:22:08
2023-09-01T19:22:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It can take a while to load a model into memory, which currently needs to be done after every api call when using Ollama serve. ggml has a --mlock option that keeps the model in memory, so it can be repeatedly queried without falling out of memory, it would be great if there was a way to do the same with Ollama.
{ "login": "spott", "id": 53284, "node_id": "MDQ6VXNlcjUzMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/53284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spott", "html_url": "https://github.com/spott", "followers_url": "https://api.github.com/users/spott/followers", "f...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/453/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4014
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4014/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4014/comments
https://api.github.com/repos/ollama/ollama/issues/4014/events
https://github.com/ollama/ollama/issues/4014
2,267,951,595
I_kwDOJ0Z1Ps6HLjHr
4,014
Add support for Qwen-VL
{ "login": "dagehuifei", "id": 145953245, "node_id": "U_kgDOCLMR3Q", "avatar_url": "https://avatars.githubusercontent.com/u/145953245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dagehuifei", "html_url": "https://github.com/dagehuifei", "followers_url": "https://api.github.com/users/dag...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-04-29T01:30:37
2024-04-29T01:31:13
2024-04-29T01:31:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? https://huggingface.co/Qwen/Qwen-VL ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "dagehuifei", "id": 145953245, "node_id": "U_kgDOCLMR3Q", "avatar_url": "https://avatars.githubusercontent.com/u/145953245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dagehuifei", "html_url": "https://github.com/dagehuifei", "followers_url": "https://api.github.com/users/dag...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4014/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1765
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1765/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1765/comments
https://api.github.com/repos/ollama/ollama/issues/1765/events
https://github.com/ollama/ollama/issues/1765
2,063,840,510
I_kwDOJ0Z1Ps57A7T-
1,765
Can't pull .ggml local model
{ "login": "reddiamond1234", "id": 122911466, "node_id": "U_kgDOB1N66g", "avatar_url": "https://avatars.githubusercontent.com/u/122911466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reddiamond1234", "html_url": "https://github.com/reddiamond1234", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-01-03T11:32:46
2024-01-04T08:35:29
2024-01-04T08:35:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I created Modelfile: <code>FROM /models/phi-2.Q4_0.gguf TEMPLATE "[INST] {{ .Prompt }} [/INST]" PARAMETER temperature 0 PARAMETER num_ctx 2048 PARAMETER num_thread 6 PARAMETER top_k 40 PARAMETER top_p 0.95 </code> when i use command to create my custom model <code>ollama create phi2-SC -f ./models/modelfi...
{ "login": "reddiamond1234", "id": 122911466, "node_id": "U_kgDOB1N66g", "avatar_url": "https://avatars.githubusercontent.com/u/122911466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reddiamond1234", "html_url": "https://github.com/reddiamond1234", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1765/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3909
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3909/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3909/comments
https://api.github.com/repos/ollama/ollama/issues/3909/events
https://github.com/ollama/ollama/issues/3909
2,263,502,226
I_kwDOJ0Z1Ps6G6k2S
3,909
ollama can not run the custom model (finetune on llama3) on M1 max
{ "login": "TobyYang7", "id": 42986654, "node_id": "MDQ6VXNlcjQyOTg2NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/42986654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TobyYang7", "html_url": "https://github.com/TobyYang7", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-25T12:47:52
2024-10-23T17:48:34
2024-10-23T17:48:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ❯ ollama run InsuranceGPT "What is your favourite condiment?" Error: llama runner process no longer running: -1 error:check_tensor_dims: tensor 'blk.0.attn_k.weight' has wrong shape; expected 4096, 4096, got 4096, 1024, 1, 1 ### OS macOS ### GPU Apple ### CPU Apple ### Ollama...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3909/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2371
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2371/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2371/comments
https://api.github.com/repos/ollama/ollama/issues/2371/events
https://github.com/ollama/ollama/issues/2371
2,120,498,216
I_kwDOJ0Z1Ps5-ZDwo
2,371
Documents translation (Japanese)
{ "login": "jesseclin", "id": 34976014, "node_id": "MDQ6VXNlcjM0OTc2MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/34976014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jesseclin", "html_url": "https://github.com/jesseclin", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
3
2024-02-06T10:57:27
2024-03-12T18:46:36
2024-03-12T18:46:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have translated the documentation files into [Japanese ones](https://github.com/jesseclin/ollama/blob/main/README_ja.md) and will keep them updated. Should I submit a PR? or is it better to leave them there?
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2371/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2371/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1479
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1479/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1479/comments
https://api.github.com/repos/ollama/ollama/issues/1479/events
https://github.com/ollama/ollama/pull/1479
2,037,434,289
PR_kwDOJ0Z1Ps5hxqI5
1,479
Fix Readme "Database -> MindsDB" link
{ "login": "ruecat", "id": 79139779, "node_id": "MDQ6VXNlcjc5MTM5Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/79139779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruecat", "html_url": "https://github.com/ruecat", "followers_url": "https://api.github.com/users/ruecat/fo...
[]
closed
false
null
[]
null
0
2023-12-12T10:20:25
2023-12-12T15:26:14
2023-12-12T15:26:13
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1479", "html_url": "https://github.com/ollama/ollama/pull/1479", "diff_url": "https://github.com/ollama/ollama/pull/1479.diff", "patch_url": "https://github.com/ollama/ollama/pull/1479.patch", "merged_at": "2023-12-12T15:26:13" }
This pull request fixes markdown ("MindsDB" link in Readme)
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1479/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4874/comments
https://api.github.com/repos/ollama/ollama/issues/4874/events
https://github.com/ollama/ollama/pull/4874
2,338,834,962
PR_kwDOJ0Z1Ps5xtn2q
4,874
Rocm v6 bump
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-06-06T17:44:29
2024-06-15T14:38:35
2024-06-15T14:38:32
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4874", "html_url": "https://github.com/ollama/ollama/pull/4874", "diff_url": "https://github.com/ollama/ollama/pull/4874.diff", "patch_url": "https://github.com/ollama/ollama/pull/4874.patch", "merged_at": "2024-06-15T14:38:32" }
null
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4874/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5065
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5065/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5065/comments
https://api.github.com/repos/ollama/ollama/issues/5065/events
https://github.com/ollama/ollama/pull/5065
2,354,966,357
PR_kwDOJ0Z1Ps5ykU7O
5,065
README: add llmcord.py extension
{ "login": "jakobdylanc", "id": 38699060, "node_id": "MDQ6VXNlcjM4Njk5MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/38699060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jakobdylanc", "html_url": "https://github.com/jakobdylanc", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-06-15T15:52:59
2024-06-26T18:44:58
2024-06-26T18:44:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5065", "html_url": "https://github.com/ollama/ollama/pull/5065", "diff_url": "https://github.com/ollama/ollama/pull/5065.diff", "patch_url": "https://github.com/ollama/ollama/pull/5065.patch", "merged_at": null }
Repo: https://github.com/jakobdylanc/discord-llm-chatbot
{ "login": "jakobdylanc", "id": 38699060, "node_id": "MDQ6VXNlcjM4Njk5MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/38699060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jakobdylanc", "html_url": "https://github.com/jakobdylanc", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5065/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4661
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4661/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4661/comments
https://api.github.com/repos/ollama/ollama/issues/4661/events
https://github.com/ollama/ollama/pull/4661
2,318,967,829
PR_kwDOJ0Z1Ps5wp16V
4,661
llm/server.go: Fix 2 minor typos
{ "login": "coolljt0725", "id": 8232360, "node_id": "MDQ6VXNlcjgyMzIzNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coolljt0725", "html_url": "https://github.com/coolljt0725", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
0
2024-05-27T11:48:32
2024-05-28T02:25:25
2024-05-28T00:21:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4661", "html_url": "https://github.com/ollama/ollama/pull/4661", "diff_url": "https://github.com/ollama/ollama/pull/4661.diff", "patch_url": "https://github.com/ollama/ollama/pull/4661.patch", "merged_at": "2024-05-28T00:21:10" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4661/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4661/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/764
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/764/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/764/comments
https://api.github.com/repos/ollama/ollama/issues/764/events
https://github.com/ollama/ollama/issues/764
1,939,699,988
I_kwDOJ0Z1Ps5znXkU
764
How to multi threading with api << python >>
{ "login": "missandi", "id": 90961639, "node_id": "MDQ6VXNlcjkwOTYxNjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90961639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/missandi", "html_url": "https://github.com/missandi", "followers_url": "https://api.github.com/users/mis...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
4
2023-10-12T10:35:22
2023-12-22T03:35:54
2023-12-22T03:35:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
def generate(model_name, prompt, system=None, template=None, context=None, options=None, callback=None): try: url = f"{BASE_URL}/api/generate" payload = { "model": model_name, "prompt": prompt, "system": system, "template": template, ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/764/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2700
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2700/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2700/comments
https://api.github.com/repos/ollama/ollama/issues/2700/events
https://github.com/ollama/ollama/pull/2700
2,150,422,951
PR_kwDOJ0Z1Ps5nuEmb
2,700
Add clear history cli cmd
{ "login": "halfnibble", "id": 5139752, "node_id": "MDQ6VXNlcjUxMzk3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5139752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/halfnibble", "html_url": "https://github.com/halfnibble", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
2
2024-02-23T06:03:24
2024-09-05T02:36:35
2024-09-05T02:36:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2700", "html_url": "https://github.com/ollama/ollama/pull/2700", "diff_url": "https://github.com/ollama/ollama/pull/2700.diff", "patch_url": "https://github.com/ollama/ollama/pull/2700.patch", "merged_at": null }
After performing content safety testing on various models, I realized it would be nice to clear the history. Not sure if this warrants an api handler func like the other handlers? Also, I thought about abstracting the history path code.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2700/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2700/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6474
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6474/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6474/comments
https://api.github.com/repos/ollama/ollama/issues/6474/events
https://github.com/ollama/ollama/issues/6474
2,482,875,247
I_kwDOJ0Z1Ps6T_atv
6,474
Phi3.5 broken behaviour
{ "login": "derluke", "id": 6739699, "node_id": "MDQ6VXNlcjY3Mzk2OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6739699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derluke", "html_url": "https://github.com/derluke", "followers_url": "https://api.github.com/users/derluke/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
1
2024-08-23T10:51:29
2024-08-28T08:04:57
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? As mentioned by a few others in https://github.com/ollama/ollama/issues/6449 the phi3.5 models never stop responding and quickly become nonsensical example: ![image](https://github.com/user-attachments/assets/15ad7703-6380-4d33-a8f8-1cd121575231) ### OS WSL2 ### GPU Nvidia ### CPU AM...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6474/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6474/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3399
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3399/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3399/comments
https://api.github.com/repos/ollama/ollama/issues/3399/events
https://github.com/ollama/ollama/issues/3399
2,214,315,153
I_kwDOJ0Z1Ps6D-8SR
3,399
New Model: "Jamba" (Production Grade Mamba by ai21)
{ "login": "Marviel", "id": 2037165, "node_id": "MDQ6VXNlcjIwMzcxNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/2037165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Marviel", "html_url": "https://github.com/Marviel", "followers_url": "https://api.github.com/users/Marviel/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
2
2024-03-28T23:11:14
2024-05-02T06:30:00
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? https://www.maginative.com/article/ai21-labs-unveils-jamba-the-first-production-grade-mamba-based-ai-model/ https://huggingface.co/ai21labs/Jamba-v0.1?ref=maginative.com ------ Thanks for the excellent software :)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3399/reactions", "total_count": 16, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 6 }
https://api.github.com/repos/ollama/ollama/issues/3399/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1209
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1209/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1209/comments
https://api.github.com/repos/ollama/ollama/issues/1209/events
https://github.com/ollama/ollama/issues/1209
2,002,508,046
I_kwDOJ0Z1Ps53W9kO
1,209
Stuck on verifying sha256 digest
{ "login": "lelehier", "id": 106826977, "node_id": "U_kgDOBl4M4Q", "avatar_url": "https://avatars.githubusercontent.com/u/106826977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lelehier", "html_url": "https://github.com/lelehier", "followers_url": "https://api.github.com/users/lelehier/...
[]
closed
false
null
[]
null
7
2023-11-20T15:53:32
2024-11-07T14:35:56
2023-12-05T00:02:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
So I installed ollama via docker. First few modells to pull worked flawlessly, but at some point ollama got stuck at the sha256 verifying stage. So i tried to setup a new docker container. First modell worked without issues. The second one I tried got stuck at the sha256 step. Now I installed ollama with the provided ...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1209/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1050
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1050/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1050/comments
https://api.github.com/repos/ollama/ollama/issues/1050/events
https://github.com/ollama/ollama/issues/1050
1,984,591,996
I_kwDOJ0Z1Ps52Snh8
1,050
default codellama web server
{ "login": "kritma", "id": 127416565, "node_id": "U_kgDOB5g49Q", "avatar_url": "https://avatars.githubusercontent.com/u/127416565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kritma", "html_url": "https://github.com/kritma", "followers_url": "https://api.github.com/users/kritma/follower...
[]
closed
false
null
[]
null
2
2023-11-09T00:07:15
2023-12-08T23:50:30
2023-12-04T23:13:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
#### `ollama run codellama`, starts web server which listen on port 60263(or dynamic, idk) * Is this is ok? * How can i change this port? <img width="754" alt="image" src="https://github.com/jmorganca/ollama/assets/127416565/d0a688a5-2317-404b-8d71-f9b5b9bdb603">
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1050/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7338
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7338/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7338/comments
https://api.github.com/repos/ollama/ollama/issues/7338/events
https://github.com/ollama/ollama/pull/7338
2,609,992,917
PR_kwDOJ0Z1Ps5_sD3Z
7,338
Better test and handle Unicode
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-10-23T22:53:51
2024-10-29T01:12:31
2024-10-29T01:12:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7338", "html_url": "https://github.com/ollama/ollama/pull/7338", "diff_url": "https://github.com/ollama/ollama/pull/7338.diff", "patch_url": "https://github.com/ollama/ollama/pull/7338.patch", "merged_at": "2024-10-29T01:12:29" }
Recent releases have hit Unicode bugs, which should be better tested. In addition, when we do have failures, we should handle them more gracefully. This test currently fails on Windows (due to #7311) and passes on other platforms. Will hold this patch until the one fixing that is merged.
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7338/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7344
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7344/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7344/comments
https://api.github.com/repos/ollama/ollama/issues/7344/events
https://github.com/ollama/ollama/issues/7344
2,612,026,456
I_kwDOJ0Z1Ps6bsFxY
7,344
after some time idle / phone standby , getting to the termux ollama run cmd makes it restart the dl from 0
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
open
false
null
[]
null
10
2024-10-24T16:07:31
2024-12-06T09:32:29
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? so i know ollama can resume downloads but the following issue happened to me now the second time on a different model dl i run ollama run model it downloads .. i can switch apps , switch back to termux ollama , no problem but after some screen of time i return to termux and see it just bega...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7344/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7916
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7916/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7916/comments
https://api.github.com/repos/ollama/ollama/issues/7916/events
https://github.com/ollama/ollama/issues/7916
2,715,088,299
I_kwDOJ0Z1Ps6h1PWr
7,916
Develop a Qt QML Client for Ollama
{ "login": "ebrahimi1989", "id": 19800872, "node_id": "MDQ6VXNlcjE5ODAwODcy", "avatar_url": "https://avatars.githubusercontent.com/u/19800872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ebrahimi1989", "html_url": "https://github.com/ebrahimi1989", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-12-03T13:50:04
2024-12-14T15:38:41
2024-12-14T15:38:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
## Description I would like to request the development of a **Qt QML client for Ollama**. This client would provide a cross-platform, user-friendly graphical interface to interact with Ollama's API and manage local AI models. Qt QML is an excellent choice for creating visually appealing and highly responsive user ...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7916/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7916/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4375
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4375/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4375/comments
https://api.github.com/repos/ollama/ollama/issues/4375/events
https://github.com/ollama/ollama/issues/4375
2,291,338,407
I_kwDOJ0Z1Ps6Ikwyn
4,375
Model Request: IBM Granite
{ "login": "Fix3dll", "id": 10743391, "node_id": "MDQ6VXNlcjEwNzQzMzkx", "avatar_url": "https://avatars.githubusercontent.com/u/10743391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fix3dll", "html_url": "https://github.com/Fix3dll", "followers_url": "https://api.github.com/users/Fix3dl...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-05-12T13:39:07
2024-05-12T13:42:01
2024-05-12T13:42:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/ibm-granite
{ "login": "Fix3dll", "id": 10743391, "node_id": "MDQ6VXNlcjEwNzQzMzkx", "avatar_url": "https://avatars.githubusercontent.com/u/10743391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fix3dll", "html_url": "https://github.com/Fix3dll", "followers_url": "https://api.github.com/users/Fix3dl...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4375/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8575
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8575/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8575/comments
https://api.github.com/repos/ollama/ollama/issues/8575/events
https://github.com/ollama/ollama/issues/8575
2,810,789,793
I_kwDOJ0Z1Ps6niT-h
8,575
MiniCPM-o-2_6
{ "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2025-01-25T05:51:08
2025-01-25T08:39:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://hf-mirror.com/openbmb/MiniCPM-o-2_6 thanks.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8575/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8575/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2031
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2031/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2031/comments
https://api.github.com/repos/ollama/ollama/issues/2031/events
https://github.com/ollama/ollama/issues/2031
2,086,054,577
I_kwDOJ0Z1Ps58Vqqx
2,031
Is the Ollama.app necessary after installation
{ "login": "LeonardoGentile", "id": 412061, "node_id": "MDQ6VXNlcjQxMjA2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/412061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonardoGentile", "html_url": "https://github.com/LeonardoGentile", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
7
2024-01-17T12:08:02
2024-01-27T00:40:18
2024-01-27T00:40:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was unsure what the Ollama.app was installing on mac but after it did its thing I've realized ollama is installed under `/usr/local/bin/ollama` which I could have done using brew or similar installation processes. I've realized my models are under `~/.ollama` so my question is: Is the `Ollama.app` still necessary...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2031/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1451
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1451/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1451/comments
https://api.github.com/repos/ollama/ollama/issues/1451/events
https://github.com/ollama/ollama/issues/1451
2,034,225,058
I_kwDOJ0Z1Ps55P8-i
1,451
[FEAT] One directory to model them all
{ "login": "kfsone", "id": 323009, "node_id": "MDQ6VXNlcjMyMzAwOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/323009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kfsone", "html_url": "https://github.com/kfsone", "followers_url": "https://api.github.com/users/kfsone/follow...
[]
closed
false
null
[]
null
4
2023-12-10T05:21:39
2023-12-19T19:37:17
2023-12-19T19:37:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please consider adding a way to allow Ollama to share models with other resources/tools. Either by allowing a "models dir" config setting/option somewhere, or a modelmap.yaml file: ``` - mistral-7b-instruct: - presents-as: Mistral-7B-Instruct-v0.1 - folder: /opt/ai/models/TheBloke/Mistral-7B-Instruct-v01-GGUF...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1451/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4108
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4108/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4108/comments
https://api.github.com/repos/ollama/ollama/issues/4108/events
https://github.com/ollama/ollama/pull/4108
2,276,572,087
PR_kwDOJ0Z1Ps5ualdJ
4,108
fix line ending
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-05-02T21:54:18
2024-05-02T21:55:19
2024-05-02T21:55:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4108", "html_url": "https://github.com/ollama/ollama/pull/4108", "diff_url": "https://github.com/ollama/ollama/pull/4108.diff", "patch_url": "https://github.com/ollama/ollama/pull/4108.patch", "merged_at": "2024-05-02T21:55:15" }
replace CRLF with LF CRLF leaves the file in a perpetually dirty state on non-windows systems without a way to reset
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4108/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/561
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/561/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/561/comments
https://api.github.com/repos/ollama/ollama/issues/561/events
https://github.com/ollama/ollama/issues/561
1,905,778,006
I_kwDOJ0Z1Ps5xl91W
561
Unexpected EOF with Falcon:40b
{ "login": "henry-prince-addepar", "id": 80268918, "node_id": "MDQ6VXNlcjgwMjY4OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/80268918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/henry-prince-addepar", "html_url": "https://github.com/henry-prince-addepar", "followers_url...
[]
closed
false
null
[]
null
4
2023-09-20T22:03:10
2023-10-22T06:18:35
2023-09-23T18:37:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm getting an error from `falcon:40b`. Any help would be greatly appreciated. I'm currently running MacOS 13.5.2 (22G91) on a M1 Max with 32 GB of RAM. Thanks in advance! ``` ➜ ~ ollama pull falcon:40b pulling manifest pulling a4a6e73500b0... 100% |███████████████████████████████████████████████████████████████...
{ "login": "henry-prince-addepar", "id": 80268918, "node_id": "MDQ6VXNlcjgwMjY4OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/80268918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/henry-prince-addepar", "html_url": "https://github.com/henry-prince-addepar", "followers_url...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/561/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1248
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1248/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1248/comments
https://api.github.com/repos/ollama/ollama/issues/1248/events
https://github.com/ollama/ollama/issues/1248
2,007,179,277
I_kwDOJ0Z1Ps53oyAN
1,248
v0.1.11 Crashes on Intel Mac
{ "login": "10REMSSeiller", "id": 20466077, "node_id": "MDQ6VXNlcjIwNDY2MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/20466077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/10REMSSeiller", "html_url": "https://github.com/10REMSSeiller", "followers_url": "https://api.githu...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
3
2023-11-22T22:09:07
2023-11-27T06:06:05
2023-11-27T06:06:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
v0.1.9 ran successfully on my Mac, but v0.1.11 causes crash. I'm not sure why. Below is excerpt of crash log. I was able to revert and run v0.1.9. For verification, I trashed the original ~/.ollama and application support folders and reinstalled v0.1.11. Same results. What other info is needed? > Process: ...
{ "login": "10REMSSeiller", "id": 20466077, "node_id": "MDQ6VXNlcjIwNDY2MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/20466077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/10REMSSeiller", "html_url": "https://github.com/10REMSSeiller", "followers_url": "https://api.githu...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1248/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1330
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1330/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1330/comments
https://api.github.com/repos/ollama/ollama/issues/1330/events
https://github.com/ollama/ollama/issues/1330
2,018,738,115
I_kwDOJ0Z1Ps54U3_D
1,330
Installation downloaded cuda likely unnecessarily
{ "login": "folovco", "id": 142908483, "node_id": "U_kgDOCIScQw", "avatar_url": "https://avatars.githubusercontent.com/u/142908483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/folovco", "html_url": "https://github.com/folovco", "followers_url": "https://api.github.com/users/folovco/foll...
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2023-11-30T14:03:55
2024-03-12T16:14:52
2024-03-12T16:14:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
For quick testing on a CPU-only server it would seem possibly advantageous to avoid downloading the cuda drivers if the https://ollama.ai/install.sh script can determine there is no Nvidia device to use. >>> Installing ollama to /usr/local/bin... >>> Creating ollama user... >>> Addi...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1330/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1030
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1030/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1030/comments
https://api.github.com/repos/ollama/ollama/issues/1030/events
https://github.com/ollama/ollama/issues/1030
1,981,226,199
I_kwDOJ0Z1Ps52FxzX
1,030
WizardCoder models lack a prompt template
{ "login": "Nan-Do", "id": 3844058, "node_id": "MDQ6VXNlcjM4NDQwNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/3844058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nan-Do", "html_url": "https://github.com/Nan-Do", "followers_url": "https://api.github.com/users/Nan-Do/foll...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2023-11-07T12:22:23
2023-11-16T22:58:38
2023-11-16T22:58:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have been using the WizardCoder models and they do not use a template, which makes the quality of the output substantially worse: sometimes not writing Python code, and other times not offering an answer at all. I have been trying to see how to contribute this to the model but I haven't seen any feasible way to do ...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1030/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2319
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2319/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2319/comments
https://api.github.com/repos/ollama/ollama/issues/2319/events
https://github.com/ollama/ollama/issues/2319
2,114,066,687
I_kwDOJ0Z1Ps5-Ahj_
2,319
Distributed LLM support?
{ "login": "Donno191", "id": 10705947, "node_id": "MDQ6VXNlcjEwNzA1OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10705947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Donno191", "html_url": "https://github.com/Donno191", "followers_url": "https://api.github.com/users/Don...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
11
2024-02-02T05:04:03
2024-10-02T01:25:47
2024-04-08T16:53:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have 3 PCs with a 3090 and 1 PC with a 4090. Currently I am running Ollama using my 4090 and it is working great for loading different models on the go, but the bottleneck is loading larger models and bigger context windows into the 24 GB of VRAM. It would be great to have something like Petals or MPI on llama.cpp. IDE...
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2319/reactions", "total_count": 15, "+1": 15, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2319/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6938
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6938/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6938/comments
https://api.github.com/repos/ollama/ollama/issues/6938/events
https://github.com/ollama/ollama/pull/6938
2,545,953,147
PR_kwDOJ0Z1Ps58jIa6
6,938
add CLI completion for commands
{ "login": "pranitbauva1997", "id": 2959938, "node_id": "MDQ6VXNlcjI5NTk5Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/2959938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pranitbauva1997", "html_url": "https://github.com/pranitbauva1997", "followers_url": "https://api.g...
[]
open
false
null
[]
null
2
2024-09-24T17:20:40
2025-01-03T09:07:23
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6938", "html_url": "https://github.com/ollama/ollama/pull/6938", "diff_url": "https://github.com/ollama/ollama/pull/6938.diff", "patch_url": "https://github.com/ollama/ollama/pull/6938.patch", "merged_at": null }
For example, `ollama ru<TAB>` should complete to `ollama run`. TODO: `ollama run gemma2:<TAB>` should show all options for parameters. Currently, I have to visit ollama.com/library to verify. I need help with this as I can't easily find a list of all models. If I have the list, I can finish this as well. Please ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6938/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4571
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4571/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4571/comments
https://api.github.com/repos/ollama/ollama/issues/4571/events
https://github.com/ollama/ollama/pull/4571
2,309,712,780
PR_kwDOJ0Z1Ps5wKMQ2
4,571
chore: update tokenizer.go
{ "login": "eltociear", "id": 22633385, "node_id": "MDQ6VXNlcjIyNjMzMzg1", "avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltociear", "html_url": "https://github.com/eltociear", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-05-22T06:47:45
2024-05-22T07:25:23
2024-05-22T07:25:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4571", "html_url": "https://github.com/ollama/ollama/pull/4571", "diff_url": "https://github.com/ollama/ollama/pull/4571.diff", "patch_url": "https://github.com/ollama/ollama/pull/4571.patch", "merged_at": "2024-05-22T07:25:23" }
PreTokenziers -> PreTokenizers
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4571/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8180
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8180/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8180/comments
https://api.github.com/repos/ollama/ollama/issues/8180/events
https://github.com/ollama/ollama/issues/8180
2,751,366,967
I_kwDOJ0Z1Ps6j_oc3
8,180
How to speed up model
{ "login": "QichangZheng", "id": 82627111, "node_id": "MDQ6VXNlcjgyNjI3MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/82627111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/QichangZheng", "html_url": "https://github.com/QichangZheng", "followers_url": "https://api.github.c...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
1
2024-12-19T20:33:36
2024-12-20T21:35:48
2024-12-20T21:35:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have 96GB VRAM and llama3.3 only takes up half. Can I utilize the VRAM to speed up the model?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8180/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1912
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1912/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1912/comments
https://api.github.com/repos/ollama/ollama/issues/1912/events
https://github.com/ollama/ollama/issues/1912
2,075,345,176
I_kwDOJ0Z1Ps57s0EY
1,912
Will Magicoder-S-DS-6.7B ever come back?
{ "login": "reaperkrew", "id": 25416226, "node_id": "MDQ6VXNlcjI1NDE2MjI2", "avatar_url": "https://avatars.githubusercontent.com/u/25416226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reaperkrew", "html_url": "https://github.com/reaperkrew", "followers_url": "https://api.github.com/use...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
2
2024-01-10T22:45:31
2024-11-12T01:43:53
2024-11-12T01:43:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi Everyone, I've heard a lot of good things about Magicoder-S-DS-6.7B. From browsing through some previously closed threads in this repository, it looks like at some point in early December of 2023 Magicoder-S-DS-6.7B was available. Does anyone know if it will come back? Thanks
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1912/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1912/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1428
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1428/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1428/comments
https://api.github.com/repos/ollama/ollama/issues/1428/events
https://github.com/ollama/ollama/pull/1428
2,031,754,662
PR_kwDOJ0Z1Ps5hed4z
1,428
document response in modelfile template variables
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
0
2023-12-08T01:02:21
2024-01-08T19:38:52
2024-01-08T19:38:51
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1428", "html_url": "https://github.com/ollama/ollama/pull/1428", "diff_url": "https://github.com/ollama/ollama/pull/1428.diff", "patch_url": "https://github.com/ollama/ollama/pull/1428.patch", "merged_at": "2024-01-08T19:38:51" }
Document #1427, to be merged on next release
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1428/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5184
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5184/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5184/comments
https://api.github.com/repos/ollama/ollama/issues/5184/events
https://github.com/ollama/ollama/issues/5184
2,364,586,768
I_kwDOJ0Z1Ps6M8LsQ
5,184
`ollama show` should have the exact parameter count rounded to 3 digits
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 7706482389, "node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q...
open
false
null
[]
null
1
2024-06-20T14:21:38
2024-11-06T01:18:52
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` % ollama show llama3 Model arch llama parameters 8.0B quantization Q4_0 context length 8192 ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5184/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7514
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7514/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7514/comments
https://api.github.com/repos/ollama/ollama/issues/7514/events
https://github.com/ollama/ollama/issues/7514
2,636,162,814
I_kwDOJ0Z1Ps6dIKb-
7,514
Realtime API like OpenAI's (full-fledged voice-to-voice integrations)
{ "login": "ryzxxn", "id": 89019551, "node_id": "MDQ6VXNlcjg5MDE5NTUx", "avatar_url": "https://avatars.githubusercontent.com/u/89019551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryzxxn", "html_url": "https://github.com/ryzxxn", "followers_url": "https://api.github.com/users/ryzxxn/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
11
2024-11-05T18:19:26
2024-12-23T01:11:15
2024-12-23T01:11:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If anyone is working on a realtime API-like integration with Ollama, please reach out to me. I am working on a similar integration, and I think feedback from all the amazing people can greatly impact the quality of this feature. I think it's pretty cool what OpenAI has going for it, and I am also a big fan of running eve...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7514/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7514/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/663
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/663/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/663/comments
https://api.github.com/repos/ollama/ollama/issues/663/events
https://github.com/ollama/ollama/pull/663
1,920,709,180
PR_kwDOJ0Z1Ps5bnFep
663
Fix for #586, seed and temperature settings
{ "login": "hallh", "id": 12785324, "node_id": "MDQ6VXNlcjEyNzg1MzI0", "avatar_url": "https://avatars.githubusercontent.com/u/12785324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hallh", "html_url": "https://github.com/hallh", "followers_url": "https://api.github.com/users/hallh/follow...
[]
closed
false
null
[]
null
5
2023-10-01T11:12:37
2023-10-04T13:51:15
2023-10-02T18:54:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/663", "html_url": "https://github.com/ollama/ollama/pull/663", "diff_url": "https://github.com/ollama/ollama/pull/663.diff", "patch_url": "https://github.com/ollama/ollama/pull/663.patch", "merged_at": null }
Fix for #586. Seed was omitted in the params to the llama.cpp server and temperature had an `omitempty` filter specified, breaking support for `0` temperature.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/663/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/663/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1586
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1586/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1586/comments
https://api.github.com/repos/ollama/ollama/issues/1586/events
https://github.com/ollama/ollama/issues/1586
2,047,386,180
I_kwDOJ0Z1Ps56CKJE
1,586
ollama models corrupted?
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
[]
closed
false
null
[]
null
5
2023-12-18T20:19:16
2023-12-30T00:42:41
2023-12-29T04:50:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've noticed that after running a few models, sometimes the models don't behave normally. This is a session where that was occurring. I had first tried with bakllava but it wasn't being helpful either. But notice that after I did the systemctl restart ollama the results were much better. Is something being corrupte...
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1586/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/97
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/97/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/97/comments
https://api.github.com/repos/ollama/ollama/issues/97/events
https://github.com/ollama/ollama/pull/97
1,809,156,548
PR_kwDOJ0Z1Ps5VvpV0
97
add new list command
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-07-18T05:43:08
2023-07-18T16:09:46
2023-07-18T16:09:45
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/97", "html_url": "https://github.com/ollama/ollama/pull/97", "diff_url": "https://github.com/ollama/ollama/pull/97.diff", "patch_url": "https://github.com/ollama/ollama/pull/97.patch", "merged_at": "2023-07-18T16:09:45" }
This change lets you list each of the models you have pulled locally.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/97/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/97/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7770
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7770/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7770/comments
https://api.github.com/repos/ollama/ollama/issues/7770/events
https://github.com/ollama/ollama/pull/7770
2,677,372,026
PR_kwDOJ0Z1Ps6Clrmj
7,770
Add Orbiton to the README.md file
{ "login": "xyproto", "id": 52813, "node_id": "MDQ6VXNlcjUyODEz", "avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyproto", "html_url": "https://github.com/xyproto", "followers_url": "https://api.github.com/users/xyproto/follower...
[]
closed
false
null
[]
null
0
2024-11-20T22:34:32
2024-11-21T08:15:30
2024-11-21T07:24:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7770", "html_url": "https://github.com/ollama/ollama/pull/7770", "diff_url": "https://github.com/ollama/ollama/pull/7770.diff", "patch_url": "https://github.com/ollama/ollama/pull/7770.patch", "merged_at": "2024-11-21T07:24:05" }
Orbiton is a configuration-free text editor and IDE that can use Ollama for tab-completion.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7770/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1218
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1218/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1218/comments
https://api.github.com/repos/ollama/ollama/issues/1218/events
https://github.com/ollama/ollama/pull/1218
2,003,902,464
PR_kwDOJ0Z1Ps5gACQv
1,218
Update Maid repo
{ "login": "danemadsen", "id": 11537699, "node_id": "MDQ6VXNlcjExNTM3Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/11537699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danemadsen", "html_url": "https://github.com/danemadsen", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
0
2023-11-21T10:03:47
2023-11-21T14:30:34
2023-11-21T14:30:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1218", "html_url": "https://github.com/ollama/ollama/pull/1218", "diff_url": "https://github.com/ollama/ollama/pull/1218.diff", "patch_url": "https://github.com/ollama/ollama/pull/1218.patch", "merged_at": "2023-11-21T14:30:34" }
Sorry for the extra PR, but I noticed I accidentally linked my personal repo instead of the main repo.
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1218/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6159
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6159/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6159/comments
https://api.github.com/repos/ollama/ollama/issues/6159/events
https://github.com/ollama/ollama/issues/6159
2,447,074,706
I_kwDOJ0Z1Ps6R22WS
6,159
Bunny family of VLMs
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/follower...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-08-04T10:37:13
2024-08-04T10:37:13
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Bunny is a family of very promising VLMs. They are already supported by llama.cpp https://github.com/BAAI-DCAI/Bunny v1.1 4b: https://huggingface.co/BAAI/Bunny-v1_1-4B v1.1 llama3 8b: https://huggingface.co/BAAI/Bunny-v1_1-Llama-3-8B-V
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6159/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6620
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6620/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6620/comments
https://api.github.com/repos/ollama/ollama/issues/6620/events
https://github.com/ollama/ollama/pull/6620
2,503,941,663
PR_kwDOJ0Z1Ps56Uijn
6,620
Use cuda v11 for driver 525 and older
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-09-03T22:45:52
2024-09-04T00:15:34
2024-09-04T00:15:31
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6620", "html_url": "https://github.com/ollama/ollama/pull/6620", "diff_url": "https://github.com/ollama/ollama/pull/6620.diff", "patch_url": "https://github.com/ollama/ollama/pull/6620.patch", "merged_at": "2024-09-04T00:15:31" }
It looks like driver 525 (i.e. CUDA driver 12.0) has problems with the CUDA v12 library we compile against, so run v11 on those older drivers if detected. Fixes #6556
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6620/timeline
null
null
true
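The PR record above describes detecting driver 525 (which reports CUDA 12.0) and older, and falling back to the CUDA v11 runtime there. A minimal sketch of that selection rule, assuming a version tuple is already known (the function name and return labels are illustrative, not Ollama's actual Go code):

```python
def pick_cuda_runtime(driver_cuda_version: tuple) -> str:
    """Choose which bundled CUDA runtime library to load.

    Driver 525 reports CUDA 12.0, which has problems with libraries
    compiled against newer CUDA 12 toolkits, so fall back to v11
    for 12.0 and anything older.
    """
    if driver_cuda_version <= (12, 0):
        return "cuda_v11"
    return "cuda_v12"

print(pick_cuda_runtime((12, 0)))  # driver 525 -> cuda_v11
print(pick_cuda_runtime((12, 4)))  # newer driver -> cuda_v12
```

Tuple comparison makes the cutoff check a one-liner; the real detection additionally has to parse the version out of the driver first.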
https://api.github.com/repos/ollama/ollama/issues/1928
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1928/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1928/comments
https://api.github.com/repos/ollama/ollama/issues/1928/events
https://github.com/ollama/ollama/issues/1928
2,077,163,575
I_kwDOJ0Z1Ps57zwA3
1,928
Prevent offloading
{ "login": "Hansson0728", "id": 9604420, "node_id": "MDQ6VXNlcjk2MDQ0MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9604420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hansson0728", "html_url": "https://github.com/Hansson0728", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
6
2024-01-11T16:55:24
2024-01-28T22:30:50
2024-01-28T22:30:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The model offloads after 5 minutes on the API; it would be nice to be able to prevent this.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1928/reactions", "total_count": 7, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1928/timeline
null
completed
false
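Issue #1928 above asks to stop the model unloading after the five-minute idle timeout. Ollama later gained a per-request `keep_alive` option for this; a sketch of a request payload using it (the model name and prompt are placeholders):

```python
import json

# keep_alive controls how long the model stays loaded after the request:
# a duration string such as "10m", 0 to unload immediately, or -1 to keep
# the model in memory indefinitely.
payload = {
    "model": "llama2",
    "prompt": "why is the sky blue?",
    "keep_alive": -1,
}
print(json.dumps(payload))
```

POSTing this shape to `/api/generate` keeps the model resident between calls instead of letting it idle out.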
https://api.github.com/repos/ollama/ollama/issues/8635
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8635/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8635/comments
https://api.github.com/repos/ollama/ollama/issues/8635/events
https://github.com/ollama/ollama/issues/8635
2,815,779,059
I_kwDOJ0Z1Ps6n1WDz
8,635
Use of system RAM over RDMA in GPU to allow for GPU acceleration on lower-VRAM hardware.
{ "login": "SlinkierElm5611", "id": 52179385, "node_id": "MDQ6VXNlcjUyMTc5Mzg1", "avatar_url": "https://avatars.githubusercontent.com/u/52179385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SlinkierElm5611", "html_url": "https://github.com/SlinkierElm5611", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-28T14:05:32
2025-01-28T14:05:32
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi all! I'm a GPU dev who has been messing around with Ollama for some self-hosting. I was wondering if there is any reason Ollama has not been able to take advantage of GPU acceleration while using system RAM through RDMA (reBAR). I have done system RAM access through RDMA on GPU for real-time processing and have had ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8635/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3117
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3117/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3117/comments
https://api.github.com/repos/ollama/ollama/issues/3117/events
https://github.com/ollama/ollama/issues/3117
2,184,544,645
I_kwDOJ0Z1Ps6CNYGF
3,117
Api /tags should include type for embedding model or llm
{ "login": "Hansson0728", "id": 9604420, "node_id": "MDQ6VXNlcjk2MDQ0MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9604420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hansson0728", "html_url": "https://github.com/Hansson0728", "followers_url": "https://api.github.com/us...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
5
2024-03-13T17:30:07
2024-11-06T17:57:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
As the title says, it would be nice to have that information so we can filter out embedd models if we want to allow for model switching on a frontend
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3117/reactions", "total_count": 6, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/3117/timeline
null
null
false
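Until `/api/tags` carries the model-type field requested in issue #3117 above, a frontend can only filter embedding models heuristically on the client side, for example by name. A sketch under that assumption (the hint list and sample response are illustrative):

```python
# Heuristic client-side filter over a /api/tags-style response, since the
# response does not carry an embedding-vs-LLM type field.
EMBED_HINTS = ("embed", "bge", "minilm")

def chat_models(tags_response: dict) -> list:
    names = [m["name"] for m in tags_response.get("models", [])]
    return [n for n in names if not any(h in n.lower() for h in EMBED_HINTS)]

sample = {"models": [{"name": "llama2:latest"},
                     {"name": "nomic-embed-text:latest"}]}
print(chat_models(sample))  # ['llama2:latest']
```

A name-based heuristic is exactly the fragility the issue wants to remove; a server-side type field would make this filter unnecessary.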
https://api.github.com/repos/ollama/ollama/issues/6850
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6850/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6850/comments
https://api.github.com/repos/ollama/ollama/issues/6850/events
https://github.com/ollama/ollama/pull/6850
2,532,475,441
PR_kwDOJ0Z1Ps571Xsr
6,850
allow ctrl-j to add a new line + fix multiline bracketed paste
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
open
false
null
[]
null
1
2024-09-18T01:17:00
2024-09-20T18:13:20
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6850", "html_url": "https://github.com/ollama/ollama/pull/6850", "diff_url": "https://github.com/ollama/ollama/pull/6850.diff", "patch_url": "https://github.com/ollama/ollama/pull/6850.patch", "merged_at": null }
This change allows users to use Ctrl-J to send a newline when typing in the terminal (the equivalent of Ctrl-Enter). It also fixes a glitch in bracketed paste mode if `"""` was part of the paste. Fixes #3387 #6674
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6850/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3242
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3242/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3242/comments
https://api.github.com/repos/ollama/ollama/issues/3242/events
https://github.com/ollama/ollama/issues/3242
2,194,553,155
I_kwDOJ0Z1Ps6CzjlD
3,242
When I run the model my CPU usage is high but GPU usage is low
{ "login": "wangshuai67", "id": 13214849, "node_id": "MDQ6VXNlcjEzMjE0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/13214849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangshuai67", "html_url": "https://github.com/wangshuai67", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
9
2024-03-19T10:15:12
2024-04-15T22:56:29
2024-04-15T22:56:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? 1. This is the gpu information of docker container ![image](https://github.com/ollama/ollama/assets/13214849/d3d81824-5854-4fcf-b9fd-83bfa0d1c4ec) ![image](https://github.com/ollama/ollama/assets/13214849/426ab93f-4b91-4dfd-b178-c9112e0b944f) 2. This is the gpu information of the host mac...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3242/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3242/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5642
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5642/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5642/comments
https://api.github.com/repos/ollama/ollama/issues/5642/events
https://github.com/ollama/ollama/issues/5642
2,404,449,099
I_kwDOJ0Z1Ps6PUPtL
5,642
退出后显存仍在占用 - Video memory is still occupied after exiting
{ "login": "gfkdliucheng", "id": 24772003, "node_id": "MDQ6VXNlcjI0NzcyMDAz", "avatar_url": "https://avatars.githubusercontent.com/u/24772003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gfkdliucheng", "html_url": "https://github.com/gfkdliucheng", "followers_url": "https://api.github.c...
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-07-12T00:51:54
2024-08-09T23:39:10
2024-08-09T23:39:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Video memory is still occupied after exiting ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.2
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5642/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7712
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7712/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7712/comments
https://api.github.com/repos/ollama/ollama/issues/7712/events
https://github.com/ollama/ollama/pull/7712
2,666,759,805
PR_kwDOJ0Z1Ps6CLP44
7,712
Update README.md
{ "login": "samirgaire10", "id": 118608337, "node_id": "U_kgDOBxHR0Q", "avatar_url": "https://avatars.githubusercontent.com/u/118608337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samirgaire10", "html_url": "https://github.com/samirgaire10", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
0
2024-11-18T00:11:01
2024-11-18T05:04:01
2024-11-18T03:53:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7712", "html_url": "https://github.com/ollama/ollama/pull/7712", "diff_url": "https://github.com/ollama/ollama/pull/7712.diff", "patch_url": "https://github.com/ollama/ollama/pull/7712.patch", "merged_at": null }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7712/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1534
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1534/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1534/comments
https://api.github.com/repos/ollama/ollama/issues/1534/events
https://github.com/ollama/ollama/issues/1534
2,042,687,646
I_kwDOJ0Z1Ps55wPCe
1,534
macOS M2 32 GB -- processing failed
{ "login": "enzyme69", "id": 3952687, "node_id": "MDQ6VXNlcjM5NTI2ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/3952687?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enzyme69", "html_url": "https://github.com/enzyme69", "followers_url": "https://api.github.com/users/enzym...
[]
closed
false
null
[]
null
5
2023-12-15T00:14:12
2024-01-08T21:42:03
2024-01-08T21:42:03
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I get error message: "Error: llama runner process has terminated" Does that mean it run out of memory? Is it possible to make it smaller?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1534/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2301
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2301/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2301/comments
https://api.github.com/repos/ollama/ollama/issues/2301/events
https://github.com/ollama/ollama/issues/2301
2,111,401,682
I_kwDOJ0Z1Ps592W7S
2,301
Batching support in Ollama
{ "login": "canamika27", "id": 41502651, "node_id": "MDQ6VXNlcjQxNTAyNjUx", "avatar_url": "https://avatars.githubusercontent.com/u/41502651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/canamika27", "html_url": "https://github.com/canamika27", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
3
2024-02-01T03:08:39
2024-02-05T19:24:23
2024-02-05T19:24:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Does Ollama support batching?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2301/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2973
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2973/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2973/comments
https://api.github.com/repos/ollama/ollama/issues/2973/events
https://github.com/ollama/ollama/pull/2973
2,173,079,555
PR_kwDOJ0Z1Ps5o7I3k
2,973
fix some typos
{ "login": "hishope", "id": 153272819, "node_id": "U_kgDOCSLB8w", "avatar_url": "https://avatars.githubusercontent.com/u/153272819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hishope", "html_url": "https://github.com/hishope", "followers_url": "https://api.github.com/users/hishope/foll...
[]
closed
false
null
[]
null
0
2024-03-07T06:42:44
2024-03-07T06:50:12
2024-03-07T06:50:12
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2973", "html_url": "https://github.com/ollama/ollama/pull/2973", "diff_url": "https://github.com/ollama/ollama/pull/2973.diff", "patch_url": "https://github.com/ollama/ollama/pull/2973.patch", "merged_at": "2024-03-07T06:50:12" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2973/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4977
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4977/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4977/comments
https://api.github.com/repos/ollama/ollama/issues/4977/events
https://github.com/ollama/ollama/issues/4977
2,346,155,878
I_kwDOJ0Z1Ps6L139m
4,977
qwen2-72b starts to output gibberish at some point if I set num_ctx to 8192
{ "login": "Mikhael-Danilov", "id": 536516, "node_id": "MDQ6VXNlcjUzNjUxNg==", "avatar_url": "https://avatars.githubusercontent.com/u/536516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mikhael-Danilov", "html_url": "https://github.com/Mikhael-Danilov", "followers_url": "https://api.git...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
4
2024-06-11T11:19:23
2024-08-27T08:18:25
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? qwen2-72b starts to output gibberish like this: `.5"5.F9(CB;6@FC9!DC:$B$D60G5",3B+2;1-*,@%=876E0;5*:.98G4!980+D` at some point if I set num_ctx to 8192. Normal output from the LLM was expected. The issue persists when using `ollama run` or when using the API (Silly Tavern). qwen2-72b works fine with ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4977/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3970
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3970/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3970/comments
https://api.github.com/repos/ollama/ollama/issues/3970/events
https://github.com/ollama/ollama/pull/3970
2,266,821,897
PR_kwDOJ0Z1Ps5t5h1H
3,970
types/model: remove Digest (for now)
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[]
closed
false
null
[]
null
0
2024-04-27T03:59:55
2024-04-27T04:14:29
2024-04-27T04:14:28
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3970", "html_url": "https://github.com/ollama/ollama/pull/3970", "diff_url": "https://github.com/ollama/ollama/pull/3970.diff", "patch_url": "https://github.com/ollama/ollama/pull/3970.patch", "merged_at": "2024-04-27T04:14:28" }
The Digest type needs more thought and is not necessary at the moment.
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3970/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8308
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8308/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8308/comments
https://api.github.com/repos/ollama/ollama/issues/8308/events
https://github.com/ollama/ollama/pull/8308
2,769,135,279
PR_kwDOJ0Z1Ps6GvnxW
8,308
llama: update vendored code to commit 46e3556
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2025-01-05T06:19:27
2025-01-08T19:22:04
2025-01-08T19:22:01
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8308", "html_url": "https://github.com/ollama/ollama/pull/8308", "diff_url": "https://github.com/ollama/ollama/pull/8308.diff", "patch_url": "https://github.com/ollama/ollama/pull/8308.patch", "merged_at": "2025-01-08T19:22:01" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8308/reactions", "total_count": 19, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 4 }
https://api.github.com/repos/ollama/ollama/issues/8308/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8243
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8243/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8243/comments
https://api.github.com/repos/ollama/ollama/issues/8243/events
https://github.com/ollama/ollama/issues/8243
2,759,160,472
I_kwDOJ0Z1Ps6kdXKY
8,243
glm-edge-v-5b-gguf:Q6_K blk.0.attn_qkv.weight
{ "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-12-26T01:24:58
2024-12-26T01:24:58
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
PS C:\Users\Administrator> ollama run modelscope.cn/ZhipuAI/glm-edge-v-5b-gguf:Q6_K Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.attn_qkv.weight'
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8243/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1992
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1992/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1992/comments
https://api.github.com/repos/ollama/ollama/issues/1992/events
https://github.com/ollama/ollama/issues/1992
2,080,878,134
I_kwDOJ0Z1Ps58B642
1,992
CUDA GPU is too old
{ "login": "tlaanemaa", "id": 10545187, "node_id": "MDQ6VXNlcjEwNTQ1MTg3", "avatar_url": "https://avatars.githubusercontent.com/u/10545187?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tlaanemaa", "html_url": "https://github.com/tlaanemaa", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
4
2024-01-14T20:05:00
2024-05-06T18:16:54
2024-01-14T22:10:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello. First of all, thanks for bringing us this awesome project! I have a pretty old GPU, Nvidia GTX 970, but it used to work fine with Ollama 0.1.15. Now I upgraded to 0.1.20 and I get the following error: ``` 2024/01/14 19:50:06 gpu.go:88: Detecting GPU type 2024/01/14 19:50:06 gpu.go:203: Searching for GP...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1992/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4374
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4374/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4374/comments
https://api.github.com/repos/ollama/ollama/issues/4374/events
https://github.com/ollama/ollama/issues/4374
2,291,272,472
I_kwDOJ0Z1Ps6IkgsY
4,374
How to write a script so that it will remember the last conversation
{ "login": "View-my-Git-Lab-krafi", "id": 121858831, "node_id": "U_kgDOB0NrDw", "avatar_url": "https://avatars.githubusercontent.com/u/121858831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/View-my-Git-Lab-krafi", "html_url": "https://github.com/View-my-Git-Lab-krafi", "followers_url": ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
11
2024-05-12T10:32:19
2024-05-14T17:45:43
2024-05-14T17:45:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` You:: my name is rafi AI: Nice to meet you, Rafi ! You:: what was my name AI: I apologize , but I don 't think we ever established a specific name for you in our conversation! ``` **whether i use the** localhost:11434/api/generate or http://localhost:11434/api/chat same resul...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4374/timeline
null
completed
false
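The behavior reported in issue #4374 above follows from `/api/generate` and `/api/chat` being stateless per request: the model only "remembers" what the client resends in the `messages` array each turn. A sketch of that client-side bookkeeping, with the HTTP call replaced by a stub so it runs standalone:

```python
# /api/chat is stateless per request; conversation memory comes from the
# client resending the full messages list every turn.
def stub_model(messages):
    # Stand-in for the POST to /api/chat; reports how many turns it saw.
    return f"(model saw {len(messages)} messages)"

history = []

def ask(user_text, model=stub_model):
    history.append({"role": "user", "content": user_text})
    reply = model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

ask("my name is rafi")
print(ask("what was my name"))  # second call carries the earlier turns
```

Dropping the `history.append` calls reproduces the reported symptom: every request starts a fresh conversation.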
https://api.github.com/repos/ollama/ollama/issues/6478
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6478/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6478/comments
https://api.github.com/repos/ollama/ollama/issues/6478/events
https://github.com/ollama/ollama/issues/6478
2,483,404,693
I_kwDOJ0Z1Ps6UBb-V
6,478
Add linux start command to docs
{ "login": "bdytx5", "id": 32812705, "node_id": "MDQ6VXNlcjMyODEyNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/32812705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdytx5", "html_url": "https://github.com/bdytx5", "followers_url": "https://api.github.com/users/bdytx5/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
5
2024-08-23T15:39:22
2024-08-24T20:09:20
2024-08-23T20:55:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
nohup ollama serve &
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6478/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8291
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8291/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8291/comments
https://api.github.com/repos/ollama/ollama/issues/8291/events
https://github.com/ollama/ollama/issues/8291
2,766,900,877
I_kwDOJ0Z1Ps6k646N
8,291
Disable CPU offload when running an LLM
{ "login": "verigle", "id": 32769358, "node_id": "MDQ6VXNlcjMyNzY5MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/32769358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/verigle", "html_url": "https://github.com/verigle", "followers_url": "https://api.github.com/users/verigl...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2025-01-03T02:52:35
2025-01-16T00:06:00
2025-01-16T00:06:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The model is automatically offloaded to CPU although more than one GPU is free, so I want to disable CPU offload for LLM inference. > 94%/6% CPU/GPU
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8291/timeline
null
not_planned
false
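The issue above asks to keep a model fully on GPU. A minimal sketch, assuming the documented `num_gpu` option in Ollama's REST API (the number of model layers to place on the GPU): a request can ask for full GPU placement by setting it high. The model name, prompt, and layer count below are illustrative placeholders, not values from the issue.

```python
import json


def build_generate_request(model: str, prompt: str, num_gpu: int) -> str:
    """Build a JSON body for POST /api/generate.

    num_gpu is Ollama's documented per-request option for how many model
    layers to place on the GPU; a large value requests full GPU placement
    (assumption: enough VRAM is actually free, otherwise the scheduler
    may still fall back to CPU).
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "options": {"num_gpu": num_gpu},
    }
    return json.dumps(payload)


# Example with placeholder values:
body = build_generate_request("llama3.1", "hello", num_gpu=99)
```

The resulting body would be sent to the local server, e.g. `curl http://localhost:11434/api/generate -d @body.json`.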
https://api.github.com/repos/ollama/ollama/issues/7693
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7693/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7693/comments
https://api.github.com/repos/ollama/ollama/issues/7693/events
https://github.com/ollama/ollama/pull/7693
2,663,114,892
PR_kwDOJ0Z1Ps6CFyyT
7,693
[docs] [modelfile.md] num_predict: incorrect default value
{ "login": "owboson", "id": 115831817, "node_id": "U_kgDOBud0CQ", "avatar_url": "https://avatars.githubusercontent.com/u/115831817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/owboson", "html_url": "https://github.com/owboson", "followers_url": "https://api.github.com/users/owboson/foll...
[]
closed
false
null
[]
null
1
2024-11-15T20:51:47
2024-12-03T23:00:05
2024-12-03T23:00:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7693", "html_url": "https://github.com/ollama/ollama/pull/7693", "diff_url": "https://github.com/ollama/ollama/pull/7693.diff", "patch_url": "https://github.com/ollama/ollama/pull/7693.patch", "merged_at": "2024-12-03T23:00:05" }
The default value for `num_predict` in the documentations was incorrect (see https://github.com/ollama/ollama/issues/7691#issuecomment-2479856306). Fixes #7691
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7693/timeline
null
null
true
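The PR above corrects the documented default for `num_predict`. A minimal sketch of where the option lives in a request, assuming the documented `/api/generate` options format (per the docs, `-1` requests unbounded generation and `-2` fills the context); the model name and token cap are placeholders.

```python
def with_num_predict(payload: dict, num_predict: int) -> dict:
    """Return a copy of an /api/generate payload with num_predict set.

    num_predict caps how many tokens are generated for this request;
    it overrides whatever default the Modelfile or server applies.
    """
    out = dict(payload)
    options = dict(out.get("options", {}))
    options["num_predict"] = num_predict
    out["options"] = options
    return out


req = with_num_predict({"model": "llama3.1", "prompt": "hi"}, 64)
```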
https://api.github.com/repos/ollama/ollama/issues/1584
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1584/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1584/comments
https://api.github.com/repos/ollama/ollama/issues/1584/events
https://github.com/ollama/ollama/issues/1584
2,047,376,587
I_kwDOJ0Z1Ps56CHzL
1,584
is ollama server down?
{ "login": "ralyodio", "id": 27381, "node_id": "MDQ6VXNlcjI3Mzgx", "avatar_url": "https://avatars.githubusercontent.com/u/27381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ralyodio", "html_url": "https://github.com/ralyodio", "followers_url": "https://api.github.com/users/ralyodio/foll...
[]
closed
false
null
[]
null
7
2023-12-18T20:12:36
2023-12-21T11:46:21
2023-12-19T15:09:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I've been getting "Server connection error" for the past few hours with ollama-webui
{ "login": "ralyodio", "id": 27381, "node_id": "MDQ6VXNlcjI3Mzgx", "avatar_url": "https://avatars.githubusercontent.com/u/27381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ralyodio", "html_url": "https://github.com/ralyodio", "followers_url": "https://api.github.com/users/ralyodio/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1584/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6956
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6956/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6956/comments
https://api.github.com/repos/ollama/ollama/issues/6956/events
https://github.com/ollama/ollama/issues/6956
2,548,322,589
I_kwDOJ0Z1Ps6X5FEd
6,956
Why doesn't the model know which model it is?
{ "login": "robotom", "id": 45123215, "node_id": "MDQ6VXNlcjQ1MTIzMjE1", "avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/robotom", "html_url": "https://github.com/robotom", "followers_url": "https://api.github.com/users/roboto...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
3
2024-09-25T15:34:17
2024-09-26T16:05:45
2024-09-26T16:05:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? If I load Llama 3.1 8B and ask it which model it is, it does not know what Llama 3.1 is at all. Sometimes it thinks it's Llama 3 or a 7B-param model. Is there any reason for this? How can I be sure what I am running, except for whatever `ollama ps` reports? (running on a 4070 8GB VRAM and i7-13...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6956/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3267
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3267/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3267/comments
https://api.github.com/repos/ollama/ollama/issues/3267/events
https://github.com/ollama/ollama/issues/3267
2,197,128,530
I_kwDOJ0Z1Ps6C9YVS
3,267
CUDA Error when changing models
{ "login": "iamashwin99", "id": 46030335, "node_id": "MDQ6VXNlcjQ2MDMwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/46030335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamashwin99", "html_url": "https://github.com/iamashwin99", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-03-20T10:01:07
2024-04-15T22:58:04
2024-04-15T22:58:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I ran a query on ollama 0.1.29, first using `llama2`, then `nomic-embed-text`, and then back to `llama2`. On the third change of model I get the CUDA error: ```console llama_new_context_with_model: CUDA7 compute buffer size = 3.00 MiB llama_new_context_with_model: CUDA_Host compu...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3267/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1390
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1390/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1390/comments
https://api.github.com/repos/ollama/ollama/issues/1390/events
https://github.com/ollama/ollama/issues/1390
2,026,757,275
I_kwDOJ0Z1Ps54zdyb
1,390
`ollama create` not working
{ "login": "almonk", "id": 51724, "node_id": "MDQ6VXNlcjUxNzI0", "avatar_url": "https://avatars.githubusercontent.com/u/51724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/almonk", "html_url": "https://github.com/almonk", "followers_url": "https://api.github.com/users/almonk/followers", ...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2023-12-05T17:20:31
2023-12-05T20:18:02
2023-12-05T20:18:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Following the `Modelfile` tutorial in the readme, I can't get `ollama create` to work. My modelfile is as follows: ``` FROM codellama:13b-instruct SYSTEM """ You are Mario from super mario bros, acting as an assistant. """ ``` When I attempt to create from the modelfile, I get the following error: ```...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1390/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/596
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/596/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/596/comments
https://api.github.com/repos/ollama/ollama/issues/596/events
https://github.com/ollama/ollama/pull/596
1,912,410,622
PR_kwDOJ0Z1Ps5bLFCt
596
update install.sh
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-25T23:12:52
2023-09-26T00:59:14
2023-09-26T00:59:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/596", "html_url": "https://github.com/ollama/ollama/pull/596", "diff_url": "https://github.com/ollama/ollama/pull/596.diff", "patch_url": "https://github.com/ollama/ollama/pull/596.patch", "merged_at": "2023-09-26T00:59:14" }
This prevents the service from restarting too early and failing to detect the GPU before drivers are installed. Fixes PATH for the WSL user. WSL preinstalls the CUDA toolkit, but it's in a non-standard path (`/usr/lib/wsl/lib`). While this is set for a normal WSL user, it's not set for the ollama user. This change sets the PATH of the oll...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/596/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6654
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6654/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6654/comments
https://api.github.com/repos/ollama/ollama/issues/6654/events
https://github.com/ollama/ollama/issues/6654
2,507,378,407
I_kwDOJ0Z1Ps6Vc47n
6,654
Multi-instance does not seem to be working
{ "login": "bigsausage", "id": 22679135, "node_id": "MDQ6VXNlcjIyNjc5MTM1", "avatar_url": "https://avatars.githubusercontent.com/u/22679135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bigsausage", "html_url": "https://github.com/bigsausage", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-09-05T10:19:01
2024-09-06T01:44:20
2024-09-05T16:16:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I want to use multiple processes to increase the concurrency of my server. I used the following command to start the server first: `CUDA_VISIBLE_DEVICES=3 OLLAMA_NUM_PARALLEL=3 OLLAMA_MAX_LOADED_MODELS=3 /usr/bin/ollama serve` Then I created 3 different copies of the model: `ollama create my_lama3_1 ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6654/timeline
null
completed
false
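The launch line from the issue above, split out as a shell fragment for readability. The environment variable names are Ollama's documented server settings; the GPU index and counts are the values from the issue. With these set, separate model copies should not be needed: a single model name serves up to `OLLAMA_NUM_PARALLEL` requests concurrently (a sketch of the setup, not a definitive tuning recommendation).

```shell
# OLLAMA_NUM_PARALLEL: concurrent requests handled per loaded model.
# OLLAMA_MAX_LOADED_MODELS: distinct models kept in memory at once.
# CUDA_VISIBLE_DEVICES: restrict the server to one GPU (index from the issue).
export CUDA_VISIBLE_DEVICES=3
export OLLAMA_NUM_PARALLEL=3
export OLLAMA_MAX_LOADED_MODELS=3
# /usr/bin/ollama serve   # uncomment to start the server with these settings
```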
https://api.github.com/repos/ollama/ollama/issues/5507
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5507/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5507/comments
https://api.github.com/repos/ollama/ollama/issues/5507/events
https://github.com/ollama/ollama/pull/5507
2,393,208,927
PR_kwDOJ0Z1Ps50kfwE
5,507
llm: put back old include dir
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-07-05T22:43:28
2024-07-05T23:34:23
2024-07-05T23:34:21
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5507", "html_url": "https://github.com/ollama/ollama/pull/5507", "diff_url": "https://github.com/ollama/ollama/pull/5507.diff", "patch_url": "https://github.com/ollama/ollama/pull/5507.patch", "merged_at": "2024-07-05T23:34:21" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5507/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3673
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3673/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3673/comments
https://api.github.com/repos/ollama/ollama/issues/3673/events
https://github.com/ollama/ollama/issues/3673
2,246,100,159
I_kwDOJ0Z1Ps6F4MS_
3,673
truly opensource model called olmo
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/foll...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-04-16T13:40:58
2024-04-20T13:03:43
2024-04-16T23:15:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? Built with a truly open dataset, OLMo is a fully open-source model. Can this be supported in Ollama? Thanks. https://allenai.org/olmo https://huggingface.co/allenai/OLMo-7B
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3673/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5388
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5388/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5388/comments
https://api.github.com/repos/ollama/ollama/issues/5388/events
https://github.com/ollama/ollama/issues/5388
2,382,035,386
I_kwDOJ0Z1Ps6N-vm6
5,388
Ollama fails to create model when blob is already present and drive is full
{ "login": "thot-experiment", "id": 94414189, "node_id": "U_kgDOBaClbQ", "avatar_url": "https://avatars.githubusercontent.com/u/94414189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thot-experiment", "html_url": "https://github.com/thot-experiment", "followers_url": "https://api.github....
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
[ { "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api....
null
5
2024-06-30T01:25:59
2024-08-12T16:28:56
2024-08-12T16:28:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? If I try to create a model from a Modelfile that references an existing blob, Ollama (I think) copies the entire blob to a temp file before realizing it already exists and creating the model. This makes model import take needlessly long and means you need to have extra free space on your drive to...
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/jos...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5388/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2822
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2822/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2822/comments
https://api.github.com/repos/ollama/ollama/issues/2822/events
https://github.com/ollama/ollama/issues/2822
2,160,194,172
I_kwDOJ0Z1Ps6AwfJ8
2,822
multiple idle ollama threads for each ollama serve process
{ "login": "aiseei", "id": 30615541, "node_id": "MDQ6VXNlcjMwNjE1NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/30615541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aiseei", "html_url": "https://github.com/aiseei", "followers_url": "https://api.github.com/users/aiseei/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-29T01:52:01
2024-04-09T15:05:15
2024-03-20T16:29:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ubuntu 20.04. We run a small proxy that creates multiple `ollama serve` processes on different ports. I have noticed in htop that there are a ton of threads created but not disposed of under each parent/master process. This looks to be from every generate API call. Does ollama not manage this? Is there a workaround to safely c...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2822/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7795
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7795/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7795/comments
https://api.github.com/repos/ollama/ollama/issues/7795/events
https://github.com/ollama/ollama/issues/7795
2,682,642,543
I_kwDOJ0Z1Ps6f5eBv
7,795
Empty output from chat-endpoint / non-empty endpoint for non-chat endpoint
{ "login": "Tomas2D", "id": 15633909, "node_id": "MDQ6VXNlcjE1NjMzOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/15633909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomas2D", "html_url": "https://github.com/Tomas2D", "followers_url": "https://api.github.com/users/Tomas2...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
17
2024-11-22T09:44:47
2025-01-12T08:20:41
2024-12-09T19:02:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I request the chat endpoint with the attached request body, I receive an empty response (the content is an empty string) with `done_reason: stop`. When I send the exact same request (just wrapped in the appropriate model's template) to the generate (non-chat) endpoint, I receive the cor...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7795/timeline
null
completed
false
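The issue above contrasts the templated chat endpoint with a manually templated generate request. A minimal sketch of the two request bodies, assuming Ollama's documented `/api/chat` and `/api/generate` endpoints and the `raw` flag (which skips server-side templating); model names and message text are placeholders.

```python
import json


def chat_body(model: str, user_msg: str) -> str:
    """JSON body for POST /api/chat.

    The server applies the model's chat template to the messages list.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "stream": False,
    })


def generate_body(model: str, templated_prompt: str) -> str:
    """JSON body for POST /api/generate with `raw` set.

    With raw=True the prompt is passed through verbatim -- the
    manually-wrapped case described in the issue.
    """
    return json.dumps({
        "model": model,
        "prompt": templated_prompt,
        "raw": True,
        "stream": False,
    })
```

Comparing the two responses for the same effective prompt helps isolate whether the template application, rather than the model, produces the empty output.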
https://api.github.com/repos/ollama/ollama/issues/2802
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2802/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2802/comments
https://api.github.com/repos/ollama/ollama/issues/2802/events
https://github.com/ollama/ollama/issues/2802
2,158,235,139
I_kwDOJ0Z1Ps6ApA4D
2,802
Madlad400 model
{ "login": "malipetek", "id": 13527277, "node_id": "MDQ6VXNlcjEzNTI3Mjc3", "avatar_url": "https://avatars.githubusercontent.com/u/13527277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/malipetek", "html_url": "https://github.com/malipetek", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
7
2024-02-28T06:44:47
2024-09-16T11:45:06
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I wanted to test [madlad400](https://huggingface.co/jbochi/madlad400-3b-mt/blob/main/model-q4k.gguf), which is said to be a great translation model. I downloaded the GGUF and created a Modelfile with the model's name containing only a FROM line. It looks like the model was created, but when I test-run it, it outputs 2 empty lines for some r...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2802/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2802/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5057
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5057/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5057/comments
https://api.github.com/repos/ollama/ollama/issues/5057/events
https://github.com/ollama/ollama/issues/5057
2,354,580,827
I_kwDOJ0Z1Ps6MWA1b
5,057
Is the model save location different between automatic startup through 'systemctl' and manual 'serve'?
{ "login": "wszgrcy", "id": 9607121, "node_id": "MDQ6VXNlcjk2MDcxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wszgrcy", "html_url": "https://github.com/wszgrcy", "followers_url": "https://api.github.com/users/wszgrcy/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-06-15T06:14:48
2024-06-21T01:52:55
2024-06-21T01:52:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I pulled the model `qwen2:7b`, and with `ollama list` I can see it: ``` > ollama list NAME ID SIZE MODIFIED qwen2:7b e0d4e1163c58 4.4 GB 6 hours ago qwen:7b 2091ee8c8d8f 4.5 GB 2 weeks ago ``` But when I `systemctl stop` and then run `ollama serve` and `ollama list`, the model can' find...
{ "login": "wszgrcy", "id": 9607121, "node_id": "MDQ6VXNlcjk2MDcxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wszgrcy", "html_url": "https://github.com/wszgrcy", "followers_url": "https://api.github.com/users/wszgrcy/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5057/timeline
null
completed
false
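The issue above likely comes down to which user's home directory the server reads. A minimal sketch of the resolution logic, assuming the Linux defaults described in Ollama's FAQ: `OLLAMA_MODELS` overrides everything; otherwise the current user's `~/.ollama/models` is used, which is why the systemd service (running as user `ollama`, typically `/usr/share/ollama/.ollama/models`) and a manual `ollama serve` can list different models. The paths here are the install-script defaults and may differ on custom installs.

```python
import os
from pathlib import Path

# Typical location used by the systemd service on Linux (user "ollama"):
SYSTEMD_DEFAULT = Path("/usr/share/ollama/.ollama/models")


def models_dir() -> Path:
    """Resolve where the current process would look for models.

    OLLAMA_MODELS wins if set; otherwise the running user's home is
    used, so two different users see two different model stores.
    """
    override = os.environ.get("OLLAMA_MODELS")
    if override:
        return Path(override)
    return Path.home() / ".ollama" / "models"
```

Setting `OLLAMA_MODELS` to the same directory in both the service unit and the interactive shell makes the two startup modes agree.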
https://api.github.com/repos/ollama/ollama/issues/284
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/284/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/284/comments
https://api.github.com/repos/ollama/ollama/issues/284/events
https://github.com/ollama/ollama/pull/284
1,837,002,931
PR_kwDOJ0Z1Ps5XNhWW
284
update to nous-hermes modelfile
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[]
closed
false
null
[]
null
0
2023-08-04T15:57:43
2023-08-08T23:04:49
2023-08-08T23:04:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/284", "html_url": "https://github.com/ollama/ollama/pull/284", "diff_url": "https://github.com/ollama/ollama/pull/284.diff", "patch_url": "https://github.com/ollama/ollama/pull/284.patch", "merged_at": null }
- The nous-hermes model will now accept a system prompt from a model that uses nous-hermes. - also updated the midjourney-prompter to use a better name. as per Hugging Face (https://huggingface.co/NousResearch/Nous-Hermes-13b#prompt-format), the prompt template is: ``` Prompt Format The model follows the Alpac...
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/284/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2768
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2768/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2768/comments
https://api.github.com/repos/ollama/ollama/issues/2768/events
https://github.com/ollama/ollama/issues/2768
2,154,679,947
I_kwDOJ0Z1Ps6Abc6L
2,768
Ollama Not Running Failing to Load
{ "login": "TankMan649", "id": 124530160, "node_id": "U_kgDOB2wt8A", "avatar_url": "https://avatars.githubusercontent.com/u/124530160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TankMan649", "html_url": "https://github.com/TankMan649", "followers_url": "https://api.github.com/users/Tan...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-02-26T17:07:41
2024-03-12T00:00:04
2024-03-11T23:59:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I keep encountering a problem with Ollama and when it has been solved I have no idea how it was solved and everything I am doing to solve it nothing works. I am running a Python script with LangChain and Ollama testing it on a simple Gradio interface. Let me emphasize this is a script that has worked before and NOTH...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2768/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6318
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6318/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6318/comments
https://api.github.com/repos/ollama/ollama/issues/6318/events
https://github.com/ollama/ollama/issues/6318
2,460,056,009
I_kwDOJ0Z1Ps6SoXnJ
6,318
ollama.app cannot open on my macbookpro with m3 pro
{ "login": "Spockkk0225", "id": 54880260, "node_id": "MDQ6VXNlcjU0ODgwMjYw", "avatar_url": "https://avatars.githubusercontent.com/u/54880260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spockkk0225", "html_url": "https://github.com/Spockkk0225", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-08-12T05:11:49
2024-09-02T22:01:22
2024-09-02T22:01:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? environment: macbook pro, m3 pro, 18gb memory, Sonoma 14.4.1 the ollama.app cannot be opened with double click it reports segmentation fault when I execute it in terminal >>> /Applications/Ollama.app/Contents/MacOS/ollama <<< segmentation fault /Applications/Ollama.app/Contents/MacO...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6318/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5331
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5331/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5331/comments
https://api.github.com/repos/ollama/ollama/issues/5331/events
https://github.com/ollama/ollama/issues/5331
2,378,585,021
I_kwDOJ0Z1Ps6NxlO9
5,331
version 1.47 downloaded, gemma2 error
{ "login": "MeDott29", "id": 13264408, "node_id": "MDQ6VXNlcjEzMjY0NDA4", "avatar_url": "https://avatars.githubusercontent.com/u/13264408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MeDott29", "html_url": "https://github.com/MeDott29", "followers_url": "https://api.github.com/users/MeD...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
12
2024-06-27T16:17:09
2024-06-29T23:08:22
2024-06-29T23:06:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` Jun 27 12:06:15 ollama[11759]: INFO [main] build info | build=1 commit="7c26775" tid="124734763667456" timestamp=1719504375 Jun 27 12:06:15 ollama[11759]: INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5331/timeline
null
completed
false