Dataset schema (one column per line: name, type, observed length/value range or class count). The issue/PR records below follow this column order, one value per line.

url: string (lengths 51 to 54)
repository_url: string (1 value)
labels_url: string (lengths 65 to 68)
comments_url: string (lengths 60 to 63)
events_url: string (lengths 58 to 61)
html_url: string (lengths 39 to 44)
id: int64 (1.78B to 2.82B)
node_id: string (lengths 18 to 19)
number: int64 (1 to 8.69k)
title: string (lengths 1 to 382)
user: dict
labels: list (lengths 0 to 5)
state: string (2 values)
locked: bool (1 class)
assignee: dict
assignees: list (lengths 0 to 2)
milestone: null
comments: int64 (0 to 323)
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string (4 values)
sub_issues_summary: dict
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: string (lengths 2 to 118k)
closed_by: dict
reactions: dict
timeline_url: string (lengths 60 to 63)
performed_via_github_app: null
state_reason: string (4 values)
is_pull_request: bool (2 classes)
https://api.github.com/repos/ollama/ollama/issues/718
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/718/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/718/comments
https://api.github.com/repos/ollama/ollama/issues/718/events
https://github.com/ollama/ollama/pull/718
1,930,409,238
PR_kwDOJ0Z1Ps5cH0SF
718
not found error before pulling model
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-10-06T15:17:05
2023-10-06T20:06:21
2023-10-06T20:06:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/718", "html_url": "https://github.com/ollama/ollama/pull/718", "diff_url": "https://github.com/ollama/ollama/pull/718.diff", "patch_url": "https://github.com/ollama/ollama/pull/718.patch", "merged_at": "2023-10-06T20:06:20" }
When attempting to run a model through the API before pulling it, a cryptic "no such file or directory" error was returned with the error path. Improve this error to suggest pulling the model first, like the CLI does automatically. ``` curl -X 'POST' -d '{"prompt":"hello", "model": "mistral"}' 'http://127.0.0.1:11...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/718/timeline
null
null
true
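The PR body above (PR 718) describes the flow that produced the unhelpful "no such file or directory" error: posting a generate request for a model that has not been pulled yet. The snippet below is a minimal illustrative sketch, not code from the PR, assuming the default Ollama listen address 127.0.0.1:11434 and the /api/generate route; the suggested error wording is hypothetical.

```python
# Illustrative only: call the local Ollama generate API and, when the model is
# missing, point the user at `ollama pull` (the behaviour the PR adds server-side).
import json
import urllib.error
import urllib.request

def generate(model: str, prompt: str) -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
    except urllib.error.HTTPError as err:
        # Before the fix the server surfaced a raw file-path error; afterwards it
        # suggests pulling the model first, as the CLI already does automatically.
        raise SystemExit(f"model '{model}' not available (HTTP {err.code}); "
                         f"try `ollama pull {model}` first")

if __name__ == "__main__":
    print(generate("mistral", "hello"))
```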
https://api.github.com/repos/ollama/ollama/issues/3744
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3744/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3744/comments
https://api.github.com/repos/ollama/ollama/issues/3744/events
https://github.com/ollama/ollama/issues/3744
2,251,941,390
I_kwDOJ0Z1Ps6GOeYO
3,744
Download the models with alternative tools
{ "login": "pepo-ec", "id": 1961172, "node_id": "MDQ6VXNlcjE5NjExNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/1961172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pepo-ec", "html_url": "https://github.com/pepo-ec", "followers_url": "https://api.github.com/users/pepo-ec/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
5
2024-04-19T02:20:16
2024-11-30T15:12:12
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How can I download models with other tools like wget/curl and then import them to a local Ollama server? When I download a model **it takes up all the available bandwidth** and I want to be able to control the bandwidth so that it takes longer but does not leave my LAN without connectivity
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3744/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3744/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8085
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8085/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8085/comments
https://api.github.com/repos/ollama/ollama/issues/8085/events
https://github.com/ollama/ollama/issues/8085
2,737,963,986
I_kwDOJ0Z1Ps6jMgPS
8,085
ollama : /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found - Kylin Linux glibc++ version incompatible with official builds
{ "login": "ouber23", "id": 7042434, "node_id": "MDQ6VXNlcjcwNDI0MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7042434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ouber23", "html_url": "https://github.com/ouber23", "followers_url": "https://api.github.com/users/ouber23/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-12-13T09:43:47
2025-01-06T19:19:08
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When running ollama the following error happens: ollama : /usr/lib64/libstdc++.so.6: version GLIBCXX_3.4.25 not found ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8085/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2574
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2574/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2574/comments
https://api.github.com/repos/ollama/ollama/issues/2574/events
https://github.com/ollama/ollama/issues/2574
2,140,986,066
I_kwDOJ0Z1Ps5_nNrS
2,574
OLLAMA_MODELS Directory
{ "login": "shersoni610", "id": 57876250, "node_id": "MDQ6VXNlcjU3ODc2MjUw", "avatar_url": "https://avatars.githubusercontent.com/u/57876250?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shersoni610", "html_url": "https://github.com/shersoni610", "followers_url": "https://api.github.com/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
10
2024-02-18T13:17:30
2025-01-26T19:08:53
2024-03-14T00:19:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I am running Ollama on a Linux machine (zsh shell). I set the environment variable OLLAMA_MODELS to point to an external hard drive: export OLLAMA_MODELS=/home/akbar/Disk2/Models/Ollama/models However, the models are still stored in the /usr/share/ollama/.ollama folder. I wish to store all the models to an ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2574/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2574/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5571
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5571/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5571/comments
https://api.github.com/repos/ollama/ollama/issues/5571/events
https://github.com/ollama/ollama/issues/5571
2,398,158,082
I_kwDOJ0Z1Ps6O8P0C
5,571
`CUDA error: unspecified launch failure` on inference on Nvidia V100 GPUs
{ "login": "louisbrulenaudet", "id": 35007448, "node_id": "MDQ6VXNlcjM1MDA3NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/35007448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louisbrulenaudet", "html_url": "https://github.com/louisbrulenaudet", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
7
2024-07-09T13:00:25
2024-07-10T20:17:14
2024-07-10T20:17:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi everyone, Users of older versions of Ollama have no problems, but with the new version, an error appears during inference. This seems to be linked to an error during the process of copying data between host and device ([cudaMemcpyAsync](https://docs.nvidia.com/cuda/cuda-runtime-api/group...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5571/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3215
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3215/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3215/comments
https://api.github.com/repos/ollama/ollama/issues/3215/events
https://github.com/ollama/ollama/issues/3215
2,191,457,641
I_kwDOJ0Z1Ps6Cnv1p
3,215
Access Denied Using LocalTunnel or Ngrok
{ "login": "Sonali-Behera-TRT", "id": 131662185, "node_id": "U_kgDOB9kBaQ", "avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sonali-Behera-TRT", "html_url": "https://github.com/Sonali-Behera-TRT", "followers_url": "https://api...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
11
2024-03-18T07:46:21
2024-12-12T02:52:17
2024-03-18T09:18:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am unable to access my Ollama server locally using LocalTunnel or Ngrok. When attempting to access the server through the provided URL, I receive a `403 Forbidden` error message. I am using Ollama on Colab/Kaggle to utilize free GPU access. Ollama operates within a containerized environme...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3215/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/ollama/ollama/issues/3215/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5698
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5698/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5698/comments
https://api.github.com/repos/ollama/ollama/issues/5698/events
https://github.com/ollama/ollama/issues/5698
2,408,174,807
I_kwDOJ0Z1Ps6PidTX
5,698
add support MiniCPM-Llama3-V-2_5
{ "login": "LDLINGLINGLING", "id": 47373076, "node_id": "MDQ6VXNlcjQ3MzczMDc2", "avatar_url": "https://avatars.githubusercontent.com/u/47373076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LDLINGLINGLING", "html_url": "https://github.com/LDLINGLINGLING", "followers_url": "https://api.gi...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-07-15T08:37:13
2024-08-28T21:48:08
2024-08-28T21:48:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This model is the most powerful multi-modal model I have tried so far. It has a large number of users. However, it is not currently supported by ollama.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5698/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5698/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6345
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6345/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6345/comments
https://api.github.com/repos/ollama/ollama/issues/6345/events
https://github.com/ollama/ollama/pull/6345
2,464,171,418
PR_kwDOJ0Z1Ps54Rvqv
6,345
Update openai.md to remove extra checkbox for vision
{ "login": "pamelafox", "id": 297042, "node_id": "MDQ6VXNlcjI5NzA0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/297042?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pamelafox", "html_url": "https://github.com/pamelafox", "followers_url": "https://api.github.com/users/pame...
[]
closed
false
null
[]
null
0
2024-08-13T20:33:50
2024-08-13T20:36:05
2024-08-13T20:36:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6345", "html_url": "https://github.com/ollama/ollama/pull/6345", "diff_url": "https://github.com/ollama/ollama/pull/6345.diff", "patch_url": "https://github.com/ollama/ollama/pull/6345.patch", "merged_at": "2024-08-13T20:36:05" }
The list has Vision twice- once checked, the other unchecked. I'm removing the second one, optimistically, but I haven't verified a vision model works yet. So maybe the first one should be removed instead?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6345/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/571
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/571/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/571/comments
https://api.github.com/repos/ollama/ollama/issues/571/events
https://github.com/ollama/ollama/pull/571
1,907,992,694
PR_kwDOJ0Z1Ps5a8Yee
571
update dockerfile.cuda
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-22T00:59:12
2023-09-22T19:34:42
2023-09-22T19:34:42
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/571", "html_url": "https://github.com/ollama/ollama/pull/571", "diff_url": "https://github.com/ollama/ollama/pull/571.diff", "patch_url": "https://github.com/ollama/ollama/pull/571.patch", "merged_at": "2023-09-22T19:34:42" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/571/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7327
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7327/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7327/comments
https://api.github.com/repos/ollama/ollama/issues/7327/events
https://github.com/ollama/ollama/issues/7327
2,606,888,973
I_kwDOJ0Z1Ps6bYfgN
7,327
ollama create Error: open config.json: file does not exist
{ "login": "dragoncdj", "id": 132640267, "node_id": "U_kgDOB-fuCw", "avatar_url": "https://avatars.githubusercontent.com/u/132640267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dragoncdj", "html_url": "https://github.com/dragoncdj", "followers_url": "https://api.github.com/users/dragon...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-10-23T01:31:29
2024-11-13T22:20:49
2024-11-13T22:20:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I use the create command >ollama create mymodel2 -f D:\AI\qwen7\Modelfile but it returns Error: open config.json: file does not exist This is my Modelfile FROM .\export\pytorch_model.bin PARAMETER stop <|eot|> PARAMETER top_p 0.9 PARAMETER temperature 1.0 but, there is a config.json...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7327/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2332
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2332/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2332/comments
https://api.github.com/repos/ollama/ollama/issues/2332/events
https://github.com/ollama/ollama/issues/2332
2,115,350,139
I_kwDOJ0Z1Ps5-Fa57
2,332
using a legacy x86_64 cpu and GTX 1050 Ti?
{ "login": "truatpasteurdotfr", "id": 8300215, "node_id": "MDQ6VXNlcjgzMDAyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8300215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/truatpasteurdotfr", "html_url": "https://github.com/truatpasteurdotfr", "followers_url": "https:/...
[]
closed
false
null
[]
null
7
2024-02-02T16:57:31
2024-02-03T16:31:51
2024-02-03T16:31:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I have an old machine I would try to play with: ``` $ lscpu ... Model name: Intel(R) Xeon(R) CPU E5410 @ 2.33GHz ... Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_ts...
{ "login": "truatpasteurdotfr", "id": 8300215, "node_id": "MDQ6VXNlcjgzMDAyMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8300215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/truatpasteurdotfr", "html_url": "https://github.com/truatpasteurdotfr", "followers_url": "https:/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2332/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4561
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4561/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4561/comments
https://api.github.com/repos/ollama/ollama/issues/4561/events
https://github.com/ollama/ollama/issues/4561
2,308,666,586
I_kwDOJ0Z1Ps6Jm3Ta
4,561
Is llava license correct (possibly should be Llama2 not Apache)?
{ "login": "asmith26", "id": 6988036, "node_id": "MDQ6VXNlcjY5ODgwMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/6988036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asmith26", "html_url": "https://github.com/asmith26", "followers_url": "https://api.github.com/users/asmit...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
3
2024-05-21T16:21:28
2024-11-17T19:04:44
2024-11-17T19:04:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Looking at the llava ollama page, it lists the license as Apache: https://ollama.com/library/llava ![image](https://github.com/ollama/ollama/assets/6988036/93fe2d16-6ad4-482f-bc47-ab6b3296d236) Looking at the link to huggingface, it implies it's possibly Llama 2: https://huggingface.co/liuhaotian/llava-v1.5-7b#lice...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4561/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/83
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/83/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/83/comments
https://api.github.com/repos/ollama/ollama/issues/83/events
https://github.com/ollama/ollama/pull/83
1,805,796,907
PR_kwDOJ0Z1Ps5Vkd13
83
fix multibyte responses
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-07-15T01:30:51
2023-07-15T03:14:38
2023-07-15T03:12:12
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/83", "html_url": "https://github.com/ollama/ollama/pull/83", "diff_url": "https://github.com/ollama/ollama/pull/83.diff", "patch_url": "https://github.com/ollama/ollama/pull/83.patch", "merged_at": "2023-07-15T03:12:12" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/83/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/83/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4907
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4907/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4907/comments
https://api.github.com/repos/ollama/ollama/issues/4907/events
https://github.com/ollama/ollama/issues/4907
2,340,585,613
I_kwDOJ0Z1Ps6LgoCN
4,907
Cannot run qwen2 7B, 1.5b
{ "login": "SAXN-SYNX", "id": 59173145, "node_id": "MDQ6VXNlcjU5MTczMTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/59173145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SAXN-SYNX", "html_url": "https://github.com/SAXN-SYNX", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-06-07T14:20:11
2024-06-09T14:06:03
2024-06-07T22:57:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### Shows error while running it. ``` llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.name str ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4907/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4907/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3509
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3509/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3509/comments
https://api.github.com/repos/ollama/ollama/issues/3509/events
https://github.com/ollama/ollama/issues/3509
2,229,050,488
I_kwDOJ0Z1Ps6E3Jx4
3,509
Can Ollama use both CPU and GPU for inference?
{ "login": "OPDEV001", "id": 120762872, "node_id": "U_kgDOBzKx-A", "avatar_url": "https://avatars.githubusercontent.com/u/120762872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OPDEV001", "html_url": "https://github.com/OPDEV001", "followers_url": "https://api.github.com/users/OPDEV001/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
3
2024-04-06T03:20:18
2024-04-12T21:53:18
2024-04-12T21:53:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? May I know whether ollama supports mixing CPU and GPU together when running on Windows? I know my hardware is not enough for ollama, but I still want to use part of the GPU's capability. I checked the parameter information from the link below, but I still cannot mix CPU&GPU; most of the load goes to the CPU. h...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3509/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/378
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/378/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/378/comments
https://api.github.com/repos/ollama/ollama/issues/378/events
https://github.com/ollama/ollama/pull/378
1,856,053,749
PR_kwDOJ0Z1Ps5YNr72
378
copy metadata from source
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-18T04:56:04
2023-08-18T20:49:10
2023-08-18T20:49:09
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/378", "html_url": "https://github.com/ollama/ollama/pull/378", "diff_url": "https://github.com/ollama/ollama/pull/378.diff", "patch_url": "https://github.com/ollama/ollama/pull/378.patch", "merged_at": "2023-08-18T20:49:09" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/378/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7516
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7516/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7516/comments
https://api.github.com/repos/ollama/ollama/issues/7516/events
https://github.com/ollama/ollama/pull/7516
2,636,184,712
PR_kwDOJ0Z1Ps6A90R2
7,516
Update README.md
{ "login": "rapidarchitect", "id": 126218667, "node_id": "U_kgDOB4Xxqw", "avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rapidarchitect", "html_url": "https://github.com/rapidarchitect", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
0
2024-11-05T18:33:16
2024-11-05T23:07:26
2024-11-05T23:07:26
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7516", "html_url": "https://github.com/ollama/ollama/pull/7516", "diff_url": "https://github.com/ollama/ollama/pull/7516.diff", "patch_url": "https://github.com/ollama/ollama/pull/7516.patch", "merged_at": "2024-11-05T23:07:26" }
added reddit rate below hexabot, ollama powered reddit search and analysis with streamlit for the interface
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7516/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/324
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/324/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/324/comments
https://api.github.com/repos/ollama/ollama/issues/324/events
https://github.com/ollama/ollama/pull/324
1,846,012,991
PR_kwDOJ0Z1Ps5XrwE1
324
Generate private/public keypair for use w/ auth
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-08-10T23:24:30
2023-08-11T22:28:28
2023-08-11T17:58:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/324", "html_url": "https://github.com/ollama/ollama/pull/324", "diff_url": "https://github.com/ollama/ollama/pull/324.diff", "patch_url": "https://github.com/ollama/ollama/pull/324.patch", "merged_at": "2023-08-11T17:58:23" }
This change automatically creates a new OpenSSH compatible ed25519 key pair in your `~/.ollama` directory. The public key can be uploaded to Ollama and can be subsequently used to authenticate.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/324/timeline
null
null
true
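The PR body above (PR 324) describes automatically creating an OpenSSH-compatible ed25519 key pair under `~/.ollama`, whose public half is uploaded for authentication. Ollama does this in Go; the following is only a rough Python illustration of the same idea, assuming the third-party cryptography package and hypothetical file names.

```python
# Rough illustration (not Ollama's implementation): create an OpenSSH-compatible
# ed25519 key pair under ~/.ollama, skipping generation if one already exists.
from pathlib import Path
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def ensure_keypair(dir_path: Path = Path.home() / ".ollama") -> Path:
    dir_path.mkdir(parents=True, exist_ok=True)
    priv_path = dir_path / "id_ed25519"       # hypothetical file names
    pub_path = dir_path / "id_ed25519.pub"
    if priv_path.exists():
        return pub_path
    key = Ed25519PrivateKey.generate()
    priv_path.write_bytes(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.OpenSSH,
        serialization.NoEncryption(),
    ))
    priv_path.chmod(0o600)
    pub_path.write_bytes(key.public_key().public_bytes(
        serialization.Encoding.OpenSSH,
        serialization.PublicFormat.OpenSSH,
    ))
    return pub_path  # the public key is what would be uploaded for authentication

print(ensure_keypair().read_text())
```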
https://api.github.com/repos/ollama/ollama/issues/7317
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7317/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7317/comments
https://api.github.com/repos/ollama/ollama/issues/7317/events
https://github.com/ollama/ollama/issues/7317
2,605,703,293
I_kwDOJ0Z1Ps6bT-B9
7,317
ollama won't start as a service, will start using 'serve'?
{ "login": "MikeB2019x", "id": 49003263, "node_id": "MDQ6VXNlcjQ5MDAzMjYz", "avatar_url": "https://avatars.githubusercontent.com/u/49003263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeB2019x", "html_url": "https://github.com/MikeB2019x", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
null
[]
null
6
2024-10-22T14:54:17
2024-10-23T17:14:07
2024-10-23T17:14:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am trying to run ollama after a manual install on an Ubuntu VM with no internet connectivity. There is no GPU at the moment. I am able to run ollama successfully from the CLI with: ``` ollama serve ``` When I try to run ollama as a service with: ``` sudo systemctl daemon-reload sud...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7317/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6857
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6857/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6857/comments
https://api.github.com/repos/ollama/ollama/issues/6857/events
https://github.com/ollama/ollama/issues/6857
2,533,694,266
I_kwDOJ0Z1Ps6XBRs6
6,857
Issues getting rocm support to compile on Gentoo
{ "login": "Roger-Roger-debug", "id": 29002762, "node_id": "MDQ6VXNlcjI5MDAyNzYy", "avatar_url": "https://avatars.githubusercontent.com/u/29002762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Roger-Roger-debug", "html_url": "https://github.com/Roger-Roger-debug", "followers_url": "https...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
19
2024-09-18T13:05:40
2024-12-10T17:47:24
2024-12-10T17:47:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm trying to get the project to compile on Gentoo but am running into some issues as Gentoo uses different paths. On Gentoo, rocm libraries get installed into /usr/lib64, hip-clang lives somewhere else, and I'm sure there are some other differences as well. As suggested in the wiki, I set...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6857/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3920
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3920/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3920/comments
https://api.github.com/repos/ollama/ollama/issues/3920/events
https://github.com/ollama/ollama/pull/3920
2,264,330,323
PR_kwDOJ0Z1Ps5tw_4f
3,920
Reload model if `num_gpu` changes
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2024-04-25T19:20:28
2024-04-25T23:02:41
2024-04-25T23:02:40
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3920", "html_url": "https://github.com/ollama/ollama/pull/3920", "diff_url": "https://github.com/ollama/ollama/pull/3920.diff", "patch_url": "https://github.com/ollama/ollama/pull/3920.patch", "merged_at": "2024-04-25T23:02:40" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3920/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5577
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5577/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5577/comments
https://api.github.com/repos/ollama/ollama/issues/5577/events
https://github.com/ollama/ollama/issues/5577
2,398,786,428
I_kwDOJ0Z1Ps6O-pN8
5,577
Pulling model in docker-compose command
{ "login": "aditya6767", "id": 77670575, "node_id": "MDQ6VXNlcjc3NjcwNTc1", "avatar_url": "https://avatars.githubusercontent.com/u/77670575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aditya6767", "html_url": "https://github.com/aditya6767", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-09T17:36:27
2024-11-06T12:31:21
2024-11-06T12:31:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
- [ ] To run ollama pull llama2 in the docker-compose command
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5577/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8687
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8687/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8687/comments
https://api.github.com/repos/ollama/ollama/issues/8687/events
https://github.com/ollama/ollama/issues/8687
2,820,126,992
I_kwDOJ0Z1Ps6oF7kQ
8,687
Issue with Ollama Model Download: Restart Automatically or Throws an Error.
{ "login": "baraich", "id": 146362414, "node_id": "U_kgDOCLlQLg", "avatar_url": "https://avatars.githubusercontent.com/u/146362414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baraich", "html_url": "https://github.com/baraich", "followers_url": "https://api.github.com/users/baraich/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw...
open
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
3
2025-01-30T07:43:14
2025-01-30T08:50:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ## Setting the Context While downloading a model using the `ollama pull` command, the download process is initiated. However, the process automatically decides to restart and begins downloading again from 0%. I have seen other issues that were related to downloading, and I believe this pr...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8687/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7869
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7869/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7869/comments
https://api.github.com/repos/ollama/ollama/issues/7869/events
https://github.com/ollama/ollama/issues/7869
2,701,093,707
I_kwDOJ0Z1Ps6g_2tL
7,869
Installation not working on Fedora 41 Linux
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
5
2024-11-28T07:18:31
2024-11-28T08:02:40
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` curl -fsSL https://ollama.com/install.sh | sh > Installing ollama to /usr/local > [sudo] password for bns: > >>> Downloading Linux amd64 bundle > ######################################################################## 100.0% > >>> Creating ollama user... > >>> Adding ollama user to ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7869/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/208
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/208/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/208/comments
https://api.github.com/repos/ollama/ollama/issues/208/events
https://github.com/ollama/ollama/pull/208
1,820,587,068
PR_kwDOJ0Z1Ps5WWNTq
208
github issue templates
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-07-25T15:22:35
2023-08-04T14:06:41
2023-07-25T15:25:39
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/208", "html_url": "https://github.com/ollama/ollama/pull/208", "diff_url": "https://github.com/ollama/ollama/pull/208.diff", "patch_url": "https://github.com/ollama/ollama/pull/208.patch", "merged_at": null }
resolves #182
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/208/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5926
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5926/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5926/comments
https://api.github.com/repos/ollama/ollama/issues/5926/events
https://github.com/ollama/ollama/pull/5926
2,428,401,602
PR_kwDOJ0Z1Ps52Yp_X
5,926
Prevent loading too large models on windows
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-24T20:19:25
2024-08-12T16:08:31
2024-08-11T18:30:20
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5926", "html_url": "https://github.com/ollama/ollama/pull/5926", "diff_url": "https://github.com/ollama/ollama/pull/5926.diff", "patch_url": "https://github.com/ollama/ollama/pull/5926.patch", "merged_at": "2024-08-11T18:30:20" }
We already blocked Linux memory exhaustion, but should apply the same check for Windows as well. We can't apply the same logic to macOS, as it uses fully dynamic swap space and has no concept of free swap space. Fixes #5882 Fixes #4955 Fixes #5958
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5926/timeline
null
null
true
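The PR body above (PR 5926) explains the rationale: refuse to load a model whose size exceeds available memory on Linux and Windows, but skip the check on macOS, whose swap grows dynamically. The guard itself lives in Ollama's Go code; below is only a hedged Python sketch of the same idea, assuming the psutil package for memory queries and a hypothetical can_load helper.

```python
# Hedged sketch of a "don't load models larger than available memory" guard.
# Not Ollama's implementation; psutil stands in for the real memory probes.
import platform
import psutil

def can_load(model_bytes: int) -> bool:
    if platform.system() == "Darwin":
        # macOS swap is fully dynamic, so a free-swap check is meaningless there.
        return True
    # On Linux and Windows, compare against free RAM plus free swap.
    available = psutil.virtual_memory().available + psutil.swap_memory().free
    return model_bytes <= available

# Example: a hypothetical 40 GiB model would be rejected on a 16 GiB machine.
print(can_load(40 * 1024**3))
```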
https://api.github.com/repos/ollama/ollama/issues/4054
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4054/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4054/comments
https://api.github.com/repos/ollama/ollama/issues/4054/events
https://github.com/ollama/ollama/issues/4054
2,271,742,386
I_kwDOJ0Z1Ps6HaAmy
4,054
llama-3-chinese-8b-instruct model infinite loop generate & cannot stop
{ "login": "gavinliu", "id": 3281741, "node_id": "MDQ6VXNlcjMyODE3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3281741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gavinliu", "html_url": "https://github.com/gavinliu", "followers_url": "https://api.github.com/users/gavin...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-04-30T15:01:33
2024-05-24T00:33:08
2024-05-24T00:33:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hey, I found an issue of infinite generation that cannot be stopped, when deploying a [Chinese fine-tuned model of llama3 ](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-gguf) How to solve this problem? Modelfile file: ```Modelfile FROM /llama-3-chinese-8b-instruct/ggml-model-q8...
{ "login": "gavinliu", "id": 3281741, "node_id": "MDQ6VXNlcjMyODE3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3281741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gavinliu", "html_url": "https://github.com/gavinliu", "followers_url": "https://api.github.com/users/gavin...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4054/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3595
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3595/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3595/comments
https://api.github.com/repos/ollama/ollama/issues/3595/events
https://github.com/ollama/ollama/pull/3595
2,237,694,351
PR_kwDOJ0Z1Ps5sW9eo
3,595
Added MindsDB information
{ "login": "chandrevdw31", "id": 32901682, "node_id": "MDQ6VXNlcjMyOTAxNjgy", "avatar_url": "https://avatars.githubusercontent.com/u/32901682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chandrevdw31", "html_url": "https://github.com/chandrevdw31", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
0
2024-04-11T13:10:33
2024-04-15T22:35:30
2024-04-15T22:35:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3595", "html_url": "https://github.com/ollama/ollama/pull/3595", "diff_url": "https://github.com/ollama/ollama/pull/3595.diff", "patch_url": "https://github.com/ollama/ollama/pull/3595.patch", "merged_at": "2024-04-15T22:35:30" }
Added more details to MindsDB so that Ollama users can know that they can connect their Ollama model with nearly 200 data platforms, including databases, vector stores, and applications.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3595/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7096
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7096/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7096/comments
https://api.github.com/repos/ollama/ollama/issues/7096/events
https://github.com/ollama/ollama/pull/7096
2,565,166,694
PR_kwDOJ0Z1Ps59j2bp
7,096
Add G1 to list of integrations
{ "login": "hidden1nin", "id": 8339670, "node_id": "MDQ6VXNlcjgzMzk2NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/8339670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hidden1nin", "html_url": "https://github.com/hidden1nin", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
0
2024-10-03T23:49:55
2024-10-05T18:57:53
2024-10-05T18:57:53
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7096", "html_url": "https://github.com/ollama/ollama/pull/7096", "diff_url": "https://github.com/ollama/ollama/pull/7096.diff", "patch_url": "https://github.com/ollama/ollama/pull/7096.patch", "merged_at": "2024-10-05T18:57:53" }
I added g1 to the list of integrations in the readme file. Hopefully this can bring this project more attention.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7096/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3038
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3038/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3038/comments
https://api.github.com/repos/ollama/ollama/issues/3038/events
https://github.com/ollama/ollama/issues/3038
2,177,610,819
I_kwDOJ0Z1Ps6By7RD
3,038
Log says "Nvidia GPU detected" and then "no GPU detected"
{ "login": "jimstevens2001", "id": 250203, "node_id": "MDQ6VXNlcjI1MDIwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/250203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimstevens2001", "html_url": "https://github.com/jimstevens2001", "followers_url": "https://api.github...
[]
closed
false
null
[]
null
1
2024-03-10T08:57:57
2024-03-10T12:43:27
2024-03-10T12:33:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am running a fresh install of Ollama inside of an Ubuntu 22.04 VM running an Nvidia RTX 4090 via pci passthrough (installed with "curl -fsSL https://ollama.com/install.sh | sh"). I have verified that nvidia-smi works as expected and a pytorch program can detect the GPU, but when I run Ollama, it uses the CPU to execu...
{ "login": "jimstevens2001", "id": 250203, "node_id": "MDQ6VXNlcjI1MDIwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/250203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimstevens2001", "html_url": "https://github.com/jimstevens2001", "followers_url": "https://api.github...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3038/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5185
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5185/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5185/comments
https://api.github.com/repos/ollama/ollama/issues/5185/events
https://github.com/ollama/ollama/issues/5185
2,364,595,820
I_kwDOJ0Z1Ps6M8N5s
5,185
Florence vision model
{ "login": "iplayfast", "id": 751306, "node_id": "MDQ6VXNlcjc1MTMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iplayfast", "html_url": "https://github.com/iplayfast", "followers_url": "https://api.github.com/users/ipla...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
6
2024-06-20T14:25:47
2024-09-03T16:58:13
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/microsoft/Florence-2-large/tree/main uses PyTorch. https://huggingface.co/spaces/SixOpen/Florence-2-large-ft
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5185/reactions", "total_count": 26, "+1": 26, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5185/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3216
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3216/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3216/comments
https://api.github.com/repos/ollama/ollama/issues/3216/events
https://github.com/ollama/ollama/issues/3216
2,191,563,226
I_kwDOJ0Z1Ps6CoJna
3,216
baichuan-inc/Baichuan2-13B-Chat is not supported. Can it be supported later?
{ "login": "wangshuai67", "id": 13214849, "node_id": "MDQ6VXNlcjEzMjE0ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/13214849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangshuai67", "html_url": "https://github.com/wangshuai67", "followers_url": "https://api.github.com/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2024-03-18T08:46:09
2024-03-22T03:58:15
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat is not supported ### How should we solve this? baichuan-inc/Baichuan2-13B-Chat is not supported. Can it be supported later? ### What is the impact of not solving this? _No response_ ### Anything else? _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3216/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2289
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2289/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2289/comments
https://api.github.com/repos/ollama/ollama/issues/2289/events
https://github.com/ollama/ollama/pull/2289
2,110,603,109
PR_kwDOJ0Z1Ps5lmRzw
2,289
fix: preserve last system message from modelfile
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2024-01-31T17:22:03
2024-02-01T02:45:02
2024-02-01T02:45:01
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2289", "html_url": "https://github.com/ollama/ollama/pull/2289", "diff_url": "https://github.com/ollama/ollama/pull/2289.diff", "patch_url": "https://github.com/ollama/ollama/pull/2289.patch", "merged_at": "2024-02-01T02:45:01" }
When truncating messages to fit in the context window, the system message from the modelfile was not carried over if it had been used. This change preserves the modelfile system message in the case of truncation.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2289/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8078
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8078/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8078/comments
https://api.github.com/repos/ollama/ollama/issues/8078/events
https://github.com/ollama/ollama/pull/8078
2,736,953,410
PR_kwDOJ0Z1Ps6FEyRs
8,078
llama: update grammar test to expose lack of insertion order for JSON schema to grammar conversion
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
0
2024-12-12T21:54:27
2024-12-19T03:44:52
2024-12-19T03:44:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8078", "html_url": "https://github.com/ollama/ollama/pull/8078", "diff_url": "https://github.com/ollama/ollama/pull/8078.diff", "patch_url": "https://github.com/ollama/ollama/pull/8078.patch", "merged_at": "2024-12-19T03:44:50" }
This test is updated with a more complex JSON schema to expose that insertion order is not maintained in the grammar generated from `json-schema-to-grammar`. Documents the behavior in #7978.
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8078/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7109
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7109/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7109/comments
https://api.github.com/repos/ollama/ollama/issues/7109/events
https://github.com/ollama/ollama/issues/7109
2,568,969,243
I_kwDOJ0Z1Ps6ZH1wb
7,109
Downloading models too slow
{ "login": "rubenmejiac", "id": 20344715, "node_id": "MDQ6VXNlcjIwMzQ0NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/20344715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rubenmejiac", "html_url": "https://github.com/rubenmejiac", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-10-06T23:20:14
2024-11-05T22:39:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have very slow downloads of models since I installed Ollama on Windows 11. No problems running models, etc.; it's only the download speeds. The terminal seems to report a different speed than shown in my network monitor. I include screens of two downloads and the network monitor, which rep...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7109/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7109/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5970
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5970/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5970/comments
https://api.github.com/repos/ollama/ollama/issues/5970/events
https://github.com/ollama/ollama/issues/5970
2,431,342,195
I_kwDOJ0Z1Ps6Q61Zz
5,970
run glm4 Error: llama runner process has terminated: signal: aborted (core dumped)
{ "login": "x-future", "id": 23043471, "node_id": "MDQ6VXNlcjIzMDQzNDcx", "avatar_url": "https://avatars.githubusercontent.com/u/23043471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/x-future", "html_url": "https://github.com/x-future", "followers_url": "https://api.github.com/users/x-f...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
9
2024-07-26T03:35:56
2024-07-29T16:34:37
2024-07-29T16:34:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Error: llama runner process has terminated: signal: aborted (core dumped) # ollama run glm4 pulling manifest pulling b506a070d115... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 5.5 GB pulling e7e7aebd710c... 100% ▕█████████████████████████████████████████████████...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5970/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/590
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/590/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/590/comments
https://api.github.com/repos/ollama/ollama/issues/590/events
https://github.com/ollama/ollama/pull/590
1,912,061,080
PR_kwDOJ0Z1Ps5bJ4WE
590
fix dkms install
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-09-25T18:28:28
2023-09-25T19:17:32
2023-09-25T19:17:32
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/590", "html_url": "https://github.com/ollama/ollama/pull/590", "diff_url": "https://github.com/ollama/ollama/pull/590.diff", "patch_url": "https://github.com/ollama/ollama/pull/590.patch", "merged_at": "2023-09-25T19:17:32" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/590/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6904
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6904/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6904/comments
https://api.github.com/repos/ollama/ollama/issues/6904/events
https://github.com/ollama/ollama/issues/6904
2,540,478,396
I_kwDOJ0Z1Ps6XbJ-8
6,904
Option to know the number of running requests in Ollama
{ "login": "Jegatheesh001", "id": 14847813, "node_id": "MDQ6VXNlcjE0ODQ3ODEz", "avatar_url": "https://avatars.githubusercontent.com/u/14847813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jegatheesh001", "html_url": "https://github.com/Jegatheesh001", "followers_url": "https://api.githu...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-09-21T19:23:52
2024-09-25T00:23:39
2024-09-25T00:23:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Option to know the number of running requests in Ollama
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6904/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/4139
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4139/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4139/comments
https://api.github.com/repos/ollama/ollama/issues/4139/events
https://github.com/ollama/ollama/issues/4139
2,278,414,912
I_kwDOJ0Z1Ps6HzdpA
4,139
only 1 GPU found -- regression 1.32 -> 1.33
{ "login": "AlexLJordan", "id": 10133257, "node_id": "MDQ6VXNlcjEwMTMzMjU3", "avatar_url": "https://avatars.githubusercontent.com/u/10133257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexLJordan", "html_url": "https://github.com/AlexLJordan", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
25
2024-05-03T20:58:34
2025-01-10T12:48:37
2024-05-21T15:24:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi everyone, Sorry I don't have much time to write much; but going from 1.32 to 1.33, this: ``` ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes ggml_cuda_init: CUDA_USE_TENSOR_CORES: no ggml_cuda_init: found 3 CUDA devices: Device 0: Tesla V100S-PCIE-32GB, compute capability 7.0, VMM: yes...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4139/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7448
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7448/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7448/comments
https://api.github.com/repos/ollama/ollama/issues/7448/events
https://github.com/ollama/ollama/issues/7448
2,626,605,042
I_kwDOJ0Z1Ps6cjs_y
7,448
Easily see latest version
{ "login": "jococo", "id": 3506048, "node_id": "MDQ6VXNlcjM1MDYwNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/3506048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jococo", "html_url": "https://github.com/jococo", "followers_url": "https://api.github.com/users/jococo/foll...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6573197867, "node_id": ...
open
false
null
[]
null
1
2024-10-31T11:19:44
2024-11-01T15:45:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would it be possible to show the latest version of Ollama on the ollama.com website, so we don't have to click through to GitHub to find the info?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7448/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8352
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8352/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8352/comments
https://api.github.com/repos/ollama/ollama/issues/8352/events
https://github.com/ollama/ollama/pull/8352
2,776,467,113
PR_kwDOJ0Z1Ps6HIkMO
8,352
Add LangChain for .NET to libraries list
{ "login": "steveberdy", "id": 86739818, "node_id": "MDQ6VXNlcjg2NzM5ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/86739818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/steveberdy", "html_url": "https://github.com/steveberdy", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
0
2025-01-08T22:28:42
2025-01-14T17:37:35
2025-01-14T17:37:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8352", "html_url": "https://github.com/ollama/ollama/pull/8352", "diff_url": "https://github.com/ollama/ollama/pull/8352.diff", "patch_url": "https://github.com/ollama/ollama/pull/8352.patch", "merged_at": "2025-01-14T17:37:35" }
This is definitely not important, but for discoverability purposes, it would be nice to include the .NET LangChain library.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8352/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3414
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3414/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3414/comments
https://api.github.com/repos/ollama/ollama/issues/3414/events
https://github.com/ollama/ollama/pull/3414
2,216,376,711
PR_kwDOJ0Z1Ps5rOHbj
3,414
Add 'Knowledge Cutoff' column to model library table
{ "login": "saket3199", "id": 57292901, "node_id": "MDQ6VXNlcjU3MjkyOTAx", "avatar_url": "https://avatars.githubusercontent.com/u/57292901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saket3199", "html_url": "https://github.com/saket3199", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
5
2024-03-30T10:27:21
2024-04-06T17:43:13
2024-03-31T17:11:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3414", "html_url": "https://github.com/ollama/ollama/pull/3414", "diff_url": "https://github.com/ollama/ollama/pull/3414.diff", "patch_url": "https://github.com/ollama/ollama/pull/3414.patch", "merged_at": null }
resolves #3412
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3414/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8631
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8631/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8631/comments
https://api.github.com/repos/ollama/ollama/issues/8631/events
https://github.com/ollama/ollama/issues/8631
2,815,555,450
I_kwDOJ0Z1Ps6n0fd6
8,631
Please provide information about the model license in the search model interface
{ "login": "cquike", "id": 17937361, "node_id": "MDQ6VXNlcjE3OTM3MzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17937361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cquike", "html_url": "https://github.com/cquike", "followers_url": "https://api.github.com/users/cquike/fo...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6573197867, "node_id": ...
open
false
null
[]
null
0
2025-01-28T12:42:38
2025-01-29T00:28:28
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? It would be useful to show the license of each model on the model search page https://ollama.com/search. Even better would be an option to filter by license. ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8631/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/439
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/439/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/439/comments
https://api.github.com/repos/ollama/ollama/issues/439/events
https://github.com/ollama/ollama/pull/439
1,870,822,307
PR_kwDOJ0Z1Ps5Y_aUK
439
add model IDs
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2023-08-29T03:37:36
2023-08-29T03:50:25
2023-08-29T03:50:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/439", "html_url": "https://github.com/ollama/ollama/pull/439", "diff_url": "https://github.com/ollama/ollama/pull/439.diff", "patch_url": "https://github.com/ollama/ollama/pull/439.patch", "merged_at": "2023-08-29T03:50:24" }
This change shows a portion (first 12 hex chars) of the sha256 sum of the manifest when running `ollama ls`. This makes it really easy at a glance to tell if two models are the same, and will make it easier in the future to match models inside of the ollama library. It looks something like: ``` NAME ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/439/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7054
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7054/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7054/comments
https://api.github.com/repos/ollama/ollama/issues/7054/events
https://github.com/ollama/ollama/issues/7054
2,558,039,172
I_kwDOJ0Z1Ps6YeJSE
7,054
Support for Zamba2
{ "login": "hg0428", "id": 45984899, "node_id": "MDQ6VXNlcjQ1OTg0ODk5", "avatar_url": "https://avatars.githubusercontent.com/u/45984899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hg0428", "html_url": "https://github.com/hg0428", "followers_url": "https://api.github.com/users/hg0428/fo...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2024-10-01T02:37:49
2024-10-01T02:47:42
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Zamba2 is a really cool model that uses a hybrid Mamba-Transformer system. https://huggingface.co/Zyphra/Zamba2-2.7B https://www.zyphra.com/post/zamba2-small I have been wanting to use this for a while, and I would love it if Ollama could add this model soon.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7054/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7054/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8554
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8554/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8554/comments
https://api.github.com/repos/ollama/ollama/issues/8554/events
https://github.com/ollama/ollama/issues/8554
2,807,741,863
I_kwDOJ0Z1Ps6nWr2n
8,554
JSON With Ollama Library Contents
{ "login": "slyyyle", "id": 78447050, "node_id": "MDQ6VXNlcjc4NDQ3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/78447050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slyyyle", "html_url": "https://github.com/slyyyle", "followers_url": "https://api.github.com/users/slyyyl...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-23T19:30:08
2025-01-23T19:30:08
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would it be possible to add a JSON object that reflects all models contained in the library? I would prefer not to scrape against /search. It could have info from the model card in the search, and the more specific info about it found on library/model_name. It would be nice for many reasons - UI Ollama Mod...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8554/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6808
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6808/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6808/comments
https://api.github.com/repos/ollama/ollama/issues/6808/events
https://github.com/ollama/ollama/issues/6808
2,526,609,178
I_kwDOJ0Z1Ps6WmP8a
6,808
QoS: serving websites with the server, but when downloading a model...
{ "login": "remco-pc", "id": 8077908, "node_id": "MDQ6VXNlcjgwNzc5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/remco-pc", "html_url": "https://github.com/remco-pc", "followers_url": "https://api.github.com/users/remco...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-09-14T21:13:19
2024-09-14T21:13:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
If a big model gets downloaded, it downloads at full speed, slowing down other services. Can you add a throttle to limit the amount of bandwidth consumed by downloading the model? I have websites running on that server, and they become unresponsive due to the model download (tested it with llama3.1:70b).
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6808/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6808/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3131
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3131/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3131/comments
https://api.github.com/repos/ollama/ollama/issues/3131/events
https://github.com/ollama/ollama/issues/3131
2,185,225,928
I_kwDOJ0Z1Ps6CP-bI
3,131
Clip model isn't being freed correctly
{ "login": "RandomGitUser321", "id": 27916165, "node_id": "MDQ6VXNlcjI3OTE2MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/27916165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RandomGitUser321", "html_url": "https://github.com/RandomGitUser321", "followers_url": "https://...
[]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
6
2024-03-14T01:55:07
2024-03-15T00:55:09
2024-03-14T20:35:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm on Windows and do a lot of things with models. Mostly a VLM->Get a detailed description of an image->Use a different LLM that's better at writing prompts to inject/mix my ideas in with->Stable diffusion->Image type workflow with ComfyUI. Obviously, I need all the VRAM I can get, but I sometimes run into scenarios w...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3131/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3131/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4276
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4276/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4276/comments
https://api.github.com/repos/ollama/ollama/issues/4276/events
https://github.com/ollama/ollama/issues/4276
2,287,010,979
I_kwDOJ0Z1Ps6IUQSj
4,276
bge-m3
{ "login": "Mimicvat", "id": 141440461, "node_id": "U_kgDOCG41zQ", "avatar_url": "https://avatars.githubusercontent.com/u/141440461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mimicvat", "html_url": "https://github.com/Mimicvat", "followers_url": "https://api.github.com/users/Mimicvat/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
5
2024-05-09T06:43:29
2024-05-21T14:08:16
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/vonjack/bge-m3-gguf from: https://github.com/ggerganov/llama.cpp/issues/6007 I am looking for recommendations on a high-quality multilingual embedder that includes support for Portuguese. Anything better than https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 would be nice.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4276/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4276/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6531
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6531/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6531/comments
https://api.github.com/repos/ollama/ollama/issues/6531/events
https://github.com/ollama/ollama/issues/6531
2,490,378,992
I_kwDOJ0Z1Ps6UcCrw
6,531
Prebuilt `ollama-linux-amd64.tgz` without cuda libs, please?
{ "login": "sevaseva", "id": 1168195, "node_id": "MDQ6VXNlcjExNjgxOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1168195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sevaseva", "html_url": "https://github.com/sevaseva", "followers_url": "https://api.github.com/users/sevas...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": ...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-08-27T21:12:39
2024-12-02T11:34:44
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I occasionally update Ollama on a Linux box by downloading URLs like `https://github.com/ollama/ollama/releases/download/v0.3.7-rc6/ollama-linux-amd64.tgz` and extracting/overwriting files into a local directory (not into `/usr` as root, mind you, just into a local directory as a non-privileged user; that is how I pre...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6531/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6531/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3992
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3992/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3992/comments
https://api.github.com/repos/ollama/ollama/issues/3992/events
https://github.com/ollama/ollama/issues/3992
2,267,334,707
I_kwDOJ0Z1Ps6HJMgz
3,992
How to configure Octopus on Ollama?
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-28T04:33:04
2024-05-26T13:44:45
2024-05-09T08:57:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This is the output of Octopus on my Mac. Does anyone know how to configure it for better output? Set 'verbose' mode. >>> hi <nexa_end> Response: <nexa_13>('hi')<nexa_end> Function description: def search_youtube_videos(query): """ Searches YouTube for videos matching a query. P...
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3992/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4223
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4223/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4223/comments
https://api.github.com/repos/ollama/ollama/issues/4223/events
https://github.com/ollama/ollama/issues/4223
2,282,598,028
I_kwDOJ0Z1Ps6IDa6M
4,223
qwen:72b-chat-q4_K_S does not load
{ "login": "saddy001", "id": 13658554, "node_id": "MDQ6VXNlcjEzNjU4NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/13658554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saddy001", "html_url": "https://github.com/saddy001", "followers_url": "https://api.github.com/users/sad...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw...
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-05-07T08:17:32
2024-07-25T18:33:59
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? While the model "qwen:72b" loads successfully, the model "qwen:72b-chat-q4_K_S" does not load. The loading spinner just doesn't stop, even after waiting a long time. Since the models occupy the same amount of memory (41 GB), I assume the RAM usage is roughly the same. Can somebody reproduce this? ...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4223/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5030
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5030/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5030/comments
https://api.github.com/repos/ollama/ollama/issues/5030/events
https://github.com/ollama/ollama/pull/5030
2,351,920,701
PR_kwDOJ0Z1Ps5yZ8Ea
5,030
Update README.md
{ "login": "Drlordbasil", "id": 126736516, "node_id": "U_kgDOB43YhA", "avatar_url": "https://avatars.githubusercontent.com/u/126736516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Drlordbasil", "html_url": "https://github.com/Drlordbasil", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
3
2024-06-13T19:38:21
2024-11-22T00:38:09
2024-11-21T08:35:51
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5030", "html_url": "https://github.com/ollama/ollama/pull/5030", "diff_url": "https://github.com/ollama/ollama/pull/5030.diff", "patch_url": "https://github.com/ollama/ollama/pull/5030.patch", "merged_at": null }
Add my embedding example for Ollama; it includes Groq API calls too. Is this allowed?
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5030/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3752
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3752/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3752/comments
https://api.github.com/repos/ollama/ollama/issues/3752/events
https://github.com/ollama/ollama/issues/3752
2,252,557,053
I_kwDOJ0Z1Ps6GQ0r9
3,752
command-r:latest run exception
{ "login": "zw6234336", "id": 5389245, "node_id": "MDQ6VXNlcjUzODkyNDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5389245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zw6234336", "html_url": "https://github.com/zw6234336", "followers_url": "https://api.github.com/users/zw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-04-19T09:46:28
2024-05-10T00:11:44
2024-05-10T00:11:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? <img width="947" alt="Xnapper-2024-04-19-17 45 26" src="https://github.com/ollama/ollama/assets/5389245/4eeeee44-f4a1-4b8f-a202-ed78665d9772"> ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.27
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3752/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5450
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5450/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5450/comments
https://api.github.com/repos/ollama/ollama/issues/5450/events
https://github.com/ollama/ollama/issues/5450
2,387,379,664
I_kwDOJ0Z1Ps6OTIXQ
5,450
Inference fails on AMD when using >1 GPU.
{ "login": "Speedway1", "id": 100301611, "node_id": "U_kgDOBfp7Kw", "avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Speedway1", "html_url": "https://github.com/Speedway1", "followers_url": "https://api.github.com/users/Speedw...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-07-03T00:18:37
2024-07-10T18:48:02
2024-07-10T18:48:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This is on AMD. I have 2 x Radeon 7900 XCX cards (24gb each). For models/memory use that only uses 1 GPU, everything works fine. As soon as both cards are required, the inference fails with garbage. As seen in this output: ``` ollama@TH-AI2:~$ ollama list NAME ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5450/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3849
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3849/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3849/comments
https://api.github.com/repos/ollama/ollama/issues/3849/events
https://github.com/ollama/ollama/issues/3849
2,259,573,320
I_kwDOJ0Z1Ps6GrlpI
3,849
Ollama super slow on macOS M1 in Docker
{ "login": "rb81", "id": 48117105, "node_id": "MDQ6VXNlcjQ4MTE3MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rb81", "html_url": "https://github.com/rb81", "followers_url": "https://api.github.com/users/rb81/followers"...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
6
2024-04-23T18:59:43
2024-11-12T23:34:10
2024-04-24T16:21:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama running natively on macOS is excellent. Ollama running in Docker is about 50% slower. (Unsure if this is a bug or a config issue, but I am running default settings.) ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.32
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3849/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5854
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5854/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5854/comments
https://api.github.com/repos/ollama/ollama/issues/5854/events
https://github.com/ollama/ollama/pull/5854
2,423,220,386
PR_kwDOJ0Z1Ps52HC98
5,854
Refine error reporting for subprocess crash
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-22T15:56:13
2024-07-22T17:40:25
2024-07-22T17:40:22
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5854", "html_url": "https://github.com/ollama/ollama/pull/5854", "diff_url": "https://github.com/ollama/ollama/pull/5854.diff", "patch_url": "https://github.com/ollama/ollama/pull/5854.patch", "merged_at": "2024-07-22T17:40:22" }
On Windows, the exit status winds up being the search term many users search for, and they end up piling in on issues that are unrelated. This refines the reporting so that if we have a more detailed message, we'll suppress the exit status portion of the message. Example: Before ``` > ollama run akuldatta/mistral-ne...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5854/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8500
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8500/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8500/comments
https://api.github.com/repos/ollama/ollama/issues/8500/events
https://github.com/ollama/ollama/issues/8500
2,798,912,420
I_kwDOJ0Z1Ps6m1AOk
8,500
when using gguf files of qwen2-vl,something wrong happen:Error: invalid file magic!
{ "login": "twythebest", "id": 89891289, "node_id": "MDQ6VXNlcjg5ODkxMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/89891289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twythebest", "html_url": "https://github.com/twythebest", "followers_url": "https://api.github.com/use...
[]
open
false
null
[]
null
1
2025-01-20T10:50:55
2025-01-20T11:32:52
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have downloaded two files: mmproj-model-f32.gguf and Qwen2-VL-7B-Instruct-Q8_0.gguf. Here is my modelfile: FROM ./mmproj-model-f32.gguf FROM ./Qwen2-VL-7B-Instruct-Q8_0.gguf TEMPLATE """{{- range $index, $_ := .Messages }}<|start_header_id|>{{ .Role }}<|end_header_id|> {{ .Content }} {{- if gt (len (slice $.Messages $...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8500/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/468
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/468/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/468/comments
https://api.github.com/repos/ollama/ollama/issues/468/events
https://github.com/ollama/ollama/issues/468
1,882,521,633
I_kwDOJ0Z1Ps5wNQAh
468
Add Refact model
{ "login": "Alainx277", "id": 26800509, "node_id": "MDQ6VXNlcjI2ODAwNTA5", "avatar_url": "https://avatars.githubusercontent.com/u/26800509?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alainx277", "html_url": "https://github.com/Alainx277", "followers_url": "https://api.github.com/users/...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
3
2023-09-05T18:28:46
2024-12-23T00:53:16
2024-12-23T00:53:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
A new 1.6b parameter model called "Refact" has been released. [Blog post](https://refact.ai/blog/2023/introducing-refact-code-llm/) [Hugging Face](https://huggingface.co/smallcloudai/Refact-1_6B-fim) I tried adding it myself, but the llama.cpp scripts to convert to GGML format did not work. Keep in mind that I...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/468/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/468/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/406
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/406/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/406/comments
https://api.github.com/repos/ollama/ollama/issues/406/events
https://github.com/ollama/ollama/issues/406
1,865,900,465
I_kwDOJ0Z1Ps5vN2Gx
406
Model request: brand new “Code Llama” released by Facebook
{ "login": "strangelearning", "id": 80677888, "node_id": "MDQ6VXNlcjgwNjc3ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/80677888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/strangelearning", "html_url": "https://github.com/strangelearning", "followers_url": "https://api...
[]
closed
false
null
[]
null
4
2023-08-24T21:18:52
2023-08-25T14:11:28
2023-08-24T22:16:13
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://ai.meta.com/blog/code-llama-large-language-model-coding/
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/406/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/406/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7966
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7966/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7966/comments
https://api.github.com/repos/ollama/ollama/issues/7966/events
https://github.com/ollama/ollama/issues/7966
2,722,613,300
I_kwDOJ0Z1Ps6iR8g0
7,966
ggml_cuda_cpy_fn: unsupported type combination (q4_0 to f32) in pre-release version
{ "login": "dkkb", "id": 82504881, "node_id": "MDQ6VXNlcjgyNTA0ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/82504881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkkb", "html_url": "https://github.com/dkkb", "followers_url": "https://api.github.com/users/dkkb/followers"...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-12-06T10:13:32
2024-12-07T00:44:16
2024-12-07T00:44:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm using this model `https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF` with the v0.5.0 pre-release. After upgrading to the latest version, I was hoping to see improved performance. However, after making several API calls, I encountered the following error on the client side. I a...
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7966/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7966/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1799
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1799/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1799/comments
https://api.github.com/repos/ollama/ollama/issues/1799/events
https://github.com/ollama/ollama/pull/1799
2,066,634,912
PR_kwDOJ0Z1Ps5jRnFd
1,799
fix to use ARCH var on downloading cuda driver
{ "login": "gimslab", "id": 1457044, "node_id": "MDQ6VXNlcjE0NTcwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1457044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gimslab", "html_url": "https://github.com/gimslab", "followers_url": "https://api.github.com/users/gimslab/...
[]
open
false
null
[]
null
0
2024-01-05T02:31:04
2025-01-14T23:04:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1799", "html_url": "https://github.com/ollama/ollama/pull/1799", "diff_url": "https://github.com/ollama/ollama/pull/1799.diff", "patch_url": "https://github.com/ollama/ollama/pull/1799.patch", "merged_at": null }
I attempted to install Ollama on an AWS g5g instance with Ubuntu 22.04, but the installation failed at this point: the link for the Nvidia driver uses 'arm64' instead of 'aarch64'.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1799/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2689
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2689/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2689/comments
https://api.github.com/repos/ollama/ollama/issues/2689/events
https://github.com/ollama/ollama/issues/2689
2,149,616,155
I_kwDOJ0Z1Ps6AIIob
2,689
Gemma model quantization or implementation seems botched
{ "login": "horiacristescu", "id": 1104033, "node_id": "MDQ6VXNlcjExMDQwMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1104033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/horiacristescu", "html_url": "https://github.com/horiacristescu", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
4
2024-02-22T17:57:44
2024-08-04T22:32:48
2024-02-23T01:06:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I tried the gemma model today and it responds with - inconsistent formatting, such as using two commas instead of one - inconsistent phrasing, such as a noun not being plural when it should be, or absurd phrases like "copyright infringement is being violated". I tried the same model on labs.perplexity.ai and their ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2689/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8470
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8470/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8470/comments
https://api.github.com/repos/ollama/ollama/issues/8470/events
https://github.com/ollama/ollama/issues/8470
2,795,437,782
I_kwDOJ0Z1Ps6mnv7W
8,470
ollama._types.ResponseError: timed out waiting for llama runner to start - progress 0.00 -
{ "login": "legendier", "id": 116647945, "node_id": "U_kgDOBvPoCQ", "avatar_url": "https://avatars.githubusercontent.com/u/116647945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/legendier", "html_url": "https://github.com/legendier", "followers_url": "https://api.github.com/users/legend...
[]
closed
false
null
[]
null
2
2025-01-17T13:05:17
2025-01-20T09:52:00
2025-01-20T09:52:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The process of loading large models into GPU memory is very slow. Then an error occurs: "**ollama._types.ResponseError: timed out waiting for llama runner to start - progress 0.00 -**" It previously worked normally, but recently the large model has been unable to load successfully. Why is this?
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8470/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2163
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2163/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2163/comments
https://api.github.com/repos/ollama/ollama/issues/2163/events
https://github.com/ollama/ollama/pull/2163
2,096,921,001
PR_kwDOJ0Z1Ps5k4aou
2,163
Expose llm library and layer info in verbose output
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-01-23T20:24:48
2024-01-24T01:41:08
2024-01-24T01:40:52
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2163", "html_url": "https://github.com/ollama/ollama/pull/2163", "diff_url": "https://github.com/ollama/ollama/pull/2163.diff", "patch_url": "https://github.com/ollama/ollama/pull/2163.patch", "merged_at": null }
This wires up additional information in our verbose metrics so you can see which llm library was used, and how many layers were loaded into the GPU. Example output in the CLI: ``` ./ollama run orca-mini >>> /set verbose Set 'verbose' mode. >>> hello Hello, how can I assist you today? total duration: ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2163/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3370
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3370/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3370/comments
https://api.github.com/repos/ollama/ollama/issues/3370/events
https://github.com/ollama/ollama/issues/3370
2,210,836,518
I_kwDOJ0Z1Ps6DxrAm
3,370
databricks-dbrx
{ "login": "Sparkenstein", "id": 24642451, "node_id": "MDQ6VXNlcjI0NjQyNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24642451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sparkenstein", "html_url": "https://github.com/Sparkenstein", "followers_url": "https://api.github.c...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
21
2024-03-27T13:39:40
2024-04-18T11:24:09
2024-04-17T15:45:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What model would you like? Databricks just released a new model that is supposed to perform better than mistral. IMO would be a good addition https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm https://huggingface.co/databricks/dbrx-instruct _No response_
{ "login": "Sparkenstein", "id": 24642451, "node_id": "MDQ6VXNlcjI0NjQyNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24642451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sparkenstein", "html_url": "https://github.com/Sparkenstein", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3370/reactions", "total_count": 115, "+1": 115, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3370/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1820
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1820/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1820/comments
https://api.github.com/repos/ollama/ollama/issues/1820/events
https://github.com/ollama/ollama/issues/1820
2,068,412,448
I_kwDOJ0Z1Ps57SXgg
1,820
Pulled SQLCoder2 even though it's not listed in the library
{ "login": "lestan", "id": 1471736, "node_id": "MDQ6VXNlcjE0NzE3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lestan", "html_url": "https://github.com/lestan", "followers_url": "https://api.github.com/users/lestan/foll...
[]
closed
false
null
[]
null
2
2024-01-06T05:56:58
2024-03-11T22:14:33
2024-03-11T22:14:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I wanted to test out sqlcoder2, but only saw sqlcoder on the [model library page](https://ollama.ai/library?sort=newest&q=llama). I still tried to see what would happen if I ran ollama pull sqlcoder2...and it worked. It pulled down the model named sqlcoder2:latest. Is this an issue with the model library not bei...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1820/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4507
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4507/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4507/comments
https://api.github.com/repos/ollama/ollama/issues/4507/events
https://github.com/ollama/ollama/issues/4507
2,303,686,330
I_kwDOJ0Z1Ps6JT3a6
4,507
I hope ollama completes my command input.
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-05-17T23:33:49
2024-05-21T20:27:21
2024-05-21T20:27:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I hope ollama completes my command input. For example, below, I hope that when I press the TAB key, ollama completes 'qwen:32b-chat-v1.5-q8_0 '. Thanks. `taozhiyu@192 ~ % ollama list NAME                     ID              SIZE      MODIFIED qwen:32b-chat-v1.5-q8_0 33c6cb647280     ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4507/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7383
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7383/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7383/comments
https://api.github.com/repos/ollama/ollama/issues/7383/events
https://github.com/ollama/ollama/pull/7383
2,616,439,116
PR_kwDOJ0Z1Ps6AAgod
7,383
Add Swollama links to README.md
{ "login": "marcusziade", "id": 47460844, "node_id": "MDQ6VXNlcjQ3NDYwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/47460844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcusziade", "html_url": "https://github.com/marcusziade", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
1
2024-10-27T09:10:28
2024-11-21T18:24:55
2024-11-20T18:49:15
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7383", "html_url": "https://github.com/ollama/ollama/pull/7383", "diff_url": "https://github.com/ollama/ollama/pull/7383.diff", "patch_url": "https://github.com/ollama/ollama/pull/7383.patch", "merged_at": "2024-11-20T18:49:15" }
This PR updates the README by adding a link to a feature-complete Swift client library I built called [Swollama](https://github.com/marcusziade/Swollama) I have _extensive_ documentation in [DocC](https://marcusziade.github.io/Swollama/documentation/swollama/), and I already have a draft PR open for Linux and Docker...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7383/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3180
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3180/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3180/comments
https://api.github.com/repos/ollama/ollama/issues/3180/events
https://github.com/ollama/ollama/issues/3180
2,190,097,551
I_kwDOJ0Z1Ps6CijyP
3,180
Add support for AMD iGPUs, such as gfx1103.
{ "login": "louwangzhiyuY", "id": 6920071, "node_id": "MDQ6VXNlcjY5MjAwNzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6920071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louwangzhiyuY", "html_url": "https://github.com/louwangzhiyuY", "followers_url": "https://api.github....
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-03-16T15:23:03
2024-07-02T04:13:18
2024-03-16T18:15:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? _No response_ ### How should we solve this? _No response_ ### What is the impact of not solving this? _No response_ ### Anything else? _No response_
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3180/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3180/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4622
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4622/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4622/comments
https://api.github.com/repos/ollama/ollama/issues/4622/events
https://github.com/ollama/ollama/pull/4622
2,316,271,617
PR_kwDOJ0Z1Ps5wgxVN
4,622
Update README.md
{ "login": "rajatrocks", "id": 7295726, "node_id": "MDQ6VXNlcjcyOTU3MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/7295726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajatrocks", "html_url": "https://github.com/rajatrocks", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2024-05-24T21:13:58
2024-11-21T08:38:03
2024-11-21T08:38:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4622", "html_url": "https://github.com/ollama/ollama/pull/4622", "diff_url": "https://github.com/ollama/ollama/pull/4622.diff", "patch_url": "https://github.com/ollama/ollama/pull/4622.patch", "merged_at": null }
Added the Ask Steve Chrome Extension, which enables you to connect Ollama: https://www.asksteve.to/docs/local-models#how-do-i-use-ollama-with-ask-steve
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4622/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1505
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1505/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1505/comments
https://api.github.com/repos/ollama/ollama/issues/1505/events
https://github.com/ollama/ollama/pull/1505
2,040,029,408
PR_kwDOJ0Z1Ps5h6gnD
1,505
set version string to current (pre)release
{ "login": "tohn", "id": 427159, "node_id": "MDQ6VXNlcjQyNzE1OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/427159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tohn", "html_url": "https://github.com/tohn", "followers_url": "https://api.github.com/users/tohn/followers", ...
[]
closed
false
null
[]
null
5
2023-12-13T16:06:18
2024-01-06T19:39:19
2023-12-13T16:15:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1505", "html_url": "https://github.com/ollama/ollama/pull/1505", "diff_url": "https://github.com/ollama/ollama/pull/1505.diff", "patch_url": "https://github.com/ollama/ollama/pull/1505.patch", "merged_at": null }
according to the github tags and by using <https://semver.org>
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1505/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4308
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4308/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4308/comments
https://api.github.com/repos/ollama/ollama/issues/4308/events
https://github.com/ollama/ollama/issues/4308
2,288,937,781
I_kwDOJ0Z1Ps6Ibms1
4,308
I have uploaded this model, but it is not shown on my page.
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-05-10T05:04:33
2024-05-10T05:12:57
2024-05-10T05:12:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? <img width="1091" alt="截屏2024-05-10 13 00 11" src="https://github.com/ollama/ollama/assets/146583103/f809d253-4deb-4224-99f8-3a20501ad869"> I have uploaded this model, but it is not shown on my page. ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version ...
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4308/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3406
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3406/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3406/comments
https://api.github.com/repos/ollama/ollama/issues/3406/events
https://github.com/ollama/ollama/issues/3406
2,215,082,120
I_kwDOJ0Z1Ps6EB3iI
3,406
Official arm64 build does not work on Jetson Nano Orin
{ "login": "gab0220", "id": 127881776, "node_id": "U_kgDOB59SMA", "avatar_url": "https://avatars.githubusercontent.com/u/127881776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gab0220", "html_url": "https://github.com/gab0220", "followers_url": "https://api.github.com/users/gab0220/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
21
2024-03-29T10:26:26
2024-09-13T12:34:00
2024-05-21T17:58:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello everyone, thank you for your work. I'm using a Jetson Nano Orin. Following #3098, a few days ago I did a ```git checkout``` of the #2279 commit and installed this version on my device. It works. Today I tried to: * Install the v0.1.30 using [this tutorial](https://github.com/ollama/o...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3406/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3406/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7194
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7194/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7194/comments
https://api.github.com/repos/ollama/ollama/issues/7194/events
https://github.com/ollama/ollama/pull/7194
2,584,278,438
PR_kwDOJ0Z1Ps5-dUf9
7,194
Update README.md - New Mobile Client
{ "login": "Calvicii", "id": 80085756, "node_id": "MDQ6VXNlcjgwMDg1NzU2", "avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Calvicii", "html_url": "https://github.com/Calvicii", "followers_url": "https://api.github.com/users/Cal...
[]
closed
false
null
[]
null
2
2024-10-13T21:18:42
2024-11-21T07:58:45
2024-11-21T07:58:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7194", "html_url": "https://github.com/ollama/ollama/pull/7194", "diff_url": "https://github.com/ollama/ollama/pull/7194.diff", "patch_url": "https://github.com/ollama/ollama/pull/7194.patch", "merged_at": null }
Added my mobile Ollama client to the list.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7194/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2304
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2304/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2304/comments
https://api.github.com/repos/ollama/ollama/issues/2304/events
https://github.com/ollama/ollama/issues/2304
2,111,802,827
I_kwDOJ0Z1Ps59343L
2,304
Adding Yi-VL models
{ "login": "ddpasa", "id": 112642920, "node_id": "U_kgDOBrbLaA", "avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddpasa", "html_url": "https://github.com/ddpasa", "followers_url": "https://api.github.com/users/ddpasa/follower...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
4
2024-02-01T08:04:24
2024-11-15T09:13:34
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Yi LM [is supported in ollama](https://ollama.ai/library/yi), but I don't think the multimodal Yi-VL models are. These are supposed to be very good, so it would be great to have them. Here are the huggingface links: 6B: https://huggingface.co/01-ai/Yi-VL-6B 34B: https://huggingface.co/01-ai/Yi-VL-34B
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2304/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/2304/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5046
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5046/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5046/comments
https://api.github.com/repos/ollama/ollama/issues/5046/events
https://github.com/ollama/ollama/pull/5046
2,353,725,529
PR_kwDOJ0Z1Ps5ygFhm
5,046
server: longer timeout in `TestRequests`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-06-14T16:37:12
2024-06-14T16:48:25
2024-06-14T16:48:25
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5046", "html_url": "https://github.com/ollama/ollama/pull/5046", "diff_url": "https://github.com/ollama/ollama/pull/5046.diff", "patch_url": "https://github.com/ollama/ollama/pull/5046.patch", "merged_at": "2024-06-14T16:48:25" }
@dhiltgen this seems like a band-aid - is there something deeper we should fix in this test?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5046/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2557
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2557/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2557/comments
https://api.github.com/repos/ollama/ollama/issues/2557/events
https://github.com/ollama/ollama/issues/2557
2,139,867,544
I_kwDOJ0Z1Ps5_i8mY
2,557
How can I use ollama in pycharm
{ "login": "Matrixsun", "id": 11818446, "node_id": "MDQ6VXNlcjExODE4NDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11818446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Matrixsun", "html_url": "https://github.com/Matrixsun", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
3
2024-02-17T07:02:06
2024-05-17T22:42:34
2024-05-17T22:42:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi all. I want to use ollama in PyCharm; how do I do it?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2557/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2941
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2941/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2941/comments
https://api.github.com/repos/ollama/ollama/issues/2941/events
https://github.com/ollama/ollama/issues/2941
2,170,179,604
I_kwDOJ0Z1Ps6BWlAU
2,941
Global Configuration Variables for Ollama
{ "login": "bkawakami", "id": 1881935, "node_id": "MDQ6VXNlcjE4ODE5MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1881935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bkawakami", "html_url": "https://github.com/bkawakami", "followers_url": "https://api.github.com/users/bk...
[]
closed
false
null
[]
null
7
2024-03-05T21:32:44
2025-01-30T00:53:36
2024-03-06T01:12:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am currently using Ollama for running LLMs locally and am greatly appreciative of the functionality it offers. However, I've come across a point of confusion regarding the global configuration of the Ollama environment, especially when it comes to setting it up for different use cases. Could you provide more detai...
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2941/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/619
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/619/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/619/comments
https://api.github.com/repos/ollama/ollama/issues/619/events
https://github.com/ollama/ollama/issues/619
1,914,831,152
I_kwDOJ0Z1Ps5yIgEw
619
Segfault when using /show parameters
{ "login": "lstep", "id": 2028, "node_id": "MDQ6VXNlcjIwMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lstep", "html_url": "https://github.com/lstep", "followers_url": "https://api.github.com/users/lstep/followers", "fol...
[]
closed
false
null
[]
null
1
2023-09-27T06:58:07
2023-09-28T21:25:24
2023-09-28T21:25:24
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
From a fresh install (`curl https://ollama.ai/install.sh | sh` on Ubuntu Linux 22.04) using `ollama run codeup:13b-llama2-chat-q4_K_M`, it runs, but when I try `/show parameters`, it generates a segfault: ``` >>> /list NAME                           ID              SIZE      MODIFIED codeup:13b-llama2-chat-q4_K_M d9c41194...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/619/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3538
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3538/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3538/comments
https://api.github.com/repos/ollama/ollama/issues/3538/events
https://github.com/ollama/ollama/issues/3538
2,230,925,586
I_kwDOJ0Z1Ps6E-TkS
3,538
binary install on a cluster produces extra information in responses in both cpu and gpu mode
{ "login": "bozo32", "id": 102033973, "node_id": "U_kgDOBhTqNQ", "avatar_url": "https://avatars.githubusercontent.com/u/102033973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bozo32", "html_url": "https://github.com/bozo32", "followers_url": "https://api.github.com/users/bozo32/follower...
[ { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
4
2024-04-08T11:16:47
2024-06-22T00:12:52
2024-06-22T00:12:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I installed ollama on the university cluster following the instructions here: The download page has a list of assets, one of them is a binary for Linux named ollama-linux-amd64. Just download it to your Linux cluster, then run the following: start the server in the background ./ollama-linux-...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3538/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5438
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5438/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5438/comments
https://api.github.com/repos/ollama/ollama/issues/5438/events
https://github.com/ollama/ollama/pull/5438
2,386,676,661
PR_kwDOJ0Z1Ps50ON1l
5,438
Centos 7 EOL broke mirrors
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-07-02T16:23:13
2024-07-02T16:28:02
2024-07-02T16:28:00
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5438", "html_url": "https://github.com/ollama/ollama/pull/5438", "diff_url": "https://github.com/ollama/ollama/pull/5438.diff", "patch_url": "https://github.com/ollama/ollama/pull/5438.patch", "merged_at": "2024-07-02T16:28:00" }
As of July 1st 2024: Could not resolve host: mirrorlist.centos.org This is expected due to EOL dates.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5438/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/825
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/825/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/825/comments
https://api.github.com/repos/ollama/ollama/issues/825/events
https://github.com/ollama/ollama/pull/825
1,948,172,334
PR_kwDOJ0Z1Ps5dDqnz
825
relay CUDA errors to the client
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-10-17T20:13:05
2023-10-18T19:36:58
2023-10-18T19:36:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/825", "html_url": "https://github.com/ollama/ollama/pull/825", "diff_url": "https://github.com/ollama/ollama/pull/825.diff", "patch_url": "https://github.com/ollama/ollama/pull/825.patch", "merged_at": "2023-10-18T19:36:57" }
When the llama.cpp runner failed with CUDA error the error message was not relayed to the client. Instead the client would only see an EOF error. Update the llama.cpp subprocess log monitor to capture CUDA errors and relay them to the client. Before: ``` Error: error reading llm response: unexpected EOF ``` Af...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/825/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1210
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1210/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1210/comments
https://api.github.com/repos/ollama/ollama/issues/1210/events
https://github.com/ollama/ollama/pull/1210
2,002,716,154
PR_kwDOJ0Z1Ps5f8AmN
1,210
Add `user` to prompt template
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
2
2023-11-20T17:54:13
2024-02-20T04:22:26
2024-02-20T04:22:26
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1210", "html_url": "https://github.com/ollama/ollama/pull/1210", "diff_url": "https://github.com/ollama/ollama/pull/1210.diff", "patch_url": "https://github.com/ollama/ollama/pull/1210.patch", "merged_at": null }
With the upcoming `messages` API change the lack of symmetry between the `user` role and the `prompt` in the template is confusing. This change proposes adding `{{ .User }}` as an alternative to `{{ .Prompt }}` for the model template. Here's an example: ``` FROM llama2 PARAMETER temperature 1 TEMPLATE """[INST] ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1210/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2980
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2980/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2980/comments
https://api.github.com/repos/ollama/ollama/issues/2980/events
https://github.com/ollama/ollama/issues/2980
2,173,757,611
I_kwDOJ0Z1Ps6BkOir
2,980
Uninstall CLI ollama on Mac
{ "login": "X1AOX1A", "id": 52992366, "node_id": "MDQ6VXNlcjUyOTkyMzY2", "avatar_url": "https://avatars.githubusercontent.com/u/52992366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/X1AOX1A", "html_url": "https://github.com/X1AOX1A", "followers_url": "https://api.github.com/users/X1AOX1...
[]
closed
false
null
[]
null
3
2024-03-07T12:31:15
2024-05-31T04:22:41
2024-03-07T16:21:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
How to uninstall CLI ollama on Mac?
{ "login": "X1AOX1A", "id": 52992366, "node_id": "MDQ6VXNlcjUyOTkyMzY2", "avatar_url": "https://avatars.githubusercontent.com/u/52992366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/X1AOX1A", "html_url": "https://github.com/X1AOX1A", "followers_url": "https://api.github.com/users/X1AOX1...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2980/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1836
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1836/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1836/comments
https://api.github.com/repos/ollama/ollama/issues/1836/events
https://github.com/ollama/ollama/issues/1836
2,069,041,766
I_kwDOJ0Z1Ps57UxJm
1,836
Where are Ollama models saved in Linux (in WSL on Windows)?
{ "login": "zephirusgit", "id": 20031912, "node_id": "MDQ6VXNlcjIwMDMxOTEy", "avatar_url": "https://avatars.githubusercontent.com/u/20031912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zephirusgit", "html_url": "https://github.com/zephirusgit", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
6
2024-01-07T08:35:11
2024-03-11T20:42:30
2024-03-11T20:42:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, I'm running Ollama in WSL (Windows Subsystem for Linux) on Windows. Now, my problem is that when you download a new model (llama2, llava) or create one, these models are downloaded, or copied, into some folder. I imagine in WSL? In Linux? Or in Windows? For example, I wanted to run the mixtral model, which occupi...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1836/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6993
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6993/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6993/comments
https://api.github.com/repos/ollama/ollama/issues/6993/events
https://github.com/ollama/ollama/issues/6993
2,551,916,302
I_kwDOJ0Z1Ps6YGycO
6,993
llama3.1:70b CPU bottleneck?
{ "login": "jasonliuspark123", "id": 71071196, "node_id": "MDQ6VXNlcjcxMDcxMTk2", "avatar_url": "https://avatars.githubusercontent.com/u/71071196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jasonliuspark123", "html_url": "https://github.com/jasonliuspark123", "followers_url": "https://...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-09-27T03:30:11
2024-09-28T23:26:41
2024-09-28T23:26:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? CPU only use one core at 100%, while gpu cores mostly run at at less than 20%. Model is not responding at good speed. I'm wondering if this usage of one cpu core becomes the bottle neck for the performance. I have read https://github.com/ggerganov/llama.cpp/issues/8684, but have not s...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6993/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5033
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5033/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5033/comments
https://api.github.com/repos/ollama/ollama/issues/5033/events
https://github.com/ollama/ollama/pull/5033
2,352,045,586
PR_kwDOJ0Z1Ps5yaX11
5,033
Add ModifiedAt Field to /api/show
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
[]
closed
false
null
[]
null
0
2024-06-13T20:53:22
2024-06-16T03:53:57
2024-06-16T03:53:57
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5033", "html_url": "https://github.com/ollama/ollama/pull/5033", "diff_url": "https://github.com/ollama/ollama/pull/5033.diff", "patch_url": "https://github.com/ollama/ollama/pull/5033.patch", "merged_at": "2024-06-16T03:53:56" }
Changed `model` variable name to `m` due to `ParseName` function from `model `package E.g. ... ``` "template": "[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]", "details": { "parent_model": "", "format": "gguf", "family": "llama", "families": [ "llama", ...
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5033/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1181
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1181/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1181/comments
https://api.github.com/repos/ollama/ollama/issues/1181/events
https://github.com/ollama/ollama/issues/1181
2,000,001,657
I_kwDOJ0Z1Ps53NZp5
1,181
error: invalid cross-device link
{ "login": "0xRavenBlack", "id": 71230759, "node_id": "MDQ6VXNlcjcxMjMwNzU5", "avatar_url": "https://avatars.githubusercontent.com/u/71230759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0xRavenBlack", "html_url": "https://github.com/0xRavenBlack", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
1
2023-11-17T22:06:17
2023-11-20T04:32:24
2023-11-18T05:54:55
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Description: When attempting to create a new model using the provided Hugging Face model (https://huggingface.co/TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF) with the following command: ollama create game-mistral-7b -f ./Modelfile an error occurs during the process, resulting in the following error message: ...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1181/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6076
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6076/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6076/comments
https://api.github.com/repos/ollama/ollama/issues/6076/events
https://github.com/ollama/ollama/issues/6076
2,438,123,498
I_kwDOJ0Z1Ps6RUs_q
6,076
add mamba
{ "login": "windkwbs", "id": 129468439, "node_id": "U_kgDOB7eIFw", "avatar_url": "https://avatars.githubusercontent.com/u/129468439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/windkwbs", "html_url": "https://github.com/windkwbs", "followers_url": "https://api.github.com/users/windkwbs/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
4
2024-07-30T15:31:35
2024-10-01T02:46:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
[mamba-codestral-7B-v0.1](https://huggingface.co/mistralai/mamba-codestral-7B-v0.1)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6076/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6076/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/411
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/411/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/411/comments
https://api.github.com/repos/ollama/ollama/issues/411/events
https://github.com/ollama/ollama/pull/411
1,867,405,634
PR_kwDOJ0Z1Ps5Y0G85
411
patch llama.cpp for 34B
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2023-08-25T17:07:10
2023-08-25T18:59:06
2023-08-25T18:59:05
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/411", "html_url": "https://github.com/ollama/ollama/pull/411", "diff_url": "https://github.com/ollama/ollama/pull/411.diff", "patch_url": "https://github.com/ollama/ollama/pull/411.patch", "merged_at": "2023-08-25T18:59:05" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/411/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6475
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6475/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6475/comments
https://api.github.com/repos/ollama/ollama/issues/6475/events
https://github.com/ollama/ollama/issues/6475
2,482,966,667
I_kwDOJ0Z1Ps6T_xCL
6,475
The issue of high CPU utilization in Ollama
{ "login": "fenggaobj", "id": 13727907, "node_id": "MDQ6VXNlcjEzNzI3OTA3", "avatar_url": "https://avatars.githubusercontent.com/u/13727907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fenggaobj", "html_url": "https://github.com/fenggaobj", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
2
2024-08-23T11:46:39
2024-08-27T21:18:31
2024-08-27T21:18:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? "ollama run qwen2" command loads until timeout **Seeking help:** How can I resolve this high CPU utilization issue with Ollama? Is it possible to configure JIT compilation to support multithreading? **Please review the following analysis process.** **(1) Environment and version info...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6475/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8068
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8068/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8068/comments
https://api.github.com/repos/ollama/ollama/issues/8068/events
https://github.com/ollama/ollama/issues/8068
2,735,516,995
I_kwDOJ0Z1Ps6jDK1D
8,068
0.5.2 does not use cuda on multi-gpu nvidia setups
{ "login": "frenzybiscuit", "id": 190028151, "node_id": "U_kgDOC1OZdw", "avatar_url": "https://avatars.githubusercontent.com/u/190028151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frenzybiscuit", "html_url": "https://github.com/frenzybiscuit", "followers_url": "https://api.github.com/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
6
2024-12-12T10:37:50
2024-12-13T19:57:10
2024-12-13T19:57:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Basically, title. 0.5.2 doesn't use cuda (or the GPU at all) on multi GPU setups. It reverts to CPU only. Output below. ``` root@helga:/usr/share/ollama/.ollama# journalctl -u ollama --no-pager -f Dec 12 02:31:35 sub.domain.tld ollama[95532]: [GIN-debug] HEAD /api/tags ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8068/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1360
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1360/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1360/comments
https://api.github.com/repos/ollama/ollama/issues/1360/events
https://github.com/ollama/ollama/pull/1360
2,022,380,507
PR_kwDOJ0Z1Ps5g-hHn
1,360
Add link to Ollama Modelfiles repository
{ "login": "tusharhero", "id": 54012021, "node_id": "MDQ6VXNlcjU0MDEyMDIx", "avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tusharhero", "html_url": "https://github.com/tusharhero", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
3
2023-12-03T05:41:10
2023-12-05T10:40:52
2023-12-05T05:09:38
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1360", "html_url": "https://github.com/ollama/ollama/pull/1360", "diff_url": "https://github.com/ollama/ollama/pull/1360.diff", "patch_url": "https://github.com/ollama/ollama/pull/1360.patch", "merged_at": null }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1360/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6000
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6000/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6000/comments
https://api.github.com/repos/ollama/ollama/issues/6000/events
https://github.com/ollama/ollama/issues/6000
2,433,025,574
I_kwDOJ0Z1Ps6RBQYm
6,000
Cli broken with the new tools update
{ "login": "anandanand84dv", "id": 170383551, "node_id": "U_kgDOCifYvw", "avatar_url": "https://avatars.githubusercontent.com/u/170383551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anandanand84dv", "html_url": "https://github.com/anandanand84dv", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-07-26T21:53:44
2024-07-26T21:57:48
2024-07-26T21:57:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? After the new tools implementation. It errors out on the second question. ```Error: template: :28:7: executing "" at <.ToolCalls>: can't evaluate field ToolCalls in type *api.Message``` ![image](https://github.com/user-attachments/assets/b4b283ef-7f4e-498f-97a7-7afa71b0e32a) ### OS Lin...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6000/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2521
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2521/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2521/comments
https://api.github.com/repos/ollama/ollama/issues/2521/events
https://github.com/ollama/ollama/issues/2521
2,137,406,899
I_kwDOJ0Z1Ps5_Zj2z
2,521
Restart to update shows twice on Windows
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-02-15T20:37:34
2024-02-17T01:23:38
2024-02-17T01:23:38
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![image](https://github.com/ollama/ollama/assets/251292/11aa2472-332f-4b72-b916-d9db6055bad4)
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2521/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2014
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2014/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2014/comments
https://api.github.com/repos/ollama/ollama/issues/2014/events
https://github.com/ollama/ollama/issues/2014
2,083,517,238
I_kwDOJ0Z1Ps58L_M2
2,014
How to make output consistent
{ "login": "Fei-Wang", "id": 11441526, "node_id": "MDQ6VXNlcjExNDQxNTI2", "avatar_url": "https://avatars.githubusercontent.com/u/11441526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fei-Wang", "html_url": "https://github.com/Fei-Wang", "followers_url": "https://api.github.com/users/Fei...
[]
closed
false
null
[]
null
6
2024-01-16T10:03:32
2024-01-27T01:07:24
2024-01-27T01:07:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Setting seed and temperature cannot make the output consistent. <img width="1087" alt="image" src="https://github.com/jmorganca/ollama/assets/11441526/9a00ac1f-c120-4211-9b2e-fcec627f69e1">
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2014/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1951
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1951/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1951/comments
https://api.github.com/repos/ollama/ollama/issues/1951/events
https://github.com/ollama/ollama/issues/1951
2,078,868,898
I_kwDOJ0Z1Ps576QWi
1,951
Ollama GPU Process does not automatically terminate after inactivity
{ "login": "chereszabor", "id": 7354324, "node_id": "MDQ6VXNlcjczNTQzMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7354324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chereszabor", "html_url": "https://github.com/chereszabor", "followers_url": "https://api.github.com/us...
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
3
2024-01-12T13:36:15
2024-01-18T16:58:53
2024-01-18T16:58:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Noticed with recent releases the ollama process does not get automatically terminated after a period of inactivity, idling the GPU process and keeping the last used model in VRAM. This also increases the time required to load a new model into VRAM and increases 'standby' power usage of the GPU. I am deploying ollama...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1951/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/621
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/621/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/621/comments
https://api.github.com/repos/ollama/ollama/issues/621/events
https://github.com/ollama/ollama/pull/621
1,915,255,182
PR_kwDOJ0Z1Ps5bUqpR
621
Added missing return preventing SIGSEGV because of missing resp
{ "login": "lstep", "id": 2028, "node_id": "MDQ6VXNlcjIwMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lstep", "html_url": "https://github.com/lstep", "followers_url": "https://api.github.com/users/lstep/followers", "fol...
[]
closed
false
null
[]
null
1
2023-09-27T10:49:31
2023-09-28T21:25:23
2023-09-28T21:25:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/621", "html_url": "https://github.com/ollama/ollama/pull/621", "diff_url": "https://github.com/ollama/ollama/pull/621.diff", "patch_url": "https://github.com/ollama/ollama/pull/621.patch", "merged_at": "2023-09-28T21:25:23" }
Closes #619
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/621/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6401
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6401/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6401/comments
https://api.github.com/repos/ollama/ollama/issues/6401/events
https://github.com/ollama/ollama/issues/6401
2,471,629,115
I_kwDOJ0Z1Ps6TUhE7
6,401
embeddings models keep_alive
{ "login": "Abdulrahman392011", "id": 175052671, "node_id": "U_kgDOCm8Xfw", "avatar_url": "https://avatars.githubusercontent.com/u/175052671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abdulrahman392011", "html_url": "https://github.com/Abdulrahman392011", "followers_url": "https://api...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-08-17T18:46:16
2024-08-17T23:29:43
2024-08-17T23:29:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I use embeddings models a lot and every time it loads the model do the vectoring and then unload it immediately. when I try to keep alive by using this command $ curl http://localhost:11434/api/generate -d '{"model": "mxbai-embed-large:latest", "keep_alive": -1}' it tells me that this model isn't a generative m...
{ "login": "Abdulrahman392011", "id": 175052671, "node_id": "U_kgDOCm8Xfw", "avatar_url": "https://avatars.githubusercontent.com/u/175052671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abdulrahman392011", "html_url": "https://github.com/Abdulrahman392011", "followers_url": "https://api...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6401/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6401/timeline
null
completed
false