| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4637 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4637/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4637/comments | https://api.github.com/repos/ollama/ollama/issues/4637/events | https://github.com/ollama/ollama/issues/4637 | 2,317,069,953 | I_kwDOJ0Z1Ps6KG66B | 4,637 | windows gpu memory.available always be one value | {
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofank... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-05-25T14:54:38 | 2024-05-31T19:59:43 | 2024-05-31T19:59:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Windows GPU memory.available is always the same value, no matter how many models are loaded
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.38 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4637/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/268 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/268/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/268/comments | https://api.github.com/repos/ollama/ollama/issues/268/events | https://github.com/ollama/ollama/pull/268 | 1,835,276,983 | PR_kwDOJ0Z1Ps5XHyWq | 268 | Update README.md | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2023-08-03T15:23:07 | 2023-08-03T15:23:33 | 2023-08-03T15:23:32 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/268",
"html_url": "https://github.com/ollama/ollama/pull/268",
"diff_url": "https://github.com/ollama/ollama/pull/268.diff",
"patch_url": "https://github.com/ollama/ollama/pull/268.patch",
"merged_at": "2023-08-03T15:23:32"
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/268/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/673 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/673/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/673/comments | https://api.github.com/repos/ollama/ollama/issues/673/events | https://github.com/ollama/ollama/pull/673 | 1,922,391,010 | PR_kwDOJ0Z1Ps5bsqjJ | 673 | clean up num_gpu calculation code | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-02T18:11:47 | 2023-10-02T18:53:42 | 2023-10-02T18:53:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/673",
"html_url": "https://github.com/ollama/ollama/pull/673",
"diff_url": "https://github.com/ollama/ollama/pull/673.diff",
"patch_url": "https://github.com/ollama/ollama/pull/673.patch",
"merged_at": "2023-10-02T18:53:42"
} | there were some unreachable code paths and unused variables here from iterations on an old branch, remove them | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/673/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6554 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6554/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6554/comments | https://api.github.com/repos/ollama/ollama/issues/6554/events | https://github.com/ollama/ollama/issues/6554 | 2,494,275,014 | I_kwDOJ0Z1Ps6Uq53G | 6,554 | Error: llama runner process has terminated: exit status 0xc0000135 | {
"login": "balaji1732000",
"id": 70811241,
"node_id": "MDQ6VXNlcjcwODExMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/70811241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balaji1732000",
"html_url": "https://github.com/balaji1732000",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-08-29T12:16:26 | 2024-09-01T23:20:14 | 2024-09-01T23:20:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I followed the documents below to run the Ollama model on GPU using Intel IPEX:
https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
https://www.intel.com/content/www/us/en/content-details/826081/running-ollama-with-open-webui-on-intel-hardware-pl... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6554/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6949/comments | https://api.github.com/repos/ollama/ollama/issues/6949/events | https://github.com/ollama/ollama/issues/6949 | 2,547,140,889 | I_kwDOJ0Z1Ps6X0kkZ | 6,949 | Is there a better model that can accurately recognize image information? (Downloaded several multimodal models; none recognize images well) | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/follower... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 5 | 2024-09-25T07:09:37 | 2025-01-08T00:00:26 | 2025-01-08T00:00:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Using fastgpt --onapi to call local Ollama models, I have downloaded several multimodal models, but the image recognition accuracy is not good. Is there a better model that can accurately recognize image information? | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6949/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8129 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8129/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8129/comments | https://api.github.com/repos/ollama/ollama/issues/8129/events | https://github.com/ollama/ollama/pull/8129 | 2,743,834,133 | PR_kwDOJ0Z1Ps6Fb0jh | 8,129 | build: Enable -mf16c and -mfma in ROCm on x86 only | {
"login": "hack3ric",
"id": 18899791,
"node_id": "MDQ6VXNlcjE4ODk5Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/18899791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hack3ric",
"html_url": "https://github.com/hack3ric",
"followers_url": "https://api.github.com/users/hac... | [] | open | false | null | [] | null | 0 | 2024-12-17T03:20:59 | 2024-12-17T03:20:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8129",
"html_url": "https://github.com/ollama/ollama/pull/8129",
"diff_url": "https://github.com/ollama/ollama/pull/8129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8129.patch",
"merged_at": null
} | These flags are not available outside of x86. I've successfully built Ollama with ROCm support on RISC-V hardware and Arch Linux RISC-V. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8129/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8636 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8636/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8636/comments | https://api.github.com/repos/ollama/ollama/issues/8636/events | https://github.com/ollama/ollama/issues/8636 | 2,815,799,891 | I_kwDOJ0Z1Ps6n1bJT | 8,636 | Upload compressed package file, unable to decompress and error reported | {
"login": "terling",
"id": 174825001,
"node_id": "U_kgDOCmueKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174825001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terling",
"html_url": "https://github.com/terling",
"followers_url": "https://api.github.com/users/terling/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-28T14:13:01 | 2025-01-29T23:29:46 | 2025-01-29T23:29:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Thanks for this great program, I love it! However, I uploaded a compressed package containing the project source code in the dialog interface, and an error occurred when the program was run. Can this problem be solved?

... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2328/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/901 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/901/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/901/comments | https://api.github.com/repos/ollama/ollama/issues/901/events | https://github.com/ollama/ollama/issues/901 | 1,960,492,610 | I_kwDOJ0Z1Ps502r5C | 901 | Setting correct rope frequency on llama2-chinese | {
"login": "ddv404",
"id": 97394404,
"node_id": "U_kgDOBc4e5A",
"avatar_url": "https://avatars.githubusercontent.com/u/97394404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddv404",
"html_url": "https://github.com/ddv404",
"followers_url": "https://api.github.com/users/ddv404/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2023-10-25T04:09:08 | 2024-04-17T02:15:38 | 2024-04-17T02:15:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1566" alt="image" src="https://github.com/jmorganca/ollama/assets/97394404/007005ae-456c-4b66-a509-7c57849e79ec">
The answer is always displayed with line breaks. Why is that? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/901/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1868 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1868/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1868/comments | https://api.github.com/repos/ollama/ollama/issues/1868/events | https://github.com/ollama/ollama/issues/1868 | 2,072,575,275 | I_kwDOJ0Z1Ps57iP0r | 1,868 | ollama in a docker - can't check healthiness - Support Ollama under Rosetta | {
"login": "FreakDev",
"id": 187670,
"node_id": "MDQ6VXNlcjE4NzY3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/187670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreakDev",
"html_url": "https://github.com/FreakDev",
"followers_url": "https://api.github.com/users/FreakDe... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-01-09T15:14:05 | 2024-01-11T22:00:49 | 2024-01-11T22:00:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello!
I'm trying to set up Ollama to run in a Docker container, in order to have it run in a RunPod serverless function. To do so, I'd like to pull a model file into my container image (embed the model file into the Docker image).
Basically, I'd like to have a script like this that runs during the build of the image ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1868/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1257 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1257/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1257/comments | https://api.github.com/repos/ollama/ollama/issues/1257/events | https://github.com/ollama/ollama/pull/1257 | 2,008,758,473 | PR_kwDOJ0Z1Ps5gQggk | 1,257 | env variable to configure defaultSessionDuration | {
"login": "Pr0dt0s",
"id": 24417072,
"node_id": "MDQ6VXNlcjI0NDE3MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24417072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pr0dt0s",
"html_url": "https://github.com/Pr0dt0s",
"followers_url": "https://api.github.com/users/Pr0dt0... | [] | closed | false | null | [] | null | 5 | 2023-11-23T20:26:27 | 2024-05-07T23:47:45 | 2024-05-07T23:47:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1257",
"html_url": "https://github.com/ollama/ollama/pull/1257",
"diff_url": "https://github.com/ollama/ollama/pull/1257.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1257.patch",
"merged_at": null
} | This adds a simple environment variable to configure the defaultSessionDuration, which is currently hardcoded as 5 minutes.
Fixes issues #1048 and #931 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1257/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1257/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3705 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3705/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3705/comments | https://api.github.com/repos/ollama/ollama/issues/3705/events | https://github.com/ollama/ollama/pull/3705 | 2,248,777,188 | PR_kwDOJ0Z1Ps5s8vUi | 3,705 | Update api.md | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 0 | 2024-04-17T17:04:11 | 2024-04-20T19:17:04 | 2024-04-20T19:17:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3705",
"html_url": "https://github.com/ollama/ollama/pull/3705",
"diff_url": "https://github.com/ollama/ollama/pull/3705.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3705.patch",
"merged_at": "2024-04-20T19:17:03"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3705/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1008 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1008/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1008/comments | https://api.github.com/repos/ollama/ollama/issues/1008/events | https://github.com/ollama/ollama/issues/1008 | 1,977,961,592 | I_kwDOJ0Z1Ps515Ux4 | 1,008 | Message repeated infinitly with last version of Zephyr and Ollama 0.1.8 | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 3 | 2023-11-05T20:59:10 | 2023-11-24T10:43:03 | 2023-11-24T10:43:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed Ollama 0.1.8 and ran Zephyr. I asked in French how many languages Zephyr could translate, and Zephyr answered me by repeating the same paragraph over and over:
Note: Rates are approximate and may vary depending on the complexity of the text, its
length, and the availability of qualified translators in the lan... | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1008/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2457 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2457/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2457/comments | https://api.github.com/repos/ollama/ollama/issues/2457/events | https://github.com/ollama/ollama/issues/2457 | 2,129,222,077 | I_kwDOJ0Z1Ps5-6Vm9 | 2,457 | Support for more image types | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-02-11T23:27:16 | 2024-02-12T07:35:43 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently image models such as Llava only support `png` and `jpeg`. Add support for more such as `webp`, `avif` and others. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2457/reactions",
"total_count": 7,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2457/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1497 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1497/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1497/comments | https://api.github.com/repos/ollama/ollama/issues/1497/events | https://github.com/ollama/ollama/pull/1497 | 2,039,058,173 | PR_kwDOJ0Z1Ps5h3LQo | 1,497 | patches: Clean up llama.cpp patches, update submodules to latest upstream | {
"login": "tmc",
"id": 3977,
"node_id": "MDQ6VXNlcjM5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmc",
"html_url": "https://github.com/tmc",
"followers_url": "https://api.github.com/users/tmc/followers",
"following_u... | [] | closed | false | null | [] | null | 2 | 2023-12-13T06:41:36 | 2024-01-18T22:28:57 | 2024-01-18T22:28:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1497",
"html_url": "https://github.com/ollama/ollama/pull/1497",
"diff_url": "https://github.com/ollama/ollama/pull/1497.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1497.patch",
"merged_at": null
} | This updates llama.cpp submodules to latest (fecac4) and removes the patches that have landed in llama.cpp already (most of them).
Since the "ggml" source tree doesn't appear to need to be patched anymore, it seems like we can reduce down to one submodule here, but I did not perform that refactor for simplicity; let m... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1497/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1497/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2616 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2616/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2616/comments | https://api.github.com/repos/ollama/ollama/issues/2616/events | https://github.com/ollama/ollama/issues/2616 | 2,144,627,054 | I_kwDOJ0Z1Ps5_1Glu | 2,616 | Change Bind IP address | {
"login": "Jacoub",
"id": 11414612,
"node_id": "MDQ6VXNlcjExNDE0NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11414612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jacoub",
"html_url": "https://github.com/Jacoub",
"followers_url": "https://api.github.com/users/Jacoub/fo... | [] | closed | false | null | [] | null | 4 | 2024-02-20T15:19:58 | 2024-05-31T22:13:44 | 2024-02-20T18:49:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Tried changing bind localhost:11434 to IP:11434 to server requests from Ollama WEBUI running on a separate docker host | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2616/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4008 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4008/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4008/comments | https://api.github.com/repos/ollama/ollama/issues/4008/events | https://github.com/ollama/ollama/issues/4008 | 2,267,745,810 | I_kwDOJ0Z1Ps6HKw4S | 4,008 | Compute Capability Misidentification with PhysX cudart library | {
"login": "aaronjrod",
"id": 35236356,
"node_id": "MDQ6VXNlcjM1MjM2MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35236356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjrod",
"html_url": "https://github.com/aaronjrod",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 24 | 2024-04-28T19:03:26 | 2024-09-25T17:17:55 | 2024-05-06T20:30:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama server incorrectly identifies the Compute Capability of my GPU (detects 1.0 instead of 5.2). It seems to me that this is due to a recent change in [gpu/gpu.go](https://github.com/ollama/ollama/commit/34b9db5afc43b352c5ef04fe6ef52684bfdd57b5#diff-b3bde438f86c17903c484c6a1f48f7c98437f5ed1... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4008/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6060 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6060/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6060/comments | https://api.github.com/repos/ollama/ollama/issues/6060/events | https://github.com/ollama/ollama/issues/6060 | 2,436,182,125 | I_kwDOJ0Z1Ps6RNTBt | 6,060 | Update template: Llama 3.1 | {
"login": "MaxJa4",
"id": 74194322,
"node_id": "MDQ6VXNlcjc0MTk0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/74194322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxJa4",
"html_url": "https://github.com/MaxJa4",
"followers_url": "https://api.github.com/users/MaxJa4/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 4 | 2024-07-29T19:28:54 | 2024-08-07T16:28:13 | 2024-08-07T16:28:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Meta / HF updated the tokenizer config (specifically the chat template) of all the Llama 3.1 (instruct) models a few hours ago:
- [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/commit/b2a4d0f33b41fcd59a6d31662cc63b8d53367e1e)
- [Meta-Llama-3.1-70B-Instruct](https://hugging... | {
"login": "MaxJa4",
"id": 74194322,
"node_id": "MDQ6VXNlcjc0MTk0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/74194322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxJa4",
"html_url": "https://github.com/MaxJa4",
"followers_url": "https://api.github.com/users/MaxJa4/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6060/reactions",
"total_count": 24,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6060/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5117 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5117/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5117/comments | https://api.github.com/repos/ollama/ollama/issues/5117/events | https://github.com/ollama/ollama/pull/5117 | 2,360,419,823 | PR_kwDOJ0Z1Ps5y26zJ | 5,117 | Handle models with divergent layer sizes | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-18T18:06:13 | 2024-06-18T18:36:54 | 2024-06-18T18:36:51 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5117",
"html_url": "https://github.com/ollama/ollama/pull/5117",
"diff_url": "https://github.com/ollama/ollama/pull/5117.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5117.patch",
"merged_at": "2024-06-18T18:36:51"
} | The recent refactoring of the memory prediction assumed all layers are the same size, but for some models (like deepseek-coder-v2) this is not the case, so our predictions were significantly off.
Without the fix:
```
time=2024-06-18T11:03:42.708-07:00 level=INFO source=memory.go:303 msg="offload to metal" layers... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5117/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/74 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/74/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/74/comments | https://api.github.com/repos/ollama/ollama/issues/74/events | https://github.com/ollama/ollama/pull/74 | 1,801,981,185 | PR_kwDOJ0Z1Ps5VXXLW | 74 | Timings | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-13T01:20:54 | 2023-07-13T17:17:22 | 2023-07-13T17:17:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/74",
"html_url": "https://github.com/ollama/ollama/pull/74",
"diff_url": "https://github.com/ollama/ollama/pull/74.diff",
"patch_url": "https://github.com/ollama/ollama/pull/74.patch",
"merged_at": "2023-07-13T17:17:14"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/74/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/74/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3628 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3628/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3628/comments | https://api.github.com/repos/ollama/ollama/issues/3628/events | https://github.com/ollama/ollama/issues/3628 | 2,241,654,325 | I_kwDOJ0Z1Ps6FnO41 | 3,628 | Fails to pull model | {
"login": "ahmetkca",
"id": 74574469,
"node_id": "MDQ6VXNlcjc0NTc0NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/74574469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmetkca",
"html_url": "https://github.com/ahmetkca",
"followers_url": "https://api.github.com/users/ahm... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-04-13T16:51:20 | 2024-04-15T23:11:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
❯ ollama pull gemma
pulling manifest
pulling ef311de6af9d... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 2.5 GB
Error: remove /Users/ahmetkca/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3628/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8130 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8130/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8130/comments | https://api.github.com/repos/ollama/ollama/issues/8130/events | https://github.com/ollama/ollama/pull/8130 | 2,743,930,230 | PR_kwDOJ0Z1Ps6FcJbC | 8,130 | llm: do not silently fail for supplied, but invalid formats | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-12-17T04:55:01 | 2024-12-17T15:54:57 | 2024-12-17T05:57:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8130",
"html_url": "https://github.com/ollama/ollama/pull/8130",
"diff_url": "https://github.com/ollama/ollama/pull/8130.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8130.patch",
"merged_at": "2024-12-17T05:57:49"
Changes in #8002 introduced fixes for bugs that mangled JSON Schemas.
They also fixed a bug where the server would silently fail when clients
requested invalid formats. Unfortunately, they also introduced a bug
where the server would reject requests with an empty format, which
should be allowed.
The change in #812... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8130/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5495 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5495/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5495/comments | https://api.github.com/repos/ollama/ollama/issues/5495/events | https://github.com/ollama/ollama/issues/5495 | 2,391,846,622 | I_kwDOJ0Z1Ps6OkK7e | 5,495 | The quality of the results returned by the embedding model become worse | {
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | open | false | null | [] | null | 6 | 2024-07-05T05:28:44 | 2024-12-08T15:26:38 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The quality of the results returned by the embedding model is now much worse than in the previous version.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.48 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5495/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2279 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2279/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2279/comments | https://api.github.com/repos/ollama/ollama/issues/2279/events | https://github.com/ollama/ollama/pull/2279 | 2,108,320,414 | PR_kwDOJ0Z1Ps5lefy5 | 2,279 | Add support for libcudart.so for CUDA devices (Adds Jetson support) | {
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/foll... | [] | closed | false | null | [] | null | 42 | 2024-01-30T16:50:18 | 2024-03-30T15:58:30 | 2024-03-25T19:46:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2279",
"html_url": "https://github.com/ollama/ollama/pull/2279",
"diff_url": "https://github.com/ollama/ollama/pull/2279.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2279.patch",
"merged_at": "2024-03-25T19:46:28"
} | Added libcudart.so support to gpu.go for CUDA devices that are missing libnvidia-ml.so. The CUDA libraries are split into nvml (libnvidia-ml.so) and cudart (libcudart.so); either can be used. Tested on a Jetson device and on Windows 11 in WSL2.
Devices used to test:
Jetson Orin Nano 8Gb
Jetpack 5.1.2, L4T 35.4.1
CUDA 11-... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2279/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2279/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4981 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4981/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4981/comments | https://api.github.com/repos/ollama/ollama/issues/4981/events | https://github.com/ollama/ollama/issues/4981 | 2,346,543,830 | I_kwDOJ0Z1Ps6L3WrW | 4,981 | Error Pulling any model - "Error: pull model manifest: 200: stream error: stream ID 3; NO_ERROR; received from peer" | {
"login": "ziptron",
"id": 17092430,
"node_id": "MDQ6VXNlcjE3MDkyNDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/17092430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziptron",
"html_url": "https://github.com/ziptron",
"followers_url": "https://api.github.com/users/ziptro... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-06-11T14:10:14 | 2024-09-24T15:57:56 | 2024-09-24T15:57:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm running Ollama on Windows Server. The setup seemed to work a few days ago; I was able to pull several models. Today I see an error that appears immediately after I try to download a model by typing `ollama run {model name}` in PowerShell:
"Error: pull model manifest: 200: stream error: st... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4981/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1669 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1669/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1669/comments | https://api.github.com/repos/ollama/ollama/issues/1669/events | https://github.com/ollama/ollama/issues/1669 | 2,053,674,871 | I_kwDOJ0Z1Ps56aJd3 | 1,669 | Feature Request: Add RSS feed to Blog | {
"login": "puresick",
"id": 2714266,
"node_id": "MDQ6VXNlcjI3MTQyNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2714266?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puresick",
"html_url": "https://github.com/puresick",
"followers_url": "https://api.github.com/users/pures... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 14 | 2023-12-22T09:57:23 | 2025-01-04T20:08:04 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi! I am not sure if this is the right place for feature requests for the [Blog](https://ollama.ai/blog), but I did not find any other place where this would have been applicable — if I am wrong here I am sorry!
Regarding the feature request: It would be great to have an RSS feed for the blog to keep up with updates ...
"url": "https://api.github.com/repos/ollama/ollama/issues/1669/reactions",
"total_count": 43,
"+1": 7,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 35,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1669/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2233 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2233/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2233/comments | https://api.github.com/repos/ollama/ollama/issues/2233/events | https://github.com/ollama/ollama/pull/2233 | 2,103,740,329 | PR_kwDOJ0Z1Ps5lPBCd | 2,233 | Support building from source with CUDA CC 3.5 and 3.7 support | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 13 | 2024-01-27T19:06:36 | 2024-11-20T23:09:29 | 2024-11-20T23:09:24 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2233",
"html_url": "https://github.com/ollama/ollama/pull/2233",
"diff_url": "https://github.com/ollama/ollama/pull/2233.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2233.patch",
"merged_at": null
} | They don't perform much better than CPU, but this adds support for these older cards for users who build locally.
Fixes #1756 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2233/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2233/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4958 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4958/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4958/comments | https://api.github.com/repos/ollama/ollama/issues/4958/events | https://github.com/ollama/ollama/issues/4958 | 2,342,504,398 | I_kwDOJ0Z1Ps6Ln8fO | 4,958 | Cuda 12 runner | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-06-09T21:43:20 | 2024-08-19T18:14:25 | 2024-08-19T18:14:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | A CUDA 12+ build of a runner is required for CUDA graphs to be enabled. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4958/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4958/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4984 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4984/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4984/comments | https://api.github.com/repos/ollama/ollama/issues/4984/events | https://github.com/ollama/ollama/issues/4984 | 2,347,076,913 | I_kwDOJ0Z1Ps6L5Y0x | 4,984 | Ollama not using GPU after OS Reboot | {
"login": "lukasmwerner",
"id": 55150634,
"node_id": "MDQ6VXNlcjU1MTUwNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/55150634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasmwerner",
"html_url": "https://github.com/lukasmwerner",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-06-11T18:51:30 | 2024-11-14T19:40:52 | 2024-06-13T20:26:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After installing Ollama from ollama.com it is able to use my GPU, but after rebooting it is no longer able to find the GPU, giving the message:
```
CUDA driver version: 12-5
time=2024-06-11T11:46:56.544-07:00 level=DEBUG source=gpu.go:148 msg="detected GPUs" library="C:\\Program Files (x86)\\N... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4984/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5506 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5506/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5506/comments | https://api.github.com/repos/ollama/ollama/issues/5506/events | https://github.com/ollama/ollama/pull/5506 | 2,393,201,999 | PR_kwDOJ0Z1Ps50keRP | 5,506 | Refine scheduler unit tests for reliability | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-05T22:31:26 | 2024-07-20T22:48:43 | 2024-07-20T22:48:40 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5506",
"html_url": "https://github.com/ollama/ollama/pull/5506",
"diff_url": "https://github.com/ollama/ollama/pull/5506.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5506.patch",
"merged_at": "2024-07-20T22:48:40"
} | This breaks up some of the test scenarios to create a more reliable set of tests, as well as adding a little more coverage. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5506/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4186 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4186/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4186/comments | https://api.github.com/repos/ollama/ollama/issues/4186/events | https://github.com/ollama/ollama/issues/4186 | 2,279,747,256 | I_kwDOJ0Z1Ps6H4i64 | 4,186 | Tokenize and Detokenize API For Token Count | {
"login": "sslx",
"id": 6382550,
"node_id": "MDQ6VXNlcjYzODI1NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6382550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sslx",
"html_url": "https://github.com/sslx",
"followers_url": "https://api.github.com/users/sslx/followers",
... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-05-05T21:46:21 | 2024-07-05T16:20:33 | 2024-06-04T22:42:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For RAG purposes, I'd love to find out the token count for text before feeding it to a model for a response.
Could you expose API endpoints for tokenize and detokenize from llama.cpp?
Thanks! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4186/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4186/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4782 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4782/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4782/comments | https://api.github.com/repos/ollama/ollama/issues/4782/events | https://github.com/ollama/ollama/pull/4782 | 2,329,642,569 | PR_kwDOJ0Z1Ps5xOIJj | 4,782 | Added messages confirming arm64 support (NEON and SVE) | {
"login": "bindatype",
"id": 6185719,
"node_id": "MDQ6VXNlcjYxODU3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6185719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bindatype",
"html_url": "https://github.com/bindatype",
"followers_url": "https://api.github.com/users/bi... | [] | open | false | null | [] | null | 0 | 2024-06-02T12:53:06 | 2024-06-02T13:12:10 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4782",
"html_url": "https://github.com/ollama/ollama/pull/4782",
"diff_url": "https://github.com/ollama/ollama/pull/4782.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4782.patch",
"merged_at": null
} | Added messages confirming arm64 support (NEON and SVE) to go along with AVX messages in gpu/cpu_common.go. Currently, only AVX is checked but that doesn't apply to arm64 builds and the default message `CPU does not have vector extensions` is displayed even if there is NEON or SVE support. This fix addresses that issue.... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4782/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2009 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2009/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2009/comments | https://api.github.com/repos/ollama/ollama/issues/2009/events | https://github.com/ollama/ollama/issues/2009 | 2,082,958,848 | I_kwDOJ0Z1Ps58J24A | 2,009 | Import pytorch adapter `.bin` files | {
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 2 | 2024-01-16T03:32:37 | 2024-07-10T19:38:06 | 2024-07-10T18:32:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Has anyone on here successfully created a fine-tuned mistral model with:
```
curl http://server.local:11434/api/create -d '{
"name": "test_mistral",
"modelfile": "FROM mistral\nADAPTER /home/robot/adapter_model.bin"
}'
```
Apparently .bin files aren't in PyTorch format, so it doesn't work. Does anyone actu... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2009/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2606 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2606/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2606/comments | https://api.github.com/repos/ollama/ollama/issues/2606/events | https://github.com/ollama/ollama/issues/2606 | 2,143,640,895 | I_kwDOJ0Z1Ps5_xV0_ | 2,606 | `Ollama run` Error | {
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/ia... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA... | closed | false | null | [] | null | 6 | 2024-02-20T06:35:05 | 2024-05-05T22:10:09 | 2024-05-05T22:10:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2606/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7521 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7521/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7521/comments | https://api.github.com/repos/ollama/ollama/issues/7521/events | https://github.com/ollama/ollama/pull/7521 | 2,637,088,862 | PR_kwDOJ0Z1Ps6BAghe | 7,521 | Add GoLamify in Libraries section | {
"login": "prasad89",
"id": 67261499,
"node_id": "MDQ6VXNlcjY3MjYxNDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/67261499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prasad89",
"html_url": "https://github.com/prasad89",
"followers_url": "https://api.github.com/users/pra... | [] | closed | false | null | [] | null | 1 | 2024-11-06T05:35:10 | 2024-11-11T06:38:19 | 2024-11-11T06:38:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7521",
"html_url": "https://github.com/ollama/ollama/pull/7521",
"diff_url": "https://github.com/ollama/ollama/pull/7521.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7521.patch",
"merged_at": "2024-11-11T06:38:19"
} | ### GoLamify Package
This PR adds [GoLamify](https://github.com/prasad89/golamify), a Go package designed to simplify integration of Go projects with Ollama.
The GoLamify package provides an easy and efficient way to connect Go applications with Ollama services, allowing for seamless interaction and enhanced... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7521/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7550 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7550/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7550/comments | https://api.github.com/repos/ollama/ollama/issues/7550/events | https://github.com/ollama/ollama/issues/7550 | 2,640,528,471 | I_kwDOJ0Z1Ps6dY0RX | 7,550 | ollama runner process has terminated: exit status 127 | {
"login": "SimpleYj",
"id": 38721053,
"node_id": "MDQ6VXNlcjM4NzIxMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/38721053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimpleYj",
"html_url": "https://github.com/SimpleYj",
"followers_url": "https://api.github.com/users/Sim... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-11-07T10:13:31 | 2024-11-07T21:55:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama reports this error when running any model. The ollama-linux-amd64.tgz file was used to upgrade directly to version 0.3.14.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.0 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7550/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6994 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6994/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6994/comments | https://api.github.com/repos/ollama/ollama/issues/6994/events | https://github.com/ollama/ollama/issues/6994 | 2,552,038,549 | I_kwDOJ0Z1Ps6YHQSV | 6,994 | Docker container cannot load model | {
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-27T05:35:54 | 2024-09-27T05:57:53 | 2024-09-27T05:57:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Whether using `ollama run` or curl, it is impossible to load the model into GPU memory.
The `docker logs ollama` output for starting and loading the model is as follows:
```bash
2024/09/27 05:29:20 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL:... | {
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6994/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2724 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2724/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2724/comments | https://api.github.com/repos/ollama/ollama/issues/2724/events | https://github.com/ollama/ollama/issues/2724 | 2,152,213,008 | I_kwDOJ0Z1Ps6ASCoQ | 2,724 | Error running GEMMA:7b on Ollama via Docker | {
"login": "wangshuai67",
"id": 13214849,
"node_id": "MDQ6VXNlcjEzMjE0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13214849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangshuai67",
"html_url": "https://github.com/wangshuai67",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 5 | 2024-02-24T08:38:01 | 2024-02-26T15:38:06 | 2024-02-26T15:38:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
Body:
**Description:**
I encountered an error while running GEMMA:7b on Ollama using Docker. Whenever I attempt to run the GEMMA:7b image, an error occurs.
**Steps to Reproduce:**
1. Deploy Ollama on Docker.
2. Run the GEMMA:7b image using the appropriate command.
3. See the error message that is displayed... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2724/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2724/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7725 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7725/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7725/comments | https://api.github.com/repos/ollama/ollama/issues/7725/events | https://github.com/ollama/ollama/issues/7725 | 2,668,264,963 | I_kwDOJ0Z1Ps6fCn4D | 7,725 | How to check the actual location where the model file is saved, and the directory queried by 'ollama list' | {
"login": "supersaiyan2019",
"id": 130198547,
"node_id": "U_kgDOB8KsEw",
"avatar_url": "https://avatars.githubusercontent.com/u/130198547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/supersaiyan2019",
"html_url": "https://github.com/supersaiyan2019",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677675697,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgU-sQ... | closed | false | null | [] | null | 2 | 2024-11-18T11:36:36 | 2024-11-18T12:35:16 | 2024-11-18T12:33:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I encountered an error while using the new model minicpm-v (#6751), and I still have this issue...
Since installing minicpm-v, my ollama version has always stayed at 0.3.6. My problem #6751 has never been solved. I have completely deleted ollama, restarted windows, and reinstalled ollama. As long as minicpm-... | {
"login": "supersaiyan2019",
"id": 130198547,
"node_id": "U_kgDOB8KsEw",
"avatar_url": "https://avatars.githubusercontent.com/u/130198547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/supersaiyan2019",
"html_url": "https://github.com/supersaiyan2019",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7725/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2179 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2179/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2179/comments | https://api.github.com/repos/ollama/ollama/issues/2179/events | https://github.com/ollama/ollama/pull/2179 | 2,099,212,292 | PR_kwDOJ0Z1Ps5lAMwY | 2,179 | add `--upgrade-all` flag to refresh any stale models | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | open | false | null | [] | null | 7 | 2024-01-24T22:22:22 | 2024-04-16T22:58:11 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2179",
"html_url": "https://github.com/ollama/ollama/pull/2179",
"diff_url": "https://github.com/ollama/ollama/pull/2179.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2179.patch",
"merged_at": null
} | This change allows you to run `ollama pull --upgrade-all`, which will check each of your local models and upgrade any that are out of date. It uses ETags to check whether there is a newer manifest, and then pulls the model if it has been updated.
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2179/reactions",
"total_count": 23,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 23,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2179/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4171 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4171/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4171/comments | https://api.github.com/repos/ollama/ollama/issues/4171/events | https://github.com/ollama/ollama/issues/4171 | 2,279,561,818 | I_kwDOJ0Z1Ps6H31pa | 4,171 | Inconsistent or unresponsive response in llama v0.1.33 using llava model | {
"login": "iwannabewater",
"id": 82285305,
"node_id": "MDQ6VXNlcjgyMjg1MzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/82285305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iwannabewater",
"html_url": "https://github.com/iwannabewater",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-05-05T14:33:16 | 2024-05-05T15:16:14 | 2024-05-05T15:16:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Environment:**
Operating System: Ubuntu 22.04
Hardware: NVIDIA RTX 4090 GPU and Intel Xeon Gold 6326 CPU
ollama Version: v0.1.33
Model Used: llava:34b-v1.6-q4_0
**Description:**
I am experiencing issues with the llava model in ollama v0.1.33, where it fails to respond appropriately t... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4171/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4171/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5022/comments | https://api.github.com/repos/ollama/ollama/issues/5022/events | https://github.com/ollama/ollama/issues/5022 | 2,351,247,042 | I_kwDOJ0Z1Ps6MJS7C | 5,022 | GPU VRAM estimate not accounting for flash attetion | {
"login": "theasp",
"id": 7775024,
"node_id": "MDQ6VXNlcjc3NzUwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7775024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theasp",
"html_url": "https://github.com/theasp",
"followers_url": "https://api.github.com/users/theasp/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-06-13T14:04:05 | 2024-10-18T09:42:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I'm using a q6_K quant of of codestral-22 with a 18k context and flash attention enabled. I'm trying to get a higher context configured, but I always have VRAM left. It appears that the estimate does not account for the use of flash attention as I still have 2882 GB left.
```
NAME ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5022/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5022/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1373 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1373/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1373/comments | https://api.github.com/repos/ollama/ollama/issues/1373/events | https://github.com/ollama/ollama/issues/1373 | 2,024,194,473 | I_kwDOJ0Z1Ps54psGp | 1,373 | Configuring/building from git cloned repo does not produce an ollama executable. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [] | closed | false | null | [] | null | 1 | 2023-12-04T15:59:25 | 2023-12-04T16:31:24 | 2023-12-04T16:31:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Following the instructions in README.md file
go generate ./...
go build .
do not seem to produce an ollama executable in the folder.
Am I missing something? How does one build it and then install it?
From "go build ." I get the stuff below. Nothing has been changed in the code.
../go/pkg/mod/git... | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1373/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/999 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/999/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/999/comments | https://api.github.com/repos/ollama/ollama/issues/999/events | https://github.com/ollama/ollama/pull/999 | 1,977,324,402 | PR_kwDOJ0Z1Ps5emHHy | 999 | add hass-ollama-conversation to community integrations | {
"login": "ej52",
"id": 6298706,
"node_id": "MDQ6VXNlcjYyOTg3MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6298706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ej52",
"html_url": "https://github.com/ej52",
"followers_url": "https://api.github.com/users/ej52/followers",
... | [] | closed | false | null | [] | null | 0 | 2023-11-04T13:01:46 | 2023-11-06T18:50:35 | 2023-11-06T18:50:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/999",
"html_url": "https://github.com/ollama/ollama/pull/999",
"diff_url": "https://github.com/ollama/ollama/pull/999.diff",
"patch_url": "https://github.com/ollama/ollama/pull/999.patch",
"merged_at": "2023-11-06T18:50:35"
} | Add the custom Home Assistant integration [hass-ollama-conversation](https://github.com/ej52/hass-ollama-conversation) | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/999/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7423 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7423/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7423/comments | https://api.github.com/repos/ollama/ollama/issues/7423/events | https://github.com/ollama/ollama/issues/7423 | 2,624,652,456 | I_kwDOJ0Z1Ps6ccQSo | 7,423 | "model requires more system memory" When Running in Docker Container and Making Continue Plugin Request from Inside Intellij | {
"login": "nathan-hook",
"id": 10638625,
"node_id": "MDQ6VXNlcjEwNjM4NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/10638625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathan-hook",
"html_url": "https://github.com/nathan-hook",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-10-30T16:26:08 | 2024-12-03T16:50:35 | 2024-12-02T14:47:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I hope that this is a PEBCAK issue and that there is a quick environment setting, but with my searching I couldn't find one.
## TL;DR
When using the [Continue Plugin](https://plugins.jetbrains.com/plugin/22707-continue) in IntelliJ and then configuring it to talk to my local Docker container... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7423/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2131 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2131/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2131/comments | https://api.github.com/repos/ollama/ollama/issues/2131/events | https://github.com/ollama/ollama/pull/2131 | 2,092,841,329 | PR_kwDOJ0Z1Ps5kqiMH | 2,131 | Probe GPUs before backend init | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-21T23:59:49 | 2024-01-22T00:13:51 | 2024-01-22T00:13:47 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2131",
"html_url": "https://github.com/ollama/ollama/pull/2131",
"diff_url": "https://github.com/ollama/ollama/pull/2131.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2131.patch",
"merged_at": "2024-01-22T00:13:47"
} | Detect potential error scenarios so we can fall back to CPU mode without hitting asserts.
This won't fix the underlying errors we're seeing in #1940 and #1877, but it should hopefully allow us to detect the non-working scenario and fall back to CPU. We still need to understand why `cudaGetDevice` is failing on these s... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2131/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1415 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1415/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1415/comments | https://api.github.com/repos/ollama/ollama/issues/1415/events | https://github.com/ollama/ollama/issues/1415 | 2,030,673,808 | I_kwDOJ0Z1Ps55CZ-Q | 1,415 | Override SYSTEM parameter by commandline | {
"login": "marco-trovato",
"id": 18162107,
"node_id": "MDQ6VXNlcjE4MTYyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18162107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marco-trovato",
"html_url": "https://github.com/marco-trovato",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-12-07T12:37:53 | 2023-12-15T19:06:25 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | According to the [documentation](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md), the only way to change the SYSTEM is to create a new model with modelfile using an existing LLM model already downloaded as template:
`ollama create choose-a-model-name -f <location of the file e.g. ./Modelfile>'`
B... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1415/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2045 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2045/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2045/comments | https://api.github.com/repos/ollama/ollama/issues/2045/events | https://github.com/ollama/ollama/pull/2045 | 2,087,605,927 | PR_kwDOJ0Z1Ps5kY3RI | 2,045 | docker-compose: added initial compose yaml | {
"login": "stevenbecht",
"id": 9442836,
"node_id": "MDQ6VXNlcjk0NDI4MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9442836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenbecht",
"html_url": "https://github.com/stevenbecht",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 2 | 2024-01-18T06:27:29 | 2024-02-21T00:34:03 | 2024-02-21T00:34:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2045",
"html_url": "https://github.com/ollama/ollama/pull/2045",
"diff_url": "https://github.com/ollama/ollama/pull/2045.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2045.patch",
"merged_at": null
} | Created an initial docker-compose.yaml based on jamesbraza:docker-compose (#1379). We can use bash sockets to test whether the server is listening. | {
"login": "stevenbecht",
"id": 9442836,
"node_id": "MDQ6VXNlcjk0NDI4MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9442836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenbecht",
"html_url": "https://github.com/stevenbecht",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2045/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6072 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6072/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6072/comments | https://api.github.com/repos/ollama/ollama/issues/6072/events | https://github.com/ollama/ollama/issues/6072 | 2,437,660,658 | I_kwDOJ0Z1Ps6RS7_y | 6,072 | Unable to get Ollama and OpenwebUI working at all | {
"login": "nicholhai",
"id": 96297412,
"node_id": "U_kgDOBb1hxA",
"avatar_url": "https://avatars.githubusercontent.com/u/96297412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicholhai",
"html_url": "https://github.com/nicholhai",
"followers_url": "https://api.github.com/users/nicholha... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-07-30T11:58:52 | 2024-10-09T18:25:19 | 2024-09-04T01:57:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello All,
Does anyone have instructions on getting Ollama and WebUI working on a tower computer with the following specs: Intel Core i7-13700F 2.1GHz, GeForce RTX 4060 Ti 64GB, 64GB DDR5. **I tried all of the following on Ubuntu Server 24.04 OS but can install any OS if necessary**
I have it ru... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6072/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5759 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5759/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5759/comments | https://api.github.com/repos/ollama/ollama/issues/5759/events | https://github.com/ollama/ollama/issues/5759 | 2,414,944,510 | I_kwDOJ0Z1Ps6P8SD- | 5,759 | service hang after some requests to /api/embeddings | {
"login": "JerryKwan",
"id": 990113,
"node_id": "MDQ6VXNlcjk5MDExMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/990113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerryKwan",
"html_url": "https://github.com/JerryKwan",
"followers_url": "https://api.github.com/users/Jerr... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-07-18T01:14:49 | 2024-10-24T03:03:34 | 2024-10-24T03:03:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The service appears to hang after some requests to /api/embeddings and needs a restart to recover.
Here are some logs:
```
[GIN] 2024/07/18 - 00:52:55 | 200 | 2.824880868s | 10.255.56.113 | POST "/api/embeddings"
time=2024-07-18T00:52:55.388Z level=INFO source=routes.go:298 msg="embed... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5759/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5759/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4675 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4675/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4675/comments | https://api.github.com/repos/ollama/ollama/issues/4675/events | https://github.com/ollama/ollama/issues/4675 | 2,320,328,981 | I_kwDOJ0Z1Ps6KTWkV | 4,675 | phi3: Error: llama runner process has terminated: exit status 0xc0000409 | {
"login": "FreemanFeng",
"id": 1662126,
"node_id": "MDQ6VXNlcjE2NjIxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1662126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreemanFeng",
"html_url": "https://github.com/FreemanFeng",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-05-28T07:25:09 | 2024-06-09T17:14:00 | 2024-06-09T17:14:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama run phi3:medium-128k
ollama run phi3:3.8-mini-128k-instruct-q4_0
Both models above cause the following error:
Error: llama runner process has terminated: exit status 0xc0000409
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.1.38 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4675/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4675/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5305 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5305/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5305/comments | https://api.github.com/repos/ollama/ollama/issues/5305/events | https://github.com/ollama/ollama/issues/5305 | 2,375,863,845 | I_kwDOJ0Z1Ps6NnM4l | 5,305 | Application should skip the CLI tool install page during first run if they have already been installed. (macOS) | {
"login": "seanchristians",
"id": 25487785,
"node_id": "MDQ6VXNlcjI1NDg3Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/25487785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanchristians",
"html_url": "https://github.com/seanchristians",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-06-26T17:04:01 | 2024-09-06T17:55:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm deploying Ollama for some of the users in my organization who do not have local administrator rights. I wrote a script to symlink the ollama executable to /usr/local/bin/ollama for the user during install.
However, when they start the app, it still asks them to install the command line to... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5305/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7022/comments | https://api.github.com/repos/ollama/ollama/issues/7022/events | https://github.com/ollama/ollama/issues/7022 | 2,554,344,085 | I_kwDOJ0Z1Ps6YQDKV | 7,022 | Can we have a native integrated gpu support ? | {
"login": "user7z",
"id": 161214583,
"node_id": "U_kgDOCZvwdw",
"avatar_url": "https://avatars.githubusercontent.com/u/161214583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/user7z",
"html_url": "https://github.com/user7z",
"followers_url": "https://api.github.com/users/user7z/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-09-28T15:43:18 | 2024-09-29T01:02:29 | 2024-09-28T22:42:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be great to have native ollama support for iGPUs, for laptop use; it will free the CPU threads for other tasks. The iGPU is that little device that we don't make use of; despite performance, one would have his CPU for other tasks. llm-cpp & oneapi is not the solution in my opinion, especially for igpu... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7022/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8438 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8438/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8438/comments | https://api.github.com/repos/ollama/ollama/issues/8438/events | https://github.com/ollama/ollama/pull/8438 | 2,789,135,889 | PR_kwDOJ0Z1Ps6Hz_Xm | 8,438 | docs: fixed path to examples | {
"login": "Gloryjaw",
"id": 108608120,
"node_id": "U_kgDOBnk6eA",
"avatar_url": "https://avatars.githubusercontent.com/u/108608120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gloryjaw",
"html_url": "https://github.com/Gloryjaw",
"followers_url": "https://api.github.com/users/Gloryjaw/... | [] | closed | false | null | [] | null | 0 | 2025-01-15T08:25:19 | 2025-01-15T19:49:12 | 2025-01-15T19:49:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8438",
"html_url": "https://github.com/ollama/ollama/pull/8438",
"diff_url": "https://github.com/ollama/ollama/pull/8438.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8438.patch",
"merged_at": "2025-01-15T19:49:12"
} | Fixed path from example folder (which doesn't exist) to examples.md | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8438/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7918 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7918/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7918/comments | https://api.github.com/repos/ollama/ollama/issues/7918/events | https://github.com/ollama/ollama/issues/7918 | 2,715,200,067 | I_kwDOJ0Z1Ps6h1qpD | 7,918 | Request to add semikong-8b to ollama | {
"login": "luoLojic",
"id": 153160666,
"node_id": "U_kgDOCSEL2g",
"avatar_url": "https://avatars.githubusercontent.com/u/153160666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luoLojic",
"html_url": "https://github.com/luoLojic",
"followers_url": "https://api.github.com/users/luoLojic/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-12-03T14:33:50 | 2024-12-14T15:39:10 | 2024-12-14T15:39:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would like to deploy the semikong-8b model locally. Semikong is a large model fine-tuned from Llama focused on the semiconductor domain. You can find the model on Hugging Face at this link: “[https://huggingface.co/pentagoniac/SEMIKONG-8b-GPTQ](https://huggingface.co/pentagoniac/SEMIKONG-8b-GPTQ) and the GitHub repos... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7918/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/5897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5897/comments | https://api.github.com/repos/ollama/ollama/issues/5897/events | https://github.com/ollama/ollama/issues/5897 | 2,426,329,835 | I_kwDOJ0Z1Ps6Qntrr | 5,897 | Error: llama3.1 runner process has terminated: signal: aborted | {
"login": "harnalashok",
"id": 47495816,
"node_id": "MDQ6VXNlcjQ3NDk1ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/47495816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harnalashok",
"html_url": "https://github.com/harnalashok",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-07-24T00:08:01 | 2024-07-24T19:59:25 | 2024-07-24T19:59:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have downloaded llama3.1:8b using ollama. I am getting the following error while running llama3.1; llama3 runs fine on the same system:
Error: llama runner process has terminated: signal: aborted
### OS
Windows 11 wsl2 Ubuntu
### GPU
GeForce RTX 4070
### CPU
_No respo... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5897/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1627 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1627/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1627/comments | https://api.github.com/repos/ollama/ollama/issues/1627/events | https://github.com/ollama/ollama/issues/1627 | 2,050,301,561 | I_kwDOJ0Z1Ps56NR55 | 1,627 | Can't run dolphin-mixtral, llama runner process has terminated | {
"login": "mbruhler",
"id": 21124163,
"node_id": "MDQ6VXNlcjIxMTI0MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/21124163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbruhler",
"html_url": "https://github.com/mbruhler",
"followers_url": "https://api.github.com/users/mbr... | [] | closed | false | null | [] | null | 4 | 2023-12-20T10:47:26 | 2024-01-08T21:42:04 | 2024-01-08T21:42:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I have trouble running dolphin-mixtral using ollama.
When I type `ollama run dolphin-mixtral`, the message "llama runner process has terminated" appears.
This is the log:
```
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: fr... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1627/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1629 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1629/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1629/comments | https://api.github.com/repos/ollama/ollama/issues/1629/events | https://github.com/ollama/ollama/issues/1629 | 2,050,479,541 | I_kwDOJ0Z1Ps56N9W1 | 1,629 | [Bug] Allocation problems when trying to use phi model | {
"login": "valentimarco",
"id": 26926690,
"node_id": "MDQ6VXNlcjI2OTI2Njkw",
"avatar_url": "https://avatars.githubusercontent.com/u/26926690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valentimarco",
"html_url": "https://github.com/valentimarco",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 5 | 2023-12-20T12:43:11 | 2024-01-12T05:56:55 | 2024-01-12T05:56:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I saw the new phi model on the registry and I wanted to try it on my little server. The specs are below:
- R5 2600
- ram 32 gb
- 128 gb ssd sata
- nvidia gtx 960 4GB (this is a special version from MSI)
- Ollama:latest docker version
I previously used ollama with llama2:7b, which was slow (ofc the vram was at the li... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1629/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4328 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4328/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4328/comments | https://api.github.com/repos/ollama/ollama/issues/4328/events | https://github.com/ollama/ollama/pull/4328 | 2,290,455,248 | PR_kwDOJ0Z1Ps5vIy6D | 4,328 | count memory up to NumGPU if set by user | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-10T21:51:32 | 2024-05-14T20:47:45 | 2024-05-14T20:47:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4328",
"html_url": "https://github.com/ollama/ollama/pull/4328",
"diff_url": "https://github.com/ollama/ollama/pull/4328.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4328.patch",
"merged_at": "2024-05-14T20:47:45"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4328/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6112 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6112/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6112/comments | https://api.github.com/repos/ollama/ollama/issues/6112/events | https://github.com/ollama/ollama/pull/6112 | 2,441,451,506 | PR_kwDOJ0Z1Ps53EQnT | 6,112 | Add Braina AI as an Ollama Desktop GUI | {
"login": "wallacelance",
"id": 177184683,
"node_id": "U_kgDOCo-fqw",
"avatar_url": "https://avatars.githubusercontent.com/u/177184683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wallacelance",
"html_url": "https://github.com/wallacelance",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-08-01T04:38:53 | 2024-09-06T02:39:19 | 2024-09-06T02:22:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6112",
"html_url": "https://github.com/ollama/ollama/pull/6112",
"diff_url": "https://github.com/ollama/ollama/pull/6112.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6112.patch",
"merged_at": null
} | ### Overview
[Braina](https://www.brainasoft.com/braina/) supports Ollama natively on Windows. It automatically synchronizes with Ollama model lists and lets users use advanced features such as Voice (both Speech to Text and Text to Speech), Web Search, File and Webpage attachments, Custom Prompts, etc.
### Sc... | {
"login": "wallacelance",
"id": 177184683,
"node_id": "U_kgDOCo-fqw",
"avatar_url": "https://avatars.githubusercontent.com/u/177184683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wallacelance",
"html_url": "https://github.com/wallacelance",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6112/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6112/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/144 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/144/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/144/comments | https://api.github.com/repos/ollama/ollama/issues/144/events | https://github.com/ollama/ollama/pull/144 | 1,814,557,715 | PR_kwDOJ0Z1Ps5WCFQ7 | 144 | remove unused code | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-07-20T18:18:26 | 2023-07-24T19:30:57 | 2023-07-20T18:57:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/144",
"html_url": "https://github.com/ollama/ollama/pull/144",
"diff_url": "https://github.com/ollama/ollama/pull/144.diff",
"patch_url": "https://github.com/ollama/ollama/pull/144.patch",
"merged_at": "2023-07-20T18:57:30"
} | cleaning up some unused code I noticed | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/144/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8424 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8424/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8424/comments | https://api.github.com/repos/ollama/ollama/issues/8424/events | https://github.com/ollama/ollama/issues/8424 | 2,787,551,152 | I_kwDOJ0Z1Ps6mJqew | 8,424 | requesting support new model: MiniCPM-o-2_6 | {
"login": "utopeadia",
"id": 98788152,
"node_id": "U_kgDOBeNjOA",
"avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utopeadia",
"html_url": "https://github.com/utopeadia",
"followers_url": "https://api.github.com/users/utopeadi... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 6 | 2025-01-14T15:47:24 | 2025-01-20T12:12:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Model URL: https://huggingface.co/openbmb/MiniCPM-o-2_6 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8424/reactions",
"total_count": 34,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 3,
"eyes": 4
} | https://api.github.com/repos/ollama/ollama/issues/8424/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8071 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8071/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8071/comments | https://api.github.com/repos/ollama/ollama/issues/8071/events | https://github.com/ollama/ollama/pull/8071 | 2,736,185,344 | PR_kwDOJ0Z1Ps6FCKJr | 8,071 | llama: parse JSON schema using nlohmann::ordered_json | {
"login": "iscy",
"id": 294710,
"node_id": "MDQ6VXNlcjI5NDcxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/294710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iscy",
"html_url": "https://github.com/iscy",
"followers_url": "https://api.github.com/users/iscy/followers",
... | [] | closed | false | null | [] | null | 1 | 2024-12-12T15:11:18 | 2024-12-12T17:57:29 | 2024-12-12T17:57:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8071",
"html_url": "https://github.com/ollama/ollama/pull/8071",
"diff_url": "https://github.com/ollama/ollama/pull/8071.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8071.patch",
"merged_at": "2024-12-12T17:57:29"
} | PR #8002 has handled the JSON within Go to ensure we could keep the schema as-is, without affecting the order of the properties. However, when parsed within the cpp wrapper, `nlohmann::json` was used instead of relying on `nlohmann::ordered_json`. This PR simply changes the parser for the ordered one in order to mainta... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8071/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4799 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4799/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4799/comments | https://api.github.com/repos/ollama/ollama/issues/4799/events | https://github.com/ollama/ollama/issues/4799 | 2,331,629,024 | I_kwDOJ0Z1Ps6K-dXg | 4,799 | ollama(commits: d4a8610) run deepseek-v2:16b Error: llama runner process has terminated: signal: aborted (core dumped) | {
"login": "zhqfdn",
"id": 25156863,
"node_id": "MDQ6VXNlcjI1MTU2ODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/25156863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhqfdn",
"html_url": "https://github.com/zhqfdn",
"followers_url": "https://api.github.com/users/zhqfdn/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 22 | 2024-06-03T16:54:31 | 2024-06-18T23:31:00 | 2024-06-18T23:31:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type f32: 108 tensors
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type q4_0: 268 tensors
Jun 04 00:46:12 localhost.localdomain ollama[114642]: llama_model_loader: - type q6_K: 1 te... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4799/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3568 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3568/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3568/comments | https://api.github.com/repos/ollama/ollama/issues/3568/events | https://github.com/ollama/ollama/issues/3568 | 2,234,544,997 | I_kwDOJ0Z1Ps6FMHNl | 3,568 | ollama crashed at 0.1.31 - CUDA out of memory | {
"login": "abnormalboy",
"id": 77949946,
"node_id": "MDQ6VXNlcjc3OTQ5OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/77949946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abnormalboy",
"html_url": "https://github.com/abnormalboy",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 6 | 2024-04-10T00:53:35 | 2024-05-05T00:26:21 | 2024-05-05T00:26:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use LangChain in Python, Ollama crashes. The model I use is "gemma:7b"; with "llama2:7b" Ollama works normally. Is my memory insufficient? My GPU has 8 GB of memory.
```python
from langchain.llms.ollama import Ollama
from langchain_core.prompts import ChatPromptTem... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3568/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3047 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3047/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3047/comments | https://api.github.com/repos/ollama/ollama/issues/3047/events | https://github.com/ollama/ollama/issues/3047 | 2,177,892,096 | I_kwDOJ0Z1Ps6Bz_8A | 3,047 | Ollama logging for ConnectionResetError | {
"login": "Bardo-Konrad",
"id": 1641761,
"node_id": "MDQ6VXNlcjE2NDE3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1641761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bardo-Konrad",
"html_url": "https://github.com/Bardo-Konrad",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 3 | 2024-03-10T20:30:30 | 2024-03-12T07:21:34 | 2024-03-12T07:21:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I access ollama using the python library.
It communicates well, but after some exchanges I always get the following error. It seems I need to reset Ollama via Python, or maybe the context length is exceeded. How do I figure this out?
```
Traceback (most recent call last):
File "c:\Lib\site-packages\urllib3\connection... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3047/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5337/comments | https://api.github.com/repos/ollama/ollama/issues/5337/events | https://github.com/ollama/ollama/issues/5337 | 2,378,812,185 | I_kwDOJ0Z1Ps6NycsZ | 5,337 | How can I set the parameter "num_return_sequences" to get multiple answers within one prompt? | {
"login": "superjessie",
"id": 29222783,
"node_id": "MDQ6VXNlcjI5MjIyNzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/29222783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superjessie",
"html_url": "https://github.com/superjessie",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-06-27T18:16:58 | 2024-06-29T13:46:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The "num_return_sequences" parameter exists in model.generate(), but I could not figure out how to set it when running LLMs with Ollama.
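Ollama does not expose a `num_return_sequences` option; a common workaround (an assumption here, not an official parameter) is to send the same prompt several times with different sampling seeds and collect the responses. The sketch below only builds the `/api/generate` request payloads; `model` and the prompt are placeholders.

```python
import json

def build_sampling_requests(prompt, n, model="llama3", temperature=0.8):
    """Build n /api/generate payloads that differ only in seed,
    emulating num_return_sequences by repeated sampling."""
    return [
        {
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": temperature, "seed": seed},
        }
        for seed in range(n)
    ]

requests_ = build_sampling_requests("Why is the sky blue?", 3)
print(len(requests_))                       # 3
print(json.dumps(requests_[0]["options"]))  # {"temperature": 0.8, "seed": 0}
```

Each payload would then be POSTed to `http://localhost:11434/api/generate`; with a nonzero temperature, distinct seeds generally yield distinct answers.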
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5337/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7862 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7862/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7862/comments | https://api.github.com/repos/ollama/ollama/issues/7862/events | https://github.com/ollama/ollama/issues/7862 | 2,698,619,827 | I_kwDOJ0Z1Ps6g2auz | 7,862 | no ssh key found | {
"login": "14919598",
"id": 185652779,
"node_id": "U_kgDOCxDWKw",
"avatar_url": "https://avatars.githubusercontent.com/u/185652779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/14919598",
"html_url": "https://github.com/14919598",
"followers_url": "https://api.github.com/users/14919598/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-11-27T13:48:07 | 2024-12-14T15:34:37 | 2024-12-14T15:34:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying to pull and run the tulu 8b model, but it says: pulling manifest
Error: pull model manifest: ssh: no key found. I don't know what's wrong.
 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7861/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7861/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7149 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7149/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7149/comments | https://api.github.com/repos/ollama/ollama/issues/7149/events | https://github.com/ollama/ollama/pull/7149 | 2,576,011,282 | PR_kwDOJ0Z1Ps5-Felo | 7,149 | Create ezaii.go | {
"login": "sahandmohammadrehzaii",
"id": 139042771,
"node_id": "U_kgDOCEmf0w",
"avatar_url": "https://avatars.githubusercontent.com/u/139042771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahandmohammadrehzaii",
"html_url": "https://github.com/sahandmohammadrehzaii",
"followers_url": ... | [] | closed | false | null | [] | null | 1 | 2024-10-09T13:52:13 | 2024-10-09T18:18:52 | 2024-10-09T18:18:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7149",
"html_url": "https://github.com/ollama/ollama/pull/7149",
"diff_url": "https://github.com/ollama/ollama/pull/7149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7149.patch",
"merged_at": null
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7149/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2961 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2961/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2961/comments | https://api.github.com/repos/ollama/ollama/issues/2961/events | https://github.com/ollama/ollama/pull/2961 | 2,172,443,038 | PR_kwDOJ0Z1Ps5o49sr | 2,961 | cmd: document environment variables for serve command | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-03-06T21:17:52 | 2024-03-06T21:48:47 | 2024-03-06T21:48:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2961",
"html_url": "https://github.com/ollama/ollama/pull/2961",
"diff_url": "https://github.com/ollama/ollama/pull/2961.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2961.patch",
"merged_at": "2024-03-06T21:48:46"
} | Updates #2944 | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2961/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2940 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2940/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2940/comments | https://api.github.com/repos/ollama/ollama/issues/2940/events | https://github.com/ollama/ollama/issues/2940 | 2,169,811,531 | I_kwDOJ0Z1Ps6BVLJL | 2,940 | OLLAMA_MODELS env variable in bashrc doesnt work | {
"login": "harsham05",
"id": 8755540,
"node_id": "MDQ6VXNlcjg3NTU1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8755540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harsham05",
"html_url": "https://github.com/harsham05",
"followers_url": "https://api.github.com/users/ha... | [] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 6 | 2024-03-05T17:47:27 | 2024-07-05T11:52:51 | 2024-03-12T01:22:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ive added the following to my .bashrc but Ollama doesnt seem to storing them there.
`export OLLAMA_MODELS=/path/to/models/` | {
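When Ollama runs as a systemd service, it never reads `~/.bashrc`, so the variable has to be set on the service unit instead. A sketch of the usual fix (the path is a placeholder; this assumes a Linux install with the bundled `ollama.service` unit):

```shell
# The systemd service does not inherit your shell environment;
# set OLLAMA_MODELS on the unit itself.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_MODELS=/path/to/models"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

After the restart, newly pulled models should land in the configured directory.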
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2940/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1387/comments | https://api.github.com/repos/ollama/ollama/issues/1387/events | https://github.com/ollama/ollama/issues/1387 | 2,025,182,162 | I_kwDOJ0Z1Ps54tdPS | 1,387 | ollama push {model} - 401 Couldn't Authorize | {
"login": "josiahbryan",
"id": 4821548,
"node_id": "MDQ6VXNlcjQ4MjE1NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4821548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josiahbryan",
"html_url": "https://github.com/josiahbryan",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 1 | 2023-12-05T03:02:41 | 2023-12-05T19:30:33 | 2023-12-05T19:30:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Even after adding the contents of ` ~/.ollama/id_ed25519.pub` to the SSH keys section of my Ollama `josiahbryan` account, I still got:
```
% ollama push josiahbryan/dragon-mistral-7b-v0-q4
retrieving manifest
Error: on pull registry responded with code 401: {"message":"Couldn't authorize"}
```
Suggestions? | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1387/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1387/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/910 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/910/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/910/comments | https://api.github.com/repos/ollama/ollama/issues/910/events | https://github.com/ollama/ollama/issues/910 | 1,962,673,276 | I_kwDOJ0Z1Ps50_AR8 | 910 | invalid URL escape | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 0 | 2023-10-26T04:18:34 | 2023-10-26T19:24:13 | 2023-10-26T19:24:13 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Need to escape:
```
OLLAMA_HOST=https://redacted.fly.dev/ ollama run llama2:13b
Error: parse "https://redacted.fly.dev%2F:11434": invalid URL escape "%2F"
``` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/910/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/910/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4327 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4327/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4327/comments | https://api.github.com/repos/ollama/ollama/issues/4327/events | https://github.com/ollama/ollama/pull/4327 | 2,290,454,200 | PR_kwDOJ0Z1Ps5vIyrx | 4,327 | Ollama `ps` command for showing currently loaded models | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-10T21:50:12 | 2024-05-14T00:17:37 | 2024-05-14T00:17:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4327",
"html_url": "https://github.com/ollama/ollama/pull/4327",
"diff_url": "https://github.com/ollama/ollama/pull/4327.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4327.patch",
"merged_at": "2024-05-14T00:17:36"
This change adds a rudimentary `ps` command which makes use of the new scheduler changes in the server.
The UX for this depends on whether you're using the CPU, GPU, or a hybrid of both and looks like:
```
NAME ID SIZE PROCESSOR UNTIL
mistral:latest 61e88e884507 ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4327/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4327/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6026 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6026/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6026/comments | https://api.github.com/repos/ollama/ollama/issues/6026/events | https://github.com/ollama/ollama/issues/6026 | 2,433,920,762 | I_kwDOJ0Z1Ps6REq76 | 6,026 | The 1k context limit in Open-WebUI request is causing low-quality responses. | {
"login": "anrgct",
"id": 16172523,
"node_id": "MDQ6VXNlcjE2MTcyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16172523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anrgct",
"html_url": "https://github.com/anrgct",
"followers_url": "https://api.github.com/users/anrgct/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2024-07-28T12:57:42 | 2024-08-10T15:38:43 | 2024-08-10T15:38:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using open-webui, I've noticed that long contextual messages sent to ollama consistently result in poor responses. After investigating the issue, it appears that the `/api/chat` and `/v1/chat/completions` endpoints are defaulting to a 1k context limit. This means that when the content excee... | {
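The context window can be raised per request rather than relying on the server-side default, by passing `num_ctx` in the `options` object of an `/api/chat` call. A minimal payload builder (model name and value are illustrative):

```python
def chat_request(model, messages, num_ctx=8192):
    """Build an Ollama /api/chat payload that overrides the default
    context window (num_ctx) for this request only."""
    return {
        "model": model,
        "messages": messages,
        "stream": False,
        "options": {"num_ctx": num_ctx},
    }

payload = chat_request("llama3", [{"role": "user", "content": "hi"}])
print(payload["options"])  # {'num_ctx': 8192}
```

Clients such as Open WebUI would need to include this option themselves; otherwise the server falls back to its default context length.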
"login": "anrgct",
"id": 16172523,
"node_id": "MDQ6VXNlcjE2MTcyNTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16172523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anrgct",
"html_url": "https://github.com/anrgct",
"followers_url": "https://api.github.com/users/anrgct/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6026/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6019 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6019/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6019/comments | https://api.github.com/repos/ollama/ollama/issues/6019/events | https://github.com/ollama/ollama/pull/6019 | 2,433,619,028 | PR_kwDOJ0Z1Ps52ptV- | 6,019 | Update README.md / Added my mobile app to the list | {
"login": "Calvicii",
"id": 80085756,
"node_id": "MDQ6VXNlcjgwMDg1NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Calvicii",
"html_url": "https://github.com/Calvicii",
"followers_url": "https://api.github.com/users/Cal... | [] | closed | false | null | [] | null | 0 | 2024-07-27T20:38:37 | 2024-07-27T20:39:34 | 2024-07-27T20:39:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6019",
"html_url": "https://github.com/ollama/ollama/pull/6019",
"diff_url": "https://github.com/ollama/ollama/pull/6019.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6019.patch",
"merged_at": null
} | I have this functional app that acts as a client for Ollama. | {
"login": "Calvicii",
"id": 80085756,
"node_id": "MDQ6VXNlcjgwMDg1NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/80085756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Calvicii",
"html_url": "https://github.com/Calvicii",
"followers_url": "https://api.github.com/users/Cal... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6019/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3882 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3882/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3882/comments | https://api.github.com/repos/ollama/ollama/issues/3882/events | https://github.com/ollama/ollama/pull/3882 | 2,261,738,601 | PR_kwDOJ0Z1Ps5toGtE | 3,882 | AMD gfx patch rev is hex | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-24T16:44:29 | 2024-04-24T18:07:52 | 2024-04-24T18:07:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3882",
"html_url": "https://github.com/ollama/ollama/pull/3882",
"diff_url": "https://github.com/ollama/ollama/pull/3882.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3882.patch",
"merged_at": "2024-04-24T18:07:49"
} | Correctly handle gfx90a discovery
Fixes #3809 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3882/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8003 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8003/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8003/comments | https://api.github.com/repos/ollama/ollama/issues/8003/events | https://github.com/ollama/ollama/issues/8003 | 2,725,718,623 | I_kwDOJ0Z1Ps6idypf | 8,003 | Allow for forcing an order of properties in structured JSON response | {
"login": "scd31",
"id": 57571338,
"node_id": "MDQ6VXNlcjU3NTcxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scd31",
"html_url": "https://github.com/scd31",
"followers_url": "https://api.github.com/users/scd31/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-12-09T01:28:04 | 2024-12-09T12:04:51 | 2024-12-09T12:04:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When having an LLM respond with JSON I often do something along the lines of `{reasoning: "...", actual_property_i_care_about: "..."}`. The idea is that the `reasoning` property isn't used on my end but gives the LLM the ability to think first, like with CoT. Of course, this requires the LLM to populate the `reasoning`... | {
"login": "scd31",
"id": 57571338,
"node_id": "MDQ6VXNlcjU3NTcxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/57571338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scd31",
"html_url": "https://github.com/scd31",
"followers_url": "https://api.github.com/users/scd31/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8003/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5949/comments | https://api.github.com/repos/ollama/ollama/issues/5949/events | https://github.com/ollama/ollama/issues/5949 | 2,429,779,374 | I_kwDOJ0Z1Ps6Q032u | 5,949 | Out of Memory Error when using Meta-Llama-3.1-8B-Instruct-Q8_0.gguf model with Ollama ROCm with num_ctx=120000 | {
"login": "renbuarl",
"id": 176577927,
"node_id": "U_kgDOCoZdhw",
"avatar_url": "https://avatars.githubusercontent.com/u/176577927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renbuarl",
"html_url": "https://github.com/renbuarl",
"followers_url": "https://api.github.com/users/renbuarl/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-07-25T11:58:25 | 2024-10-17T17:37:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
OS: Linux 6.5.0-44-generic #44~22.04.1-Ubuntu
GPU:
AMD Radeon RX 7900 XTX (24 GiB VRAM)
AMD Radeon RX 7900 XTX (24 GiB VRAM)
AMD Radeon RX 7900 XTX (24 GiB VRAM)
Ollama version: 0.2.8
ROCm module version: 6.7.0
amdgpu-install_6.1.60103-1_all.deb
Model: Meta-Llam... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5949/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5949/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1980 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1980/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1980/comments | https://api.github.com/repos/ollama/ollama/issues/1980/events | https://github.com/ollama/ollama/issues/1980 | 2,080,495,348 | I_kwDOJ0Z1Ps58Adb0 | 1,980 | Make update script skip execution if current version is latest (improvement) | {
"login": "atassis",
"id": 5769345,
"node_id": "MDQ6VXNlcjU3NjkzNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5769345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atassis",
"html_url": "https://github.com/atassis",
"followers_url": "https://api.github.com/users/atassis/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-01-13T22:19:56 | 2024-07-24T21:48:58 | 2024-07-24T21:48:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | You can put a `version` file in the root of download directory and check if current installed ollama has an identical version, for example. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1980/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3212/comments | https://api.github.com/repos/ollama/ollama/issues/3212/events | https://github.com/ollama/ollama/issues/3212 | 2,191,147,373 | I_kwDOJ0Z1Ps6CmkFt | 3,212 | ollama pull modelName Error | {
"login": "ZPLSSSTD",
"id": 21329959,
"node_id": "MDQ6VXNlcjIxMzI5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/21329959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZPLSSSTD",
"html_url": "https://github.com/ZPLSSSTD",
"followers_url": "https://api.github.com/users/ZPL... | [
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] | closed | false | null | [] | null | 2 | 2024-03-18T03:24:55 | 2024-03-28T20:52:24 | 2024-03-28T20:52:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I once accidentally installed Ollama: 7b successfully. Afterwards, I executed the command `ollama pull llama2`, but there was an error. The error message is as follows:
`
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=qKzQl7GvJl7HVA-mW_-3Ow&scope=repository%!A... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3212/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/3298 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3298/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3298/comments | https://api.github.com/repos/ollama/ollama/issues/3298/events | https://github.com/ollama/ollama/issues/3298 | 2,203,068,766 | I_kwDOJ0Z1Ps6DUCle | 3,298 | Vision with llava-1.6-7B is unusable via CLI | {
"login": "olafgeibig",
"id": 295644,
"node_id": "MDQ6VXNlcjI5NTY0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/295644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olafgeibig",
"html_url": "https://github.com/olafgeibig",
"followers_url": "https://api.github.com/users/o... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 21 | 2024-03-22T18:24:39 | 2024-10-11T19:45:57 | 2024-05-10T23:22:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The image recognition is very poor. It can't describe the picture properly, and it also can't extract text. It seems to process a heavily downscaled image, because it complains about the text being too small and makes assumptions about image elements that seem likely but aren't true. It ha...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3298/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3298/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8254 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8254/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8254/comments | https://api.github.com/repos/ollama/ollama/issues/8254/events | https://github.com/ollama/ollama/issues/8254 | 2,760,469,386 | I_kwDOJ0Z1Ps6kiWuK | 8,254 | ollama not use GPU: when using NVIDIA GPU, it detected amdgpu driver and then use CPU to compute | {
"login": "Roc136",
"id": 57868577,
"node_id": "MDQ6VXNlcjU3ODY4NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/57868577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roc136",
"html_url": "https://github.com/Roc136",
"followers_url": "https://api.github.com/users/Roc136/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-12-27T06:35:01 | 2024-12-28T12:10:22 | 2024-12-28T12:10:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm running ollama on a device with NVIDIA A100 80G GPU and Intel(R) Xeon(R) Gold 5320 CPU. I built Ollama using the command `make CUSTOM_CPU_FLAGS=""`, started it with `ollama serve`, and ran `ollama run llama2` to load the Llama2 model.
Problem:
Ollama is running on the CPU instead of the ... | {
"login": "Roc136",
"id": 57868577,
"node_id": "MDQ6VXNlcjU3ODY4NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/57868577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Roc136",
"html_url": "https://github.com/Roc136",
"followers_url": "https://api.github.com/users/Roc136/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8254/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4403 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4403/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4403/comments | https://api.github.com/repos/ollama/ollama/issues/4403/events | https://github.com/ollama/ollama/issues/4403 | 2,292,851,077 | I_kwDOJ0Z1Ps6IqiGF | 4,403 | Why is it that when running the same script (Qwen1.5/examples/web_demo.py) for Qwen1.5-32B-Chat-GPTQ-Int4 inference, a 4090 24G answers 5x faster than a V100 32G? Is this a GPU performance issue, or is some configuration in the code not enabled, leaving the V100's compute capability underutilized? | {
"login": "lbl1120",
"id": 152936427,
"node_id": "U_kgDOCR2f6w",
"avatar_url": "https://avatars.githubusercontent.com/u/152936427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbl1120",
"html_url": "https://github.com/lbl1120",
"followers_url": "https://api.github.com/users/lbl1120/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-13T13:34:08 | 2024-05-13T17:34:08 | 2024-05-13T17:34:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Why is it that when running the same script (Qwen1.5/examples/web_demo.py) for Qwen1.5-32B-Chat-GPTQ-Int4 inference, a 4090 24G answers 5x faster than a V100 32G? Is this a GPU performance issue, or is some configuration in the code not enabled, leaving the V100's compute capability underutilized?
![Uploading 屏幕截图 2024-05-13 212724.png…]()
| {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4403/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3724 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3724/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3724/comments | https://api.github.com/repos/ollama/ollama/issues/3724/events | https://github.com/ollama/ollama/pull/3724 | 2,249,756,328 | PR_kwDOJ0Z1Ps5tADl6 | 3,724 | types/model: accept former `:` as a separator in digest | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-04-18T05:07:15 | 2024-04-18T21:17:47 | 2024-04-18T21:17:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3724",
"html_url": "https://github.com/ollama/ollama/pull/3724",
"diff_url": "https://github.com/ollama/ollama/pull/3724.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3724.patch",
"merged_at": "2024-04-18T21:17:46"
} | This also converges the old sep `:` to the new sep `-`. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3724/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4504 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4504/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4504/comments | https://api.github.com/repos/ollama/ollama/issues/4504/events | https://github.com/ollama/ollama/issues/4504 | 2,303,534,980 | I_kwDOJ0Z1Ps6JTSeE | 4,504 | on https://www.ollama.com/library add sort filter by model strengths | {
"login": "arjunkrishna",
"id": 5271912,
"node_id": "MDQ6VXNlcjUyNzE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5271912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunkrishna",
"html_url": "https://github.com/arjunkrishna",
"followers_url": "https://api.github.com... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-05-17T20:29:10 | 2024-05-17T20:29:10 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
on https://www.ollama.com/library it would be great to have some additional categories where you can sort models by their strengths based on various benchmarks. That way novices like me can figure out which models are good at what right from the ollama webpage.
Thanks,
Arjun | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4504/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4504/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5192 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5192/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5192/comments | https://api.github.com/repos/ollama/ollama/issues/5192/events | https://github.com/ollama/ollama/pull/5192 | 2,364,863,406 | PR_kwDOJ0Z1Ps5zGKOF | 5,192 | handle asymmetric embedding KVs | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-06-20T16:47:12 | 2024-06-20T17:46:25 | 2024-06-20T17:46:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5192",
"html_url": "https://github.com/ollama/ollama/pull/5192",
"diff_url": "https://github.com/ollama/ollama/pull/5192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5192.patch",
"merged_at": "2024-06-20T17:46:24"
} | KV size assumed symmetric K and V embedding sizes, which isn't always the case, e.g. deepseek v2
smoke tested memory usage against llama2, llama3, gemma, phi3, qwen2, and deepseek v2 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5192/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5838 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5838/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5838/comments | https://api.github.com/repos/ollama/ollama/issues/5838/events | https://github.com/ollama/ollama/issues/5838 | 2,421,764,768 | I_kwDOJ0Z1Ps6QWTKg | 5,838 | ollama CORS check is case-sensitive | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | closed | false | null | [] | null | 0 | 2024-07-22T02:24:06 | 2024-12-10T21:43:23 | 2024-12-10T21:43:23 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama uses `github.com/gin-contrib/cors` to check the `Host` header against the allowed origins (`OLLAMA_ORIGINS`). If the value of the `Host` header is not all lowercase, the check fails.
```
$ curl -D - -s -H Host:localhost localhost:11434/api/version
HTTP/1.1 200 OK
Content-Type: application/json; char... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5838/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6695 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6695/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6695/comments | https://api.github.com/repos/ollama/ollama/issues/6695/events | https://github.com/ollama/ollama/issues/6695 | 2,512,185,141 | I_kwDOJ0Z1Ps6VvOc1 | 6,695 | Q6_K is slower than Q8_0 | {
"login": "napa3um",
"id": 665538,
"node_id": "MDQ6VXNlcjY2NTUzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/665538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/napa3um",
"html_url": "https://github.com/napa3um",
"followers_url": "https://api.github.com/users/napa3um/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-08T04:06:41 | 2024-12-02T22:00:37 | 2024-12-02T22:00:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
gemma2:9b-instruct-**q6_K** : gemma2:9b-instruct-**q8_0** = **21**t/s : **25**t/s
mistral-nemo:12b-instruct-2407-**q6_K** : mistral-nemo:12b-instruct-2407-**q8_0** = **17**t/s : **21**t/s
It used to be different.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.9 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6695/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/594 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/594/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/594/comments | https://api.github.com/repos/ollama/ollama/issues/594/events | https://github.com/ollama/ollama/pull/594 | 1,912,373,419 | PR_kwDOJ0Z1Ps5bK89D | 594 | exit on unknown distro | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-25T22:30:03 | 2023-09-25T22:30:59 | 2023-09-25T22:30:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/594",
"html_url": "https://github.com/ollama/ollama/pull/594",
"diff_url": "https://github.com/ollama/ollama/pull/594.diff",
"patch_url": "https://github.com/ollama/ollama/pull/594.patch",
"merged_at": "2023-09-25T22:30:58"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/594/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8616 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8616/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8616/comments | https://api.github.com/repos/ollama/ollama/issues/8616/events | https://github.com/ollama/ollama/issues/8616 | 2,813,973,892 | I_kwDOJ0Z1Ps6nudWE | 8,616 | Ollama: torch.OutOfMemoryError: CUDA out of memory | {
"login": "kennethwork101",
"id": 147571330,
"node_id": "U_kgDOCMvCgg",
"avatar_url": "https://avatars.githubusercontent.com/u/147571330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennethwork101",
"html_url": "https://github.com/kennethwork101",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-27T20:21:48 | 2025-01-27T20:21:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running some tests using pytest with the following 6 models. What I find is that if I run all tests with each model before going on to the next model, the tests mostly work fine: 123/126 passed. But if I run each test against all 6 models sequentially and then go to the next test, then I see han...
"url": "https://api.github.com/repos/ollama/ollama/issues/8616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8616/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4217 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4217/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4217/comments | https://api.github.com/repos/ollama/ollama/issues/4217/events | https://github.com/ollama/ollama/issues/4217 | 2,282,031,116 | I_kwDOJ0Z1Ps6IBQgM | 4,217 | how to load adapter | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-07T00:51:35 | 2024-05-10T03:23:03 | 2024-05-07T16:43:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
how to load adapter
The Modelfile is the following:
FROM ./sha256-b6f248eff2d0c4f85d2f6369a27d99fc75686d67314a0b5d35a93c5aee5dcb14
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4217/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4659 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4659/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4659/comments | https://api.github.com/repos/ollama/ollama/issues/4659/events | https://github.com/ollama/ollama/issues/4659 | 2,318,675,815 | I_kwDOJ0Z1Ps6KNC9n | 4,659 | no gpu detected with RTX 3060Ti | {
"login": "NoIDidntHackU",
"id": 112739711,
"node_id": "U_kgDOBrhFfw",
"avatar_url": "https://avatars.githubusercontent.com/u/112739711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoIDidntHackU",
"html_url": "https://github.com/NoIDidntHackU",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-05-27T09:20:48 | 2024-05-28T10:38:05 | 2024-05-28T10:37:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have an RTX 3060 Ti, and when I ran "curl -fsSL https://ollama.com/install.sh | sh" in Ubuntu on WSL (using WSL for WebUI) it installs fine, but at the end of the install it says this:
" >>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will ... | {
"login": "NoIDidntHackU",
"id": 112739711,
"node_id": "U_kgDOBrhFfw",
"avatar_url": "https://avatars.githubusercontent.com/u/112739711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoIDidntHackU",
"html_url": "https://github.com/NoIDidntHackU",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4659/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5893 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5893/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5893/comments | https://api.github.com/repos/ollama/ollama/issues/5893/events | https://github.com/ollama/ollama/pull/5893 | 2,426,203,697 | PR_kwDOJ0Z1Ps52RPTz | 5,893 | Fix Embed Test Flakes | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-07-23T22:05:50 | 2024-07-24T18:15:48 | 2024-07-24T18:15:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5893",
"html_url": "https://github.com/ollama/ollama/pull/5893",
"diff_url": "https://github.com/ollama/ollama/pull/5893.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5893.patch",
"merged_at": "2024-07-24T18:15:46"
} | different results on different taters
e.g.
=== RUN TestAllMiniLMEmbeddings
2024/07/23 17:05:36 INFO server connection host=tater21 port=55426
2024/07/23 17:05:36 INFO checking status of model model=all-minilm
2024/07/23 17:05:36 INFO model already present model=all-minilm
embed_test.go:42: expected 0.0664... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5893/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8029 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8029/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8029/comments | https://api.github.com/repos/ollama/ollama/issues/8029/events | https://github.com/ollama/ollama/pull/8029 | 2,730,798,743 | PR_kwDOJ0Z1Ps6Evhpy | 8,029 | Prevent model thrashing from unset num_ctx | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 0 | 2024-12-10T17:54:00 | 2025-01-03T05:26:28 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8029",
"html_url": "https://github.com/ollama/ollama/pull/8029",
"diff_url": "https://github.com/ollama/ollama/pull/8029.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8029.patch",
"merged_at": null
} | TLDR: a model shouldn't be evicted due to num_ctx change if the client doesn't care about context size.
Client A loads a model with a context window different to the default or the value configured in the Modelfile:
```console
$ curl localhost:11434/api/generate -d '{"model":"llama3.2","options":{"num_ctx":65536}}... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8029/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8029/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3553 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3553/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3553/comments | https://api.github.com/repos/ollama/ollama/issues/3553/events | https://github.com/ollama/ollama/issues/3553 | 2,233,054,415 | I_kwDOJ0Z1Ps6FGbTP | 3,553 | Embedding endpoint not available on windows. | {
"login": "elblogbruno",
"id": 10481058,
"node_id": "MDQ6VXNlcjEwNDgxMDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/10481058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elblogbruno",
"html_url": "https://github.com/elblogbruno",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-09T09:51:12 | 2024-08-15T16:37:10 | 2024-04-09T10:09:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I installed the latest Windows version of Ollama (v0.1.31) and I can't use the new embedding functionality.
For example, the URL http://localhost:11434/api/embeddings returns 404 Not Found.
The above exception was the direct cause of the following exception:
```
Tr... | {
"login": "elblogbruno",
"id": 10481058,
"node_id": "MDQ6VXNlcjEwNDgxMDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/10481058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elblogbruno",
"html_url": "https://github.com/elblogbruno",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3553/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7979 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7979/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7979/comments | https://api.github.com/repos/ollama/ollama/issues/7979/events | https://github.com/ollama/ollama/pull/7979 | 2,724,003,265 | PR_kwDOJ0Z1Ps6EYMhx | 7,979 | bugfix: "null" value for format | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-12-06T22:02:27 | 2024-12-11T06:07:51 | 2024-12-06T22:13:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7979",
"html_url": "https://github.com/ollama/ollama/pull/7979",
"diff_url": "https://github.com/ollama/ollama/pull/7979.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7979.patch",
"merged_at": "2024-12-06T22:13:16"
} | Fixes https://github.com/ollama/ollama/issues/7977 | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7979/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3235 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3235/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3235/comments | https://api.github.com/repos/ollama/ollama/issues/3235/events | https://github.com/ollama/ollama/issues/3235 | 2,194,117,935 | I_kwDOJ0Z1Ps6Cx5Uv | 3,235 | Cannot install on Fedora 39 Silverblue: error: Packages not found: ./ollama-linux-amd64 | {
"login": "jkemp814",
"id": 12059343,
"node_id": "MDQ6VXNlcjEyMDU5MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/12059343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkemp814",
"html_url": "https://github.com/jkemp814",
"followers_url": "https://api.github.com/users/jke... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2024-03-19T06:11:36 | 2024-12-05T20:38:14 | 2024-03-21T14:10:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Package from the releases page will not install with `rpm-ostree install ./ollama-linux-amd64`
Also, when using the install script (`curl -fsSL https://ollama.com/install.sh | sh`), it does not create the `ollama` folder under `/usr/share`.
When installing in a toolbox it cannot find the GPU.
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3235/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3235/timeline | null | completed | false |