url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/1437 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1437/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1437/comments | https://api.github.com/repos/ollama/ollama/issues/1437/events | https://github.com/ollama/ollama/issues/1437 | 2,032,991,819 | I_kwDOJ0Z1Ps55LP5L | 1,437 | Update Script and Documentation for non-systemd Linux systems | {
"login": "NikeshKhatiwada",
"id": 55629421,
"node_id": "MDQ6VXNlcjU1NjI5NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/55629421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeshKhatiwada",
"html_url": "https://github.com/NikeshKhatiwada",
"followers_url": "https://api... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2023-12-08T16:41:34 | 2024-03-12T16:36:53 | 2024-03-12T16:36:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried the default installation script on Alpine Linux (WSL), and though it was apparently installed, I couldn't use the ollama command. Also, the manual install guide needs alternative steps for non-systemd systems. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1437/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4466 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4466/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4466/comments | https://api.github.com/repos/ollama/ollama/issues/4466/events | https://github.com/ollama/ollama/issues/4466 | 2,299,176,383 | I_kwDOJ0Z1Ps6JCqW_ | 4,466 | Add new model error | {
"login": "momo8zero",
"id": 22167486,
"node_id": "MDQ6VXNlcjIyMTY3NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22167486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/momo8zero",
"html_url": "https://github.com/momo8zero",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": ... | closed | false | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.... | null | 2 | 2024-05-16T02:08:53 | 2024-07-25T22:56:32 | 2024-07-25T22:56:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I'm a developer.
I want to add a new model, but I get this message:
```
Error: Models based on 'LlamaForCausalLM' are not yet supported
```
Do you have any solutions? Thanks! | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4466/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3113 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3113/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3113/comments | https://api.github.com/repos/ollama/ollama/issues/3113/events | https://github.com/ollama/ollama/issues/3113 | 2,184,295,914 | I_kwDOJ0Z1Ps6CMbXq | 3,113 | Integrated Intel GPU support | {
"login": "clvgt12",
"id": 15834506,
"node_id": "MDQ6VXNlcjE1ODM0NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/15834506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clvgt12",
"html_url": "https://github.com/clvgt12",
"followers_url": "https://api.github.com/users/clvgt1... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677491450,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 19 | 2024-03-13T15:27:19 | 2024-12-08T09:07:17 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
Please consider adapting Ollama to use Intel Integrated Graphics Processors (such as the Intel Iris Xe Graphics cores) in the future.
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3113/reactions",
"total_count": 40,
"+1": 40,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3113/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2164 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2164/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2164/comments | https://api.github.com/repos/ollama/ollama/issues/2164/events | https://github.com/ollama/ollama/pull/2164 | 2,097,053,692 | PR_kwDOJ0Z1Ps5k4303 | 2,164 | Add LangChain4J | {
"login": "eddumelendez",
"id": 1810547,
"node_id": "MDQ6VXNlcjE4MTA1NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1810547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eddumelendez",
"html_url": "https://github.com/eddumelendez",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 1 | 2024-01-23T21:55:39 | 2024-02-20T03:20:45 | 2024-02-20T02:17:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2164",
"html_url": "https://github.com/ollama/ollama/pull/2164",
"diff_url": "https://github.com/ollama/ollama/pull/2164.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2164.patch",
"merged_at": "2024-02-20T02:17:32"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2164/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2164/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7278 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7278/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7278/comments | https://api.github.com/repos/ollama/ollama/issues/7278/events | https://github.com/ollama/ollama/issues/7278 | 2,600,617,156 | I_kwDOJ0Z1Ps6bAkTE | 7,278 | llama3.2:latest not running and giving Error: llama runner process no longer running: -1 | {
"login": "ishu121992",
"id": 11437477,
"node_id": "MDQ6VXNlcjExNDM3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11437477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishu121992",
"html_url": "https://github.com/ishu121992",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-10-20T16:14:09 | 2024-10-21T19:34:56 | 2024-10-21T19:34:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been using Ollama for a while and have never encountered this error while running any other LLMs (including llama3.1).
Below is a snapshot of the server log with the error:

Key issue seems to be r... | {
"login": "ishu121992",
"id": 11437477,
"node_id": "MDQ6VXNlcjExNDM3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11437477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishu121992",
"html_url": "https://github.com/ishu121992",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7278/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7762 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7762/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7762/comments | https://api.github.com/repos/ollama/ollama/issues/7762/events | https://github.com/ollama/ollama/issues/7762 | 2,676,164,130 | I_kwDOJ0Z1Ps6fgwYi | 7,762 | What happened with the recent update? | {
"login": "JTMarsh556",
"id": 163940208,
"node_id": "U_kgDOCcWHcA",
"avatar_url": "https://avatars.githubusercontent.com/u/163940208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JTMarsh556",
"html_url": "https://github.com/JTMarsh556",
"followers_url": "https://api.github.com/users/JTM... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 21 | 2024-11-20T14:57:51 | 2024-11-21T00:47:26 | 2024-11-20T20:49:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I just updated this morning and applications that worked flawlessly no longer work. It is like RAG was decimated. The LLMs are just providing generic garbage answers like they always do without RAG.
What happened and how can we fix it?
Until then, how can we revert?
### OS
... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7762/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7762/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/581 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/581/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/581/comments | https://api.github.com/repos/ollama/ollama/issues/581/events | https://github.com/ollama/ollama/issues/581 | 1,910,030,624 | I_kwDOJ0Z1Ps5x2MEg | 581 | How to use `num_predict`? | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 6 | 2023-09-23T23:20:10 | 2023-09-27T01:47:32 | 2023-09-27T01:47:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | From https://github.com/jmorganca/ollama/issues/318#issuecomment-1710181439, I see `num_predict` exists, and am trying to figure out how to use it.
Where are the docs on parameters like this?
More specifically, I am trying to figure out how to specify `num_predict` (and similar parameters) to the Ollama server pr... | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/581/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1805 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1805/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1805/comments | https://api.github.com/repos/ollama/ollama/issues/1805/events | https://github.com/ollama/ollama/issues/1805 | 2,067,289,116 | I_kwDOJ0Z1Ps57OFQc | 1,805 | which model to use for what's the root of 256256? | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | [] | closed | false | null | [] | null | 6 | 2024-01-05T12:40:07 | 2024-01-12T07:19:35 | 2024-01-12T07:17:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1805/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4111 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4111/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4111/comments | https://api.github.com/repos/ollama/ollama/issues/4111/events | https://github.com/ollama/ollama/pull/4111 | 2,276,709,145 | PR_kwDOJ0Z1Ps5ubC1w | 4,111 | Update README.md | {
"login": "bernardo-bruning",
"id": 4602873,
"node_id": "MDQ6VXNlcjQ2MDI4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4602873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bernardo-bruning",
"html_url": "https://github.com/bernardo-bruning",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | 0 | 2024-05-03T00:33:51 | 2024-05-05T21:45:32 | 2024-05-05T21:45:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4111",
"html_url": "https://github.com/ollama/ollama/pull/4111",
"diff_url": "https://github.com/ollama/ollama/pull/4111.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4111.patch",
"merged_at": "2024-05-05T21:45:32"
} | Includes a proxy plugin for ollama to work like github copilot. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4111/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6009 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6009/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6009/comments | https://api.github.com/repos/ollama/ollama/issues/6009/events | https://github.com/ollama/ollama/issues/6009 | 2,433,335,019 | I_kwDOJ0Z1Ps6RCb7r | 6,009 | when trying to download multiple models at same time it cancels automatically | {
"login": "hemangjoshi37a",
"id": 12392345,
"node_id": "MDQ6VXNlcjEyMzkyMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemangjoshi37a",
"html_url": "https://github.com/hemangjoshi37a",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-27T07:21:24 | 2024-07-27T07:58:34 | 2024-07-27T07:58:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When trying to download multiple models at the same time, the downloads are cancelled automatically.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
latest docker container | {
"login": "hemangjoshi37a",
"id": 12392345,
"node_id": "MDQ6VXNlcjEyMzkyMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemangjoshi37a",
"html_url": "https://github.com/hemangjoshi37a",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6009/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1113 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1113/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1113/comments | https://api.github.com/repos/ollama/ollama/issues/1113/events | https://github.com/ollama/ollama/issues/1113 | 1,991,251,754 | I_kwDOJ0Z1Ps52sBcq | 1,113 | I am trying to Create Model File But I am getting permission Denied Error. | {
"login": "Sridatta0808",
"id": 10744330,
"node_id": "MDQ6VXNlcjEwNzQ0MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/10744330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sridatta0808",
"html_url": "https://github.com/Sridatta0808",
"followers_url": "https://api.github.c... | [] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 2 | 2023-11-13T18:48:01 | 2023-11-16T00:41:15 | 2023-11-16T00:41:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Project Structure :
bin/
src/
models/
requirements.txt
Readme.md
Steps Followed:
$ nano Modelfile -> Inserted -> FROM ./models/mistral-7b-instruct-v0.1.Q3_K_M.gguf
$ ollama create example -f Modelfile
-> Returns the following error:
couldn't open modelfile '/home/sridatta/projects/basic_llm/langchain/Mod... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1113/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1287 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1287/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1287/comments | https://api.github.com/repos/ollama/ollama/issues/1287/events | https://github.com/ollama/ollama/pull/1287 | 2,012,839,670 | PR_kwDOJ0Z1Ps5geAMX | 1,287 | ignore jetbrain ides | {
"login": "rootedbox",
"id": 3997890,
"node_id": "MDQ6VXNlcjM5OTc4OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3997890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rootedbox",
"html_url": "https://github.com/rootedbox",
"followers_url": "https://api.github.com/users/ro... | [] | closed | false | null | [] | null | 0 | 2023-11-27T18:18:07 | 2023-11-27T20:57:45 | 2023-11-27T20:57:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1287",
"html_url": "https://github.com/ollama/ollama/pull/1287",
"diff_url": "https://github.com/ollama/ollama/pull/1287.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1287.patch",
"merged_at": "2023-11-27T20:57:45"
} | ignore jetbrain ides | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1287/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8213 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8213/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8213/comments | https://api.github.com/repos/ollama/ollama/issues/8213/events | https://github.com/ollama/ollama/issues/8213 | 2,755,058,207 | I_kwDOJ0Z1Ps6kNtof | 8,213 | do embedding request: Post \"http://127.0.0.1:57955/embedding\": read tcp 127.0.0.1:57957->127.0.0.1:57955: wsarecv: An existing connection was forcibly closed by the remote host. | {
"login": "conflictpeng",
"id": 75059708,
"node_id": "MDQ6VXNlcjc1MDU5NzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75059708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conflictpeng",
"html_url": "https://github.com/conflictpeng",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-12-23T02:14:09 | 2024-12-23T09:10:29 | 2024-12-23T09:10:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This doesn't seem to work when processing larger PDF files.
### OS
Windows
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.5.1 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8213/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/809 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/809/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/809/comments | https://api.github.com/repos/ollama/ollama/issues/809/events | https://github.com/ollama/ollama/pull/809 | 1,946,147,820 | PR_kwDOJ0Z1Ps5c8y8H | 809 | fix: regression unsupported metal types | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-16T21:40:27 | 2023-10-17T15:40:41 | 2023-10-17T15:40:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/809",
"html_url": "https://github.com/ollama/ollama/pull/809",
"diff_url": "https://github.com/ollama/ollama/pull/809.diff",
"patch_url": "https://github.com/ollama/ollama/pull/809.patch",
"merged_at": "2023-10-17T15:40:40"
} | Omitting `--n-gpu-layers` means using Metal on macOS, which isn't correct since Ollama uses `num_gpu=0` to explicitly disable the GPU for file types that are not implemented in Metal. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/809/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7/comments | https://api.github.com/repos/ollama/ollama/issues/7/events | https://github.com/ollama/ollama/pull/7 | 1,777,858,042 | PR_kwDOJ0Z1Ps5UFYG5 | 7 | add prompt templates as j2 templates | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-06-27T22:50:30 | 2023-06-28T16:27:28 | 2023-06-28T14:37:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7",
"html_url": "https://github.com/ollama/ollama/pull/7",
"diff_url": "https://github.com/ollama/ollama/pull/7.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7.patch",
"merged_at": "2023-06-28T14:37:03"
} | Easier to read and maintain, since diffs are much more obvious. This also provides a future opportunity for users to define their own prompt templates. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/86 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/86/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/86/comments | https://api.github.com/repos/ollama/ollama/issues/86/events | https://github.com/ollama/ollama/pull/86 | 1,808,245,303 | PR_kwDOJ0Z1Ps5Vsk0U | 86 | welcome screen improvements | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [] | closed | false | null | [] | null | 0 | 2023-07-17T17:30:06 | 2023-07-17T17:44:57 | 2023-07-17T17:44:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/86",
"html_url": "https://github.com/ollama/ollama/pull/86",
"diff_url": "https://github.com/ollama/ollama/pull/86.diff",
"patch_url": "https://github.com/ollama/ollama/pull/86.patch",
"merged_at": "2023-07-17T17:44:53"
} | - make window draggable
- improve copy command experience on the finish page | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/86/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/86/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6736 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6736/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6736/comments | https://api.github.com/repos/ollama/ollama/issues/6736/events | https://github.com/ollama/ollama/pull/6736 | 2,517,978,310 | PR_kwDOJ0Z1Ps57ESYG | 6,736 | Verify permissions for AMD GPU | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 3 | 2024-09-10T22:03:00 | 2024-10-23T16:49:46 | 2024-09-11T18:38:25 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6736",
"html_url": "https://github.com/ollama/ollama/pull/6736",
"diff_url": "https://github.com/ollama/ollama/pull/6736.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6736.patch",
"merged_at": "2024-09-11T18:38:25"
} | This adds back a check, lost many releases back, that verifies /dev/kfd permissions; when those permissions are lacking, it can lead to confusing failure modes such as:
"rocBLAS error: Could not initialize Tensile host: No devices found"
This implementation does not hard fail the serve command but instead will fall back to CPU with a... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6736/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6898 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6898/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6898/comments | https://api.github.com/repos/ollama/ollama/issues/6898/events | https://github.com/ollama/ollama/pull/6898 | 2,539,825,980 | PR_kwDOJ0Z1Ps58ORcE | 6,898 | CI: win arm adjustments | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-20T23:53:51 | 2024-09-20T23:58:58 | 2024-09-20T23:58:56 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6898",
"html_url": "https://github.com/ollama/ollama/pull/6898",
"diff_url": "https://github.com/ollama/ollama/pull/6898.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6898.patch",
"merged_at": "2024-09-20T23:58:56"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6898/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3812 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3812/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3812/comments | https://api.github.com/repos/ollama/ollama/issues/3812/events | https://github.com/ollama/ollama/issues/3812 | 2,255,670,278 | I_kwDOJ0Z1Ps6GcswG | 3,812 | Hope Ollama opens an interface directly after it starts | {
"login": "elarbor",
"id": 43592730,
"node_id": "MDQ6VXNlcjQzNTkyNzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/43592730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elarbor",
"html_url": "https://github.com/elarbor",
"followers_url": "https://api.github.com/users/elarbo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-04-22T06:14:15 | 2024-05-01T23:59:00 | 2024-05-01T23:59:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 页面只有“Ollama is running”,希望可以提供运行后可以直接打开webUI
<img width="470" alt="image" src="https://github.com/ollama/ollama/assets/43592730/96392aca-b563-4da8-b823-bf6b82641f51">
<img width="1920" alt="image" src="https://github.com/ollama/ollama/assets/43592730/5e117f86-e436-451f-a00f-a531373c25b3">
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3812/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3791 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3791/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3791/comments | https://api.github.com/repos/ollama/ollama/issues/3791/events | https://github.com/ollama/ollama/issues/3791 | 2,254,883,036 | I_kwDOJ0Z1Ps6GZsjc | 3,791 | Rename files with prefix "sha256:" to "sha256_" | {
"login": "ker2xu",
"id": 31959917,
"node_id": "MDQ6VXNlcjMxOTU5OTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31959917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ker2xu",
"html_url": "https://github.com/ker2xu",
"followers_url": "https://api.github.com/users/ker2xu/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-21T03:36:51 | 2024-04-21T13:24:57 | 2024-04-21T03:38:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Some network file systems do not handle ":" well and interpret the string before the ":" as a foreign host, leading to permission errors (due to incorrect, non-existent locations/hosts).
It is also not good to use a special symbol like ":" instead of "_", which is accepted by all OS and file sys...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3791/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7393 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7393/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7393/comments | https://api.github.com/repos/ollama/ollama/issues/7393/events | https://github.com/ollama/ollama/issues/7393 | 2,617,327,451 | I_kwDOJ0Z1Ps6cAT9b | 7,393 | EOF error on pull with different model | {
"login": "bdytx5",
"id": 32812705,
"node_id": "MDQ6VXNlcjMyODEyNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/32812705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bdytx5",
"html_url": "https://github.com/bdytx5",
"followers_url": "https://api.github.com/users/bdytx5/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-10-28T05:16:48 | 2024-11-05T22:21:46 | 2024-11-05T22:21:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
brett@brett:~$ ollama pull llama3.2
Error: registry.ollama.ai/library/phi3:latest: EOF
really confused. This is not an out-of-memory error. Tried resetting the systemctl stuff also ... https://github.com/ollama/ollama/issues/1859
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ol... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7393/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6330 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6330/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6330/comments | https://api.github.com/repos/ollama/ollama/issues/6330/events | https://github.com/ollama/ollama/issues/6330 | 2,462,388,075 | I_kwDOJ0Z1Ps6SxQ9r | 6,330 | Finetuned LLAMA 3.1 8B Instruct is giving random output | {
"login": "krisbianprabowo",
"id": 32126694,
"node_id": "MDQ6VXNlcjMyMTI2Njk0",
"avatar_url": "https://avatars.githubusercontent.com/u/32126694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krisbianprabowo",
"html_url": "https://github.com/krisbianprabowo",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-13T04:59:41 | 2024-08-13T07:21:39 | 2024-08-13T05:40:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, so I tried running my fine-tuned model, which is based on the Llama 3.1 8B Instruct model.
It looks like it's giving random output, as you can see below:
<img width="1371" alt="Screen Shot 2024-08-13 at 11 41 33" src="https://github.com/user-attachments/assets/b4757e3b-e18d-411f-ad20-de689... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6330/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6330/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3177 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3177/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3177/comments | https://api.github.com/repos/ollama/ollama/issues/3177/events | https://github.com/ollama/ollama/issues/3177 | 2,189,717,372 | I_kwDOJ0Z1Ps6ChG98 | 3,177 | GPU utilization & Context Length and Max Tokens & Command-line windows crash & Server connection failed | {
"login": "HWiwoiiii",
"id": 103039908,
"node_id": "U_kgDOBiRDpA",
"avatar_url": "https://avatars.githubusercontent.com/u/103039908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HWiwoiiii",
"html_url": "https://github.com/HWiwoiiii",
"followers_url": "https://api.github.com/users/HWiwoi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-03-16T02:40:48 | 2024-04-28T19:07:15 | 2024-04-28T19:07:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
1. How do I set up Ollama so that models use my GPU?
I'm using Windows with 32GB of DDR4 2667MHz memory (16GB + 16GB) and an NVIDIA GeForce RTX 2080 Super with Max-Q Design (8GB / Dell). Intel(R) UHD Graphics (1GB / Dell). However, Ollama doesn't seem to utilize my GPU even when I have both my...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3177/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7291 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7291/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7291/comments | https://api.github.com/repos/ollama/ollama/issues/7291/events | https://github.com/ollama/ollama/issues/7291 | 2,602,033,810 | I_kwDOJ0Z1Ps6bF-KS | 7,291 | ollama._types.ResponseError | {
"login": "1214summer",
"id": 116168346,
"node_id": "U_kgDOBuyWmg",
"avatar_url": "https://avatars.githubusercontent.com/u/116168346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1214summer",
"html_url": "https://github.com/1214summer",
"followers_url": "https://api.github.com/users/121... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706485628,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1ejfA... | closed | false | null | [] | null | 4 | 2024-10-21T10:06:16 | 2024-12-02T07:59:58 | 2024-12-02T07:59:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
~~ python test.py
import ollama
res=ollama.chat(model="qwen2.5:0.5b",stream=False,messages=[{"role": "user","content": "who are you"}],options={"temperature":0})
print(res)
Traceback (most recent call last):
File "xxxxx", line 2, in <module>
res=ollama.chat(model="qwen2.5:0.5b",str... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7291/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6095 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6095/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6095/comments | https://api.github.com/repos/ollama/ollama/issues/6095/events | https://github.com/ollama/ollama/issues/6095 | 2,439,738,630 | I_kwDOJ0Z1Ps6Ra3UG | 6,095 | Keeps switching between cached and wired memory | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2024-07-31T10:47:36 | 2024-08-11T23:57:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I offloaded 47 out of 127 layers of Llama 3.1 405b q2 on an M3 Max with 64GB of RAM.
When I run the inference, the memory usage shows only about 8GB, while the cached memory is 56GB. This state persists most of the time, likely indicating that the CPU is in use and data is streaming directly ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6095/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5032/comments | https://api.github.com/repos/ollama/ollama/issues/5032/events | https://github.com/ollama/ollama/pull/5032 | 2,351,980,783 | PR_kwDOJ0Z1Ps5yaJW- | 5,032 | Actually skip PhysX on windows | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-13T20:17:42 | 2024-06-19T00:59:10 | 2024-06-13T20:26:09 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5032",
"html_url": "https://github.com/ollama/ollama/pull/5032",
"diff_url": "https://github.com/ollama/ollama/pull/5032.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5032.patch",
"merged_at": "2024-06-13T20:26:09"
} | Fixes #4984 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5032/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7388 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7388/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7388/comments | https://api.github.com/repos/ollama/ollama/issues/7388/events | https://github.com/ollama/ollama/issues/7388 | 2,616,961,185 | I_kwDOJ0Z1Ps6b-6ih | 7,388 | Llama3.2-vision - fails to process png files | {
"login": "pitimespi",
"id": 534183,
"node_id": "MDQ6VXNlcjUzNDE4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/534183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pitimespi",
"html_url": "https://github.com/pitimespi",
"followers_url": "https://api.github.com/users/piti... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 21 | 2024-10-27T23:45:21 | 2024-10-29T04:26:24 | 2024-10-29T04:26:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Couldn't process image: "invalid image type: application/octet-stream"
Error: invalid image type: application/octet-stream
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
3.2-vision 0.4.0-rc5 | {
"login": "pitimespi",
"id": 534183,
"node_id": "MDQ6VXNlcjUzNDE4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/534183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pitimespi",
"html_url": "https://github.com/pitimespi",
"followers_url": "https://api.github.com/users/piti... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7388/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3339 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3339/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3339/comments | https://api.github.com/repos/ollama/ollama/issues/3339/events | https://github.com/ollama/ollama/issues/3339 | 2,205,400,860 | I_kwDOJ0Z1Ps6Dc78c | 3,339 | ollama tries to contact registry even when adding a local model | {
"login": "noahhaon",
"id": 170715,
"node_id": "MDQ6VXNlcjE3MDcxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/170715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahhaon",
"html_url": "https://github.com/noahhaon",
"followers_url": "https://api.github.com/users/noahhao... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-03-25T10:28:11 | 2024-03-25T11:47:11 | 2024-03-25T11:46:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a Modelfile which has a full path to the model I wish to load; however, I get an error when running `ollama create` regarding an invalid certificate from registry.ollama.ai (see #3336)
For stability and privacy reasons, it would be best if ollama does not try to connect to external reso... | {
"login": "noahhaon",
"id": 170715,
"node_id": "MDQ6VXNlcjE3MDcxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/170715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahhaon",
"html_url": "https://github.com/noahhaon",
"followers_url": "https://api.github.com/users/noahhao... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3339/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3860 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3860/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3860/comments | https://api.github.com/repos/ollama/ollama/issues/3860/events | https://github.com/ollama/ollama/issues/3860 | 2,260,184,173 | I_kwDOJ0Z1Ps6Gt6xt | 3,860 | Serial generation performance regression from v0.1.32 on main | {
"login": "brycereitano",
"id": 1928691,
"node_id": "MDQ6VXNlcjE5Mjg2OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brycereitano",
"html_url": "https://github.com/brycereitano",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-04-24T02:49:59 | 2024-04-28T18:27:35 | 2024-04-25T16:24:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
In an effort to test the latest code, which includes the recently merged concurrency branch (#3418), I noticed a performance regression when prompting a model already loaded in VRAM. This appears on the latest main (2ac3dd6853a45237ac049d0a4982becf91ca8c45) branch and I haven't been able to identify the...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3860/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3860/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2161 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2161/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2161/comments | https://api.github.com/repos/ollama/ollama/issues/2161/events | https://github.com/ollama/ollama/issues/2161 | 2,096,835,108 | I_kwDOJ0Z1Ps58-yok | 2,161 | Provide Docker images with pre-downloaded models | {
"login": "eddumelendez",
"id": 1810547,
"node_id": "MDQ6VXNlcjE4MTA1NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1810547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eddumelendez",
"html_url": "https://github.com/eddumelendez",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 3 | 2024-01-23T19:28:00 | 2024-10-31T12:53:26 | 2024-03-11T19:12:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Libraries and frameworks have been built around Ollama such as [LangChain4J](https://github.com/langchain4j/langchain4j) and pulling models are part of the process to make use of it.
Currently, in order to test the library integration there is a setup done using [Testcontainers](https://testcontainers.com/) to star... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2161/reactions",
"total_count": 10,
"+1": 8,
"-1": 2,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2161/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/6638 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6638/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6638/comments | https://api.github.com/repos/ollama/ollama/issues/6638/events | https://github.com/ollama/ollama/issues/6638 | 2,505,948,271 | I_kwDOJ0Z1Ps6VXbxv | 6,638 | Llama 3.1 8b not generating answers since past few days | {
"login": "ToshiKBhat",
"id": 97841687,
"node_id": "U_kgDOBdTyFw",
"avatar_url": "https://avatars.githubusercontent.com/u/97841687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToshiKBhat",
"html_url": "https://github.com/ToshiKBhat",
"followers_url": "https://api.github.com/users/Toshi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 9 | 2024-09-04T17:41:11 | 2024-11-17T09:33:35 | 2024-11-17T09:33:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The llama 3.1 8b model was generating answers in my RAG app until a few days ago. Now it says it cannot help with that, even when I use a simple system prompt: you are a helpful assistant; use the context provided to you to answer the user's questions.
The 70b model seems to work fine, I also no... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6638/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1226 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1226/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1226/comments | https://api.github.com/repos/ollama/ollama/issues/1226/events | https://github.com/ollama/ollama/issues/1226 | 2,004,922,004 | I_kwDOJ0Z1Ps53gK6U | 1,226 | Support for Intel neural-chat | {
"login": "erima2020",
"id": 63055709,
"node_id": "MDQ6VXNlcjYzMDU1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/63055709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erima2020",
"html_url": "https://github.com/erima2020",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2023-11-21T18:39:10 | 2023-11-21T20:06:12 | 2023-11-21T20:06:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
Thank you for the great software! I have found zephyr particularly good. I was wondering whether support for Intel neural-chat (e.g., 3.1) was planned.
Best wishes,
Eric | {
"login": "erima2020",
"id": 63055709,
"node_id": "MDQ6VXNlcjYzMDU1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/63055709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erima2020",
"html_url": "https://github.com/erima2020",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1226/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/215 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/215/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/215/comments | https://api.github.com/repos/ollama/ollama/issues/215/events | https://github.com/ollama/ollama/issues/215 | 1,821,331,068 | I_kwDOJ0Z1Ps5sj058 | 215 | Function calling | {
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.gith... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2023-07-25T23:41:25 | 2023-12-09T02:02:59 | 2023-12-04T18:53:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Trying to get structured/consistent responses out of LLMs can be pretty brutal
OpenAI recently rolled out [Function Calling](https://openai.com/blog/function-calling-and-other-api-updates) to get the models to stick to pre-defined schemas
it would be excellent if you could specify something like this (ins/outs) i... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/215/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/215/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/785 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/785/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/785/comments | https://api.github.com/repos/ollama/ollama/issues/785/events | https://github.com/ollama/ollama/pull/785 | 1,942,706,455 | PR_kwDOJ0Z1Ps5cxvmg | 785 | check update response | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-13T22:04:48 | 2023-10-13T22:05:47 | 2023-10-13T22:05:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/785",
"html_url": "https://github.com/ollama/ollama/pull/785",
"diff_url": "https://github.com/ollama/ollama/pull/785.diff",
"patch_url": "https://github.com/ollama/ollama/pull/785.patch",
"merged_at": "2023-10-13T22:05:46"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/785/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6207 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6207/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6207/comments | https://api.github.com/repos/ollama/ollama/issues/6207/events | https://github.com/ollama/ollama/pull/6207 | 2,451,401,133 | PR_kwDOJ0Z1Ps53mXV- | 6,207 | Ensure sparse files on windows during download | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-08-06T17:47:50 | 2024-08-06T18:06:09 | 2024-08-06T18:06:06 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6207",
"html_url": "https://github.com/ollama/ollama/pull/6207",
"diff_url": "https://github.com/ollama/ollama/pull/6207.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6207.patch",
"merged_at": "2024-08-06T18:06:06"
The file.Truncate call on Windows will write the whole file unless you set the sparse flag, leading to heavy I/O at the beginning of a download. This should improve our I/O behavior on Windows and put less stress on the user's disk.
Fixes #5852 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6207/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7305 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7305/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7305/comments | https://api.github.com/repos/ollama/ollama/issues/7305/events | https://github.com/ollama/ollama/pull/7305 | 2,603,736,770 | PR_kwDOJ0Z1Ps5_YRFV | 7,305 | Fix rocm windows build and clean up dependency gathering | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-10-21T21:48:58 | 2024-10-22T19:54:18 | 2024-10-22T19:54:16 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7305",
"html_url": "https://github.com/ollama/ollama/pull/7305",
"diff_url": "https://github.com/ollama/ollama/pull/7305.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7305.patch",
"merged_at": "2024-10-22T19:54:15"
On Windows, ensure the Windows version define is properly set for ROCm. Remove duplicate ROCm arch flags.
Resolve wildcards in the targets so parallel builds don't race. Use readlink to resolve ROCm dependencies, since wildcards omit libelf. Keep Windows ROCm deps aligned with the unified packaging model.
Fixes #7279
Fixes ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7305/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8471 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8471/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8471/comments | https://api.github.com/repos/ollama/ollama/issues/8471/events | https://github.com/ollama/ollama/issues/8471 | 2,795,871,839 | I_kwDOJ0Z1Ps6mpZ5f | 8,471 | command-7b:7b-12-2024-fp16 chat completion results in 500 error | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2025-01-17T16:36:14 | 2025-01-28T21:18:27 | 2025-01-28T21:18:27 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
GPU Nvidia RTX 4070 TI Super 16 GB
System RAM: 96 GB
When I issue a chat completion request for the model `command-7b:7b-12-2024-fp16` I get a 500 response error from Ollama under the following conditions.
500 response error when using
Context: 2048
Max Predict: 2048
The following is the Olla... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8471/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8038 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8038/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8038/comments | https://api.github.com/repos/ollama/ollama/issues/8038/events | https://github.com/ollama/ollama/issues/8038 | 2,731,808,477 | I_kwDOJ0Z1Ps6i1Bbd | 8,038 | undefined reference to `ggml_backend_cuda_reg' | {
"login": "regularRandom",
"id": 14252934,
"node_id": "MDQ6VXNlcjE0MjUyOTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/14252934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regularRandom",
"html_url": "https://github.com/regularRandom",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-12-11T04:45:17 | 2024-12-14T04:09:22 | 2024-12-14T04:09:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
0.5.2-rc0 build fails with the following error message:
> /usr/bin/ld: /tmp/go-link-3808825391/000013.o: in function `ggml_backend_registry::ggml_backend_registry()':
> /_/github.com/ollama/ollama/llama/ggml-backend-reg.cpp:164: undefined reference to `ggml_backend_cuda_reg'
> collect2: err... | {
"login": "regularRandom",
"id": 14252934,
"node_id": "MDQ6VXNlcjE0MjUyOTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/14252934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regularRandom",
"html_url": "https://github.com/regularRandom",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8038/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3233 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3233/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3233/comments | https://api.github.com/repos/ollama/ollama/issues/3233/events | https://github.com/ollama/ollama/issues/3233 | 2,193,952,020 | I_kwDOJ0Z1Ps6CxQ0U | 3,233 | Add FinTral model | {
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-03-19T03:50:56 | 2024-07-12T23:13:33 | 2024-07-12T23:13:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
pls add FinTral | {
"login": "tqangxl",
"id": 9669944,
"node_id": "MDQ6VXNlcjk2Njk5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9669944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqangxl",
"html_url": "https://github.com/tqangxl",
"followers_url": "https://api.github.com/users/tqangxl/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3233/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/573 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/573/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/573/comments | https://api.github.com/repos/ollama/ollama/issues/573/events | https://github.com/ollama/ollama/issues/573 | 1,909,134,882 | I_kwDOJ0Z1Ps5xyxYi | 573 | "Invalid file magic" with falcon models | {
"login": "vadim0x60",
"id": 3543310,
"node_id": "MDQ6VXNlcjM1NDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3543310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vadim0x60",
"html_url": "https://github.com/vadim0x60",
"followers_url": "https://api.github.com/users/va... | [] | closed | false | null | [] | null | 3 | 2023-09-22T15:39:55 | 2023-09-25T13:40:35 | 2023-09-25T13:40:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This happens every time I try to interact with a falcon model:
```
❯ ollama run falcon:40b
>>> hi
Error: invalid file magic
```
Hardware is Apple silicon with 96GB of RAM | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/573/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4394 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4394/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4394/comments | https://api.github.com/repos/ollama/ollama/issues/4394/events | https://github.com/ollama/ollama/issues/4394 | 2,292,273,782 | I_kwDOJ0Z1Ps6IoVJ2 | 4,394 | Modelfile containing "home" in its name breaks model execution | {
"login": "leon-rgb",
"id": 56979997,
"node_id": "MDQ6VXNlcjU2OTc5OTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/56979997?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leon-rgb",
"html_url": "https://github.com/leon-rgb",
"followers_url": "https://api.github.com/users/leo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-13T09:21:01 | 2024-05-14T02:04:17 | 2024-05-14T02:04:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### What have I done?
Created a model through `ollama create sh-llama -f ./home_modelfile`.
Got the usual output.
Trying to run the model also works `ollama run sh-llama`
But when given an input, no output is generated.
### Solution
Renaming the `home_modelfile` to anything that doesn't ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4394/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4877 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4877/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4877/comments | https://api.github.com/repos/ollama/ollama/issues/4877/events | https://github.com/ollama/ollama/pull/4877 | 2,338,945,967 | PR_kwDOJ0Z1Ps5xuAbh | 4,877 | Update README.md to add Shinkai Desktop | {
"login": "nicarq",
"id": 1622112,
"node_id": "MDQ6VXNlcjE2MjIxMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1622112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicarq",
"html_url": "https://github.com/nicarq",
"followers_url": "https://api.github.com/users/nicarq/foll... | [] | closed | false | null | [] | null | 1 | 2024-06-06T18:50:13 | 2024-11-21T08:16:19 | 2024-11-21T08:16:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4877",
"html_url": "https://github.com/ollama/ollama/pull/4877",
"diff_url": "https://github.com/ollama/ollama/pull/4877.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4877.patch",
"merged_at": "2024-11-21T08:16:18"
} | Adding Shinkai Desktop to the list of apps using Ollama. Open source, free, and a two-click install! No Docker required. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4877/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3575 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3575/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3575/comments | https://api.github.com/repos/ollama/ollama/issues/3575/events | https://github.com/ollama/ollama/issues/3575 | 2,235,514,582 | I_kwDOJ0Z1Ps6FPz7W | 3,575 | Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted. | {
"login": "Coder-Vishali",
"id": 60731083,
"node_id": "MDQ6VXNlcjYwNzMxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/60731083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Coder-Vishali",
"html_url": "https://github.com/Coder-Vishali",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 11 | 2024-04-10T12:46:02 | 2025-01-22T05:58:27 | 2024-04-11T10:49:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I execute ollama serve, I face the below issue:
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

**
... | {
"login": "Coder-Vishali",
"id": 60731083,
"node_id": "MDQ6VXNlcjYwNzMxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/60731083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Coder-Vishali",
"html_url": "https://github.com/Coder-Vishali",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3575/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5736 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5736/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5736/comments | https://api.github.com/repos/ollama/ollama/issues/5736/events | https://github.com/ollama/ollama/issues/5736 | 2,412,334,627 | I_kwDOJ0Z1Ps6PyU4j | 5,736 | bug: Open WebUI RAG Malfunction with Ollama Versions Post 0.2.1 | {
"login": "silentoplayz",
"id": 50341825,
"node_id": "MDQ6VXNlcjUwMzQxODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/50341825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silentoplayz",
"html_url": "https://github.com/silentoplayz",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 16 | 2024-07-17T01:02:28 | 2024-12-08T01:10:07 | 2024-07-28T02:51:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Summary:**
Retrieval-Augmented Generation (RAG) functionality within Open WebUI breaks when using Ollama versions later than 0.2.1 for local models. While external models (e.g., GroqCloud's LLama 3 8B) function correctly with RAG, local models fail to utilize the selected document, return... | {
"login": "silentoplayz",
"id": 50341825,
"node_id": "MDQ6VXNlcjUwMzQxODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/50341825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silentoplayz",
"html_url": "https://github.com/silentoplayz",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5736/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/5736/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6057 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6057/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6057/comments | https://api.github.com/repos/ollama/ollama/issues/6057/events | https://github.com/ollama/ollama/issues/6057 | 2,435,778,412 | I_kwDOJ0Z1Ps6RLwds | 6,057 | Ollama create from Model failed | {
"login": "rentianxiang",
"id": 45681984,
"node_id": "MDQ6VXNlcjQ1NjgxOTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/45681984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rentianxiang",
"html_url": "https://github.com/rentianxiang",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-29T15:50:17 | 2024-09-02T00:10:26 | 2024-09-02T00:10:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I downloaded the llama3.1:70b model directly from llama.meta.com, and I am trying to import it into Ollama.
It stopped at processing tensors.
I have tried multiple times today, all failed at this stage.
Did I do anything wrong?
This is kinda related to another issue I have raised, since I... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6057/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6342 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6342/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6342/comments | https://api.github.com/repos/ollama/ollama/issues/6342/events | https://github.com/ollama/ollama/issues/6342 | 2,463,951,443 | I_kwDOJ0Z1Ps6S3OpT | 6,342 | Windows Defender | {
"login": "Eniti-Codes",
"id": 106023124,
"node_id": "U_kgDOBlHI1A",
"avatar_url": "https://avatars.githubusercontent.com/u/106023124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eniti-Codes",
"html_url": "https://github.com/Eniti-Codes",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-08-13T18:21:41 | 2024-08-13T18:34:17 | 2024-08-13T18:34:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
So for some reason, with the newest update, Windows Defender thinks this is a Trojan.
<img width="210" alt="ApplicationFrameHost_ye04kXZxFA" src="https://github.com/user-attachments/assets/85525c54-a0ca-40e5-a764-f4227bf73ba3">
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama versi... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6342/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7623 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7623/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7623/comments | https://api.github.com/repos/ollama/ollama/issues/7623/events | https://github.com/ollama/ollama/issues/7623 | 2,650,674,272 | I_kwDOJ0Z1Ps6d_hRg | 7,623 | ollama 70B model on 10x32G vram rtx5000 - loading to 256G ram and cpu | {
"login": "paolss",
"id": 18089673,
"node_id": "MDQ6VXNlcjE4MDg5Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/18089673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paolss",
"html_url": "https://github.com/paolss",
"followers_url": "https://api.github.com/users/paolss/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-11-11T23:54:20 | 2024-12-13T11:42:30 | 2024-12-13T11:42:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As in the title: something changed for the worse in Ollama. I was trying to load a 70B model that was working before the update, and now it's not, all because it wants to load into RAM and use the CPU instead of the 10x32G RTX 5000s.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
_No response_ | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7623/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7623/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/126 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/126/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/126/comments | https://api.github.com/repos/ollama/ollama/issues/126/events | https://github.com/ollama/ollama/pull/126 | 1,812,177,085 | PR_kwDOJ0Z1Ps5V55P5 | 126 | add llama2:13b model to the readme | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-19T15:16:40 | 2023-07-19T15:21:29 | 2023-07-19T15:21:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/126",
"html_url": "https://github.com/ollama/ollama/pull/126",
"diff_url": "https://github.com/ollama/ollama/pull/126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/126.patch",
"merged_at": "2023-07-19T15:21:29"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/126/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3261 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3261/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3261/comments | https://api.github.com/repos/ollama/ollama/issues/3261/events | https://github.com/ollama/ollama/issues/3261 | 2,196,395,575 | I_kwDOJ0Z1Ps6C6lY3 | 3,261 | 404 while installing NVIDIA repository | {
"login": "gnumoksha",
"id": 696797,
"node_id": "MDQ6VXNlcjY5Njc5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/696797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gnumoksha",
"html_url": "https://github.com/gnumoksha",
"followers_url": "https://api.github.com/users/gnum... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | null | [] | null | 3 | 2024-03-20T00:57:56 | 2024-07-29T21:24:21 | 2024-07-29T21:24:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```console
$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=# ######################################################################## 100.0%
>>> Ins... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3261/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3261/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5145/comments | https://api.github.com/repos/ollama/ollama/issues/5145/events | https://github.com/ollama/ollama/pull/5145 | 2,362,708,795 | PR_kwDOJ0Z1Ps5y-05J | 5,145 | Fix bad symbol load detection | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-19T15:56:58 | 2024-06-19T16:12:35 | 2024-06-19T16:12:33 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5145",
"html_url": "https://github.com/ollama/ollama/pull/5145",
"diff_url": "https://github.com/ollama/ollama/pull/5145.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5145.patch",
"merged_at": "2024-06-19T16:12:33"
Pointer dereferences weren't correct in a few libraries, which explains some crashes on older systems or miswired symlinks for discovery libraries.
Fixes #4982 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5145/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1140 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1140/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1140/comments | https://api.github.com/repos/ollama/ollama/issues/1140/events | https://github.com/ollama/ollama/issues/1140 | 1,995,273,108 | I_kwDOJ0Z1Ps527XOU | 1,140 | Model push is not working | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/follow... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 18 | 2023-11-15T18:02:57 | 2024-12-12T01:58:22 | 2023-11-16T21:44:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried many times to push my models to Ollama, but I always get this error
```
retrieving manifest
Error: max retries exceeded
```
| {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1140/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1140/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2147 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2147/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2147/comments | https://api.github.com/repos/ollama/ollama/issues/2147/events | https://github.com/ollama/ollama/issues/2147 | 2,094,834,807 | I_kwDOJ0Z1Ps583KR3 | 2,147 | permission denied when setting OLLAMA_MODELS in service file | {
"login": "lasseedfast",
"id": 8794658,
"node_id": "MDQ6VXNlcjg3OTQ2NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8794658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lasseedfast",
"html_url": "https://github.com/lasseedfast",
"followers_url": "https://api.github.com/us... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 14 | 2024-01-22T21:59:56 | 2024-12-30T23:00:24 | 2024-03-12T18:45:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm trying to set the MODEL_FILE env variable in /etc/systemd/system/ollama.service.d, but the logs show that the service tries to create the directory:
```
Jan 22 21:25:41 airig systemd[1]: ollama.service: Scheduled restart job, restart counter is at 151.
Jan 22 21:25:41 airig systemd[1]: Stopped ollama.service - Oll... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2147/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2147/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/12 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/12/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/12/comments | https://api.github.com/repos/ollama/ollama/issues/12/events | https://github.com/ollama/ollama/pull/12 | 1,779,528,163 | PR_kwDOJ0Z1Ps5ULEGP | 12 | add prompt templates as j2 templates | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-06-28T18:51:07 | 2023-06-28T18:53:54 | 2023-06-28T18:53:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/12",
"html_url": "https://github.com/ollama/ollama/pull/12",
"diff_url": "https://github.com/ollama/ollama/pull/12.diff",
"patch_url": "https://github.com/ollama/ollama/pull/12.patch",
"merged_at": "2023-06-28T18:53:50"
} | #7 is missing from main | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/12/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/12/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5882 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5882/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5882/comments | https://api.github.com/repos/ollama/ollama/issues/5882/events | https://github.com/ollama/ollama/issues/5882 | 2,425,791,666 | I_kwDOJ0Z1Ps6QlqSy | 5,882 | Generate actionable error message when a model meets insufficient GPU memory or RAM | {
"login": "sagarrandive",
"id": 11855008,
"node_id": "MDQ6VXNlcjExODU1MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/11855008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagarrandive",
"html_url": "https://github.com/sagarrandive",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-07-23T17:57:26 | 2024-08-11T18:30:21 | 2024-08-11T18:30:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When a model is too large for the GPU or RAM of the underlying compute, it would be helpful if Ollama generated a message explicitly calling out that the model is too large for the available memory. Currently that is not the case. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5882/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5882/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8227 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8227/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8227/comments | https://api.github.com/repos/ollama/ollama/issues/8227/events | https://github.com/ollama/ollama/pull/8227 | 2,757,337,721 | PR_kwDOJ0Z1Ps6GJWri | 8,227 | README.md inclusion of a project alpaca | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [] | closed | false | null | [] | null | 0 | 2024-12-24T07:32:36 | 2024-12-25T04:05:36 | 2024-12-25T04:05:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8227",
"html_url": "https://github.com/ollama/ollama/pull/8227",
"diff_url": "https://github.com/ollama/ollama/pull/8227.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8227.patch",
"merged_at": "2024-12-25T04:05:36"
} | Alpaca: an Ollama client application for Linux and macOS made with GTK4 and Adwaita. https://github.com/ollama/ollama/issues/8220 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8227/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/380 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/380/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/380/comments | https://api.github.com/repos/ollama/ollama/issues/380/events | https://github.com/ollama/ollama/issues/380 | 1,857,036,496 | I_kwDOJ0Z1Ps5usCDQ | 380 | Increase Inference Throughput by Employing Parallelism | {
"login": "gusanmaz",
"id": 2552975,
"node_id": "MDQ6VXNlcjI1NTI5NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2552975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gusanmaz",
"html_url": "https://github.com/gusanmaz",
"followers_url": "https://api.github.com/users/gusan... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2023-08-18T17:13:46 | 2024-05-12T15:11:01 | 2023-12-22T03:33:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am running the llama2 model for inference on a Mac Mini M2 Pro using LangChain. According to System Monitor, the ollama process doesn't consume significant CPU, but around 95% GPU and around 3GB of memory. When I run two instances of almost the same code, inference speed decreases around 2-fold.
The code I am running looks like th... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/380/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2532 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2532/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2532/comments | https://api.github.com/repos/ollama/ollama/issues/2532/events | https://github.com/ollama/ollama/pull/2532 | 2,137,750,643 | PR_kwDOJ0Z1Ps5nCp-i | 2,532 | add gguf file types | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-02-16T02:12:04 | 2024-02-21T00:06:30 | 2024-02-21T00:06:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2532",
"html_url": "https://github.com/ollama/ollama/pull/2532",
"diff_url": "https://github.com/ollama/ollama/pull/2532.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2532.patch",
"merged_at": "2024-02-21T00:06:29"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2532/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1243 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1243/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1243/comments | https://api.github.com/repos/ollama/ollama/issues/1243/events | https://github.com/ollama/ollama/issues/1243 | 2,006,806,790 | I_kwDOJ0Z1Ps53nXEG | 1,243 | Set system prompt in `ollama run` | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396210,
"node_id": ... | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 5 | 2023-11-22T17:29:45 | 2023-12-07T05:25:44 | 2023-12-04T21:32:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | We can see the system prompt with /show system, but have no way to set it. It would be nice to be able to set it from the command line. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1243/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1243/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3633 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3633/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3633/comments | https://api.github.com/repos/ollama/ollama/issues/3633/events | https://github.com/ollama/ollama/pull/3633 | 2,241,764,299 | PR_kwDOJ0Z1Ps5sk88V | 3,633 | Update README.md with Discord-Ollama project | {
"login": "JT2M0L3Y",
"id": 67881240,
"node_id": "MDQ6VXNlcjY3ODgxMjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/67881240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JT2M0L3Y",
"html_url": "https://github.com/JT2M0L3Y",
"followers_url": "https://api.github.com/users/JT2... | [] | closed | false | null | [] | null | 0 | 2024-04-13T20:16:45 | 2024-04-23T00:14:20 | 2024-04-23T00:14:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3633",
"html_url": "https://github.com/ollama/ollama/pull/3633",
"diff_url": "https://github.com/ollama/ollama/pull/3633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3633.patch",
"merged_at": "2024-04-23T00:14:20"
} | A generalized TypeScript Discord bot that integrates the Ollama-js library so any model from Ollama can be built or pulled into use. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3633/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6935 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6935/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6935/comments | https://api.github.com/repos/ollama/ollama/issues/6935/events | https://github.com/ollama/ollama/issues/6935 | 2,545,440,166 | I_kwDOJ0Z1Ps6XuFWm | 6,935 | Typo in Linux uninstallation docs | {
"login": "jasondunsmore",
"id": 53437,
"node_id": "MDQ6VXNlcjUzNDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/53437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasondunsmore",
"html_url": "https://github.com/jasondunsmore",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw... | closed | false | null | [] | null | 2 | 2024-09-24T13:37:16 | 2024-10-24T03:48:34 | 2024-10-24T03:48:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
At the bottom of docs/linux.md, it should say `sudo rm -r /usr/lib/ollama/` instead of `sudo rm -r /usr/share/ollama`.
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6935/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/145/comments | https://api.github.com/repos/ollama/ollama/issues/145/events | https://github.com/ollama/ollama/pull/145 | 1,814,620,135 | PR_kwDOJ0Z1Ps5WCTDi | 145 | verify blob digest | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-20T18:54:21 | 2023-07-20T19:14:25 | 2023-07-20T19:14:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/145",
"html_url": "https://github.com/ollama/ollama/pull/145",
"diff_url": "https://github.com/ollama/ollama/pull/145.diff",
"patch_url": "https://github.com/ollama/ollama/pull/145.patch",
"merged_at": "2023-07-20T19:14:21"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/145/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6446 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6446/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6446/comments | https://api.github.com/repos/ollama/ollama/issues/6446/events | https://github.com/ollama/ollama/issues/6446 | 2,476,169,712 | I_kwDOJ0Z1Ps6Tl1nw | 6,446 | Model Library: Ability to update model manifest via editor | {
"login": "MaxJa4",
"id": 74194322,
"node_id": "MDQ6VXNlcjc0MTk0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/74194322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxJa4",
"html_url": "https://github.com/MaxJa4",
"followers_url": "https://api.github.com/users/MaxJa4/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-08-20T17:26:06 | 2024-08-20T17:26:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # TLDR
Direct updates of small, text-based files like parameters, template, license and system message **within the Ollama library** to avoid the time-consuming and bandwidth-heavy process of pulling, updating and pushing all tags of a model. This would simplify updates without affecting already existing possibiliti... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6446/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6446/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5119 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5119/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5119/comments | https://api.github.com/repos/ollama/ollama/issues/5119/events | https://github.com/ollama/ollama/pull/5119 | 2,360,455,078 | PR_kwDOJ0Z1Ps5y3Ch2 | 5,119 | Add a few missing server settings and sort the list | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-06-18T18:28:48 | 2024-07-29T21:26:38 | 2024-07-29T21:26:37 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5119",
"html_url": "https://github.com/ollama/ollama/pull/5119",
"diff_url": "https://github.com/ollama/ollama/pull/5119.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5119.patch",
"merged_at": null
} | Fixes #5093 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5119/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3308 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3308/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3308/comments | https://api.github.com/repos/ollama/ollama/issues/3308/events | https://github.com/ollama/ollama/pull/3308 | 2,203,833,834 | PR_kwDOJ0Z1Ps5qjzUJ | 3,308 | Bump llama.cpp to b2510 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-23T11:24:46 | 2024-03-25T19:56:18 | 2024-03-25T19:56:12 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3308",
"html_url": "https://github.com/ollama/ollama/pull/3308",
"diff_url": "https://github.com/ollama/ollama/pull/3308.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3308.patch",
"merged_at": "2024-03-25T19:56:12"
} | ~latest release, after the cuda refactoring | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3308/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4662 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4662/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4662/comments | https://api.github.com/repos/ollama/ollama/issues/4662/events | https://github.com/ollama/ollama/issues/4662 | 2,319,068,140 | I_kwDOJ0Z1Ps6KOivs | 4,662 | Can Ollama be ran with Nemo Guardrails | {
"login": "ShreyasChhetri",
"id": 143403865,
"node_id": "U_kgDOCIwrWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/143403865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShreyasChhetri",
"html_url": "https://github.com/ShreyasChhetri",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 3 | 2024-05-27T12:41:02 | 2024-10-23T03:07:29 | 2024-10-23T03:07:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I want to run ollama Mistral 7B model with Nemo Guardrails but when I'm using it in my config file models:
- type: main
engine: ollama
model: mistral
parameters:
base_url: http://127.0.0.1:11434
like this I'm not able to access it
Error while execution self_check_in... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4662/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1074 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1074/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1074/comments | https://api.github.com/repos/ollama/ollama/issues/1074/events | https://github.com/ollama/ollama/pull/1074 | 1,987,788,391 | PR_kwDOJ0Z1Ps5fJlqm | 1,074 | Log Analysis Example | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 1 | 2023-11-10T14:57:55 | 2023-11-17T00:33:08 | 2023-11-17T00:33:07 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1074",
"html_url": "https://github.com/ollama/ollama/pull/1074",
"diff_url": "https://github.com/ollama/ollama/pull/1074.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1074.patch",
"merged_at": "2023-11-17T00:33:07"
} | At kubecon and other events and on discord, we have been asked how to analyse logs using ollama. This is a simple example of one approach to this. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1074/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7190 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7190/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7190/comments | https://api.github.com/repos/ollama/ollama/issues/7190/events | https://github.com/ollama/ollama/issues/7190 | 2,583,782,889 | I_kwDOJ0Z1Ps6aAWXp | 7,190 | ollama was built for Mac OS X 12 instead of 11 | {
"login": "josergc",
"id": 7774952,
"node_id": "MDQ6VXNlcjc3NzQ5NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7774952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josergc",
"html_url": "https://github.com/josergc",
"followers_url": "https://api.github.com/users/josergc/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-10-13T08:36:08 | 2024-10-13T17:47:43 | 2024-10-13T17:47:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
In the website it is stated: Requires macOS 11 Big Sur or later.
I'm using version 11.7.10; it installs fine, but when it is launched through the command line, it requires Mac OS X 12.0
Cannot get version
```
% ollama --version
dyld: Symbol not found: __ZTTNSt3__114basic_ifstreamIcNS_11... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7190/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2297 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2297/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2297/comments | https://api.github.com/repos/ollama/ollama/issues/2297/events | https://github.com/ollama/ollama/issues/2297 | 2,111,239,234 | I_kwDOJ0Z1Ps591vRC | 2,297 | Wingman-AI, please add the new extension | {
"login": "RussellCanfield",
"id": 17344904,
"node_id": "MDQ6VXNlcjE3MzQ0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17344904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RussellCanfield",
"html_url": "https://github.com/RussellCanfield",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 3 | 2024-02-01T00:39:44 | 2024-02-01T19:17:24 | 2024-02-01T19:17:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, we’re huge fans of Ollama!
We built a VSCode extension around it called Wingman:
https://marketplace.visualstudio.com/items?itemName=WingMan.wing-man
Would we be able to add it to your readme? Do you accept PRs?
Thanks! | {
"login": "RussellCanfield",
"id": 17344904,
"node_id": "MDQ6VXNlcjE3MzQ0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17344904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RussellCanfield",
"html_url": "https://github.com/RussellCanfield",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2297/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3088 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3088/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3088/comments | https://api.github.com/repos/ollama/ollama/issues/3088/events | https://github.com/ollama/ollama/pull/3088 | 2,182,870,941 | PR_kwDOJ0Z1Ps5pcg1z | 3,088 | Fix iGPU detection for linux | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-13T00:01:25 | 2024-03-13T01:47:30 | 2024-03-13T00:20:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3088",
"html_url": "https://github.com/ollama/ollama/pull/3088",
"diff_url": "https://github.com/ollama/ollama/pull/3088.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3088.patch",
"merged_at": "2024-03-13T00:20:28"
} | This fixes a few bugs in the new sysfs discovery logic. iGPUs are now correctly identified by their <1G VRAM reported. the sysfs IDs are off by one compared to what HIP wants due to the CPU being reported in amdgpu, but HIP only cares about GPUs.
Tested on a Ryzen 9 7900X system with an RX 7900 XTX. The amdgpu ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3088/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/599 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/599/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/599/comments | https://api.github.com/repos/ollama/ollama/issues/599/events | https://github.com/ollama/ollama/pull/599 | 1,912,511,182 | PR_kwDOJ0Z1Ps5bLZ1F | 599 | update install.sh | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-26T01:10:06 | 2023-09-26T01:24:14 | 2023-09-26T01:24:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/599",
"html_url": "https://github.com/ollama/ollama/pull/599",
"diff_url": "https://github.com/ollama/ollama/pull/599.diff",
"patch_url": "https://github.com/ollama/ollama/pull/599.patch",
"merged_at": "2023-09-26T01:24:13"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/599/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3716 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3716/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3716/comments | https://api.github.com/repos/ollama/ollama/issues/3716/events | https://github.com/ollama/ollama/issues/3716 | 2,249,471,075 | I_kwDOJ0Z1Ps6GFDRj | 3,716 | I can't push a model | {
"login": "jonathanhecl",
"id": 1691623,
"node_id": "MDQ6VXNlcjE2OTE2MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1691623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanhecl",
"html_url": "https://github.com/jonathanhecl",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-17T23:58:23 | 2024-04-18T02:13:45 | 2024-04-18T00:07:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
> ollama push command-r-plus
retrieving manifest
pushing 503c8cac166f... 100% ▕████████████████████████████████████████████████████████▏ 59 GB
pushing f0624a2393a5... 100% ▕████████████████████████████████████████████████████████▏ 13 KB
pushing 42499e38acdf... 100% ▕██████████████████... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3716/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1136 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1136/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1136/comments | https://api.github.com/repos/ollama/ollama/issues/1136/events | https://github.com/ollama/ollama/issues/1136 | 1,994,569,908 | I_kwDOJ0Z1Ps524ri0 | 1,136 | please support neural-chat-7b-v3-1 | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/follow... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2023-11-15T11:12:17 | 2023-11-16T23:40:13 | 2023-11-16T23:40:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Definitely one the most amazing models I've ever seen [neural-chat-7b-v3-1 ](https://huggingface.co/Intel/neural-chat-7b-v3-1)
please include it to the models page.
I used it thorugh this command
```bash
ollama run fakezeta/neural-chat-7b-v3-1:Q5_K_M
``` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1136/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6851 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6851/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6851/comments | https://api.github.com/repos/ollama/ollama/issues/6851/events | https://github.com/ollama/ollama/issues/6851 | 2,532,501,475 | I_kwDOJ0Z1Ps6W8ufj | 6,851 | Data privacy question | {
"login": "deict",
"id": 112455517,
"node_id": "U_kgDOBrPvXQ",
"avatar_url": "https://avatars.githubusercontent.com/u/112455517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deict",
"html_url": "https://github.com/deict",
"followers_url": "https://api.github.com/users/deict/followers",
... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-09-18T01:38:54 | 2024-09-21T00:38:55 | 2024-09-21T00:38:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If I deploy Ollama locally on Windows and use the ollama3.1 model, will the questions I ask be received and stored by you? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6851/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1606 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1606/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1606/comments | https://api.github.com/repos/ollama/ollama/issues/1606/events | https://github.com/ollama/ollama/pull/1606 | 2,048,760,694 | PR_kwDOJ0Z1Ps5iYBNf | 1,606 | Added support for specifying an arbitrary GBNF compatible grammar | {
"login": "clevcode",
"id": 1842180,
"node_id": "MDQ6VXNlcjE4NDIxODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1842180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clevcode",
"html_url": "https://github.com/clevcode",
"followers_url": "https://api.github.com/users/clevc... | [] | closed | false | null | [] | null | 26 | 2023-12-19T14:21:53 | 2024-12-05T00:43:26 | 2024-12-05T00:43:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1606",
"html_url": "https://github.com/ollama/ollama/pull/1606",
"diff_url": "https://github.com/ollama/ollama/pull/1606.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1606.patch",
"merged_at": null
} | in the Modelfile, for models running on the llama.cpp backend
Note that this is basically just the same PR as the one submitted by SyrupThinker in September (#565), and that has been mentioned in issue #1507 and #808 since then.
There are plenty of users that would appreciate this feature, so I really hope that i... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1606/reactions",
"total_count": 71,
"+1": 40,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 21,
"rocket": 4,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/1606/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1782 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1782/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1782/comments | https://api.github.com/repos/ollama/ollama/issues/1782/events | https://github.com/ollama/ollama/issues/1782 | 2,065,339,126 | I_kwDOJ0Z1Ps57GpL2 | 1,782 | Model kept unloading no matter what | {
"login": "Opaatia",
"id": 118029983,
"node_id": "U_kgDOBwj-nw",
"avatar_url": "https://avatars.githubusercontent.com/u/118029983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Opaatia",
"html_url": "https://github.com/Opaatia",
"followers_url": "https://api.github.com/users/Opaatia/foll... | [] | closed | false | null | [] | null | 11 | 2024-01-04T09:54:19 | 2024-01-28T22:33:09 | 2024-01-28T22:33:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Greeting, I have modified the ollama/server/routes.go to set the following variable:
```go
var defaultSessionDuration = 1440 * time.Minute
```
However when running the ollama, it kept unloading the **exact same** model over and over for every single API invocation for /api/generate endpoint and this is visible ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1782/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1782/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5256 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5256/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5256/comments | https://api.github.com/repos/ollama/ollama/issues/5256/events | https://github.com/ollama/ollama/issues/5256 | 2,370,267,702 | I_kwDOJ0Z1Ps6NR2o2 | 5,256 | openai ChatCompletionRequest missing tools field? | {
"login": "bingo789",
"id": 40812718,
"node_id": "MDQ6VXNlcjQwODEyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40812718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bingo789",
"html_url": "https://github.com/bingo789",
"followers_url": "https://api.github.com/users/bin... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-24T13:31:21 | 2024-07-24T19:07:04 | 2024-07-24T19:07:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
when i use ChatOpenAI client in langchain, ollama server receive:
```
{
"messages": [{
"content": "what is the weather in Boston?",
"role": "user"
}],
"model": "llama3:8b-instruct-q4_0",
"n": 1,
"stream": false,
"temperature": 0.0,
"tools": [{
"type": "function",
"f... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5256/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5256/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4519 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4519/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4519/comments | https://api.github.com/repos/ollama/ollama/issues/4519/events | https://github.com/ollama/ollama/issues/4519 | 2,304,610,082 | I_kwDOJ0Z1Ps6JXY8i | 4,519 | ollama run codellama:34b issue | {
"login": "Iliceth",
"id": 68381834,
"node_id": "MDQ6VXNlcjY4MzgxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/68381834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iliceth",
"html_url": "https://github.com/Iliceth",
"followers_url": "https://api.github.com/users/Ilicet... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-05-19T13:19:02 | 2024-06-19T16:28:50 | 2024-06-19T16:28:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Every model I tested with ollama runs fine, but when trying: `ollama run codellama:34b`, I get `Error: llama runner process has terminated: signal: segmentation fault (core dumped)`. Tried the 13B version then, works fine.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.37 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4519/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4519/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8145/comments | https://api.github.com/repos/ollama/ollama/issues/8145/events | https://github.com/ollama/ollama/pull/8145 | 2,746,269,027 | PR_kwDOJ0Z1Ps6FkOyb | 8,145 | Embedding Normalization Options | {
"login": "gabe-l-hart",
"id": 1254484,
"node_id": "MDQ6VXNlcjEyNTQ0ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1254484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabe-l-hart",
"html_url": "https://github.com/gabe-l-hart",
"followers_url": "https://api.github.com/us... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-12-17T22:48:19 | 2024-12-23T15:39:38 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8145",
"html_url": "https://github.com/ollama/ollama/pull/8145",
"diff_url": "https://github.com/ollama/ollama/pull/8145.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8145.patch",
"merged_at": null
} | ## Description
This PR introduces the new API parameter `normalize` for the `/api/embed` and `/api/embeddings` endpoints that allow the user to explicitly enable/disable normalization. The default behavior of both endpoints is preserved (normalization for `embed`, no normalization for `embeddings`), but this allows ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8145/timeline | null | null | true |
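The PR above adds a `normalize` toggle for the embedding endpoints. For context, L2 normalization just rescales a vector to unit length, so that comparisons depend only on direction (dot products become cosine similarities). A minimal pure-Python sketch of the operation (an illustration only, not Ollama's implementation):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length (L2 norm = 1)."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        return list(vec)  # leave the all-zero vector untouched to avoid division by zero
    return [x / norm for x in vec]

raw = [3.0, 4.0]          # norm is 5.0
unit = l2_normalize(raw)
print(unit)               # [0.6, 0.8]
```

After normalization, the dot product of two such vectors equals their cosine similarity, which is why embedding endpoints often normalize by default.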
https://api.github.com/repos/ollama/ollama/issues/2721 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2721/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2721/comments | https://api.github.com/repos/ollama/ollama/issues/2721/events | https://github.com/ollama/ollama/issues/2721 | 2,152,084,864 | I_kwDOJ0Z1Ps6ARjWA | 2,721 | Add latest tag for docker image with ROCm support | {
"login": "robertvazan",
"id": 3514517,
"node_id": "MDQ6VXNlcjM1MTQ1MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3514517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robertvazan",
"html_url": "https://github.com/robertvazan",
"followers_url": "https://api.github.com/us... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-02-24T03:23:16 | 2024-02-27T19:29:09 | 2024-02-27T19:29:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Using docker image with ROCm support requires specifying version in the tag, e.g. 0.1.27-rocm. Please add rocm tag that always points to the latest version with ROCm, so that we can upgrade by running docker pull. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2721/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3954 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3954/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3954/comments | https://api.github.com/repos/ollama/ollama/issues/3954/events | https://github.com/ollama/ollama/pull/3954 | 2,266,407,759 | PR_kwDOJ0Z1Ps5t4F4E | 3,954 | Put back non-avx CPU build for windows | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-26T19:44:47 | 2024-04-26T20:09:24 | 2024-04-26T20:09:04 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3954",
"html_url": "https://github.com/ollama/ollama/pull/3954",
"diff_url": "https://github.com/ollama/ollama/pull/3954.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3954.patch",
"merged_at": "2024-04-26T20:09:04"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3954/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/96 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/96/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/96/comments | https://api.github.com/repos/ollama/ollama/issues/96/events | https://github.com/ollama/ollama/pull/96 | 1,808,836,936 | PR_kwDOJ0Z1Ps5Vukbl | 96 | add modelpaths | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-18T00:35:39 | 2023-07-18T05:44:21 | 2023-07-18T05:44:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/96",
"html_url": "https://github.com/ollama/ollama/pull/96",
"diff_url": "https://github.com/ollama/ollama/pull/96.diff",
"patch_url": "https://github.com/ollama/ollama/pull/96.patch",
"merged_at": "2023-07-18T05:44:21"
} | This change adds ModelPath{} which takes care of figuring out the various URL and file paths to a given model. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/96/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/96/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3533 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3533/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3533/comments | https://api.github.com/repos/ollama/ollama/issues/3533/events | https://github.com/ollama/ollama/issues/3533 | 2,230,380,627 | I_kwDOJ0Z1Ps6E8OhT | 3,533 | Suggestion: AnomalibGPT | {
"login": "monkeycc",
"id": 6490927,
"node_id": "MDQ6VXNlcjY0OTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6490927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monkeycc",
"html_url": "https://github.com/monkeycc",
"followers_url": "https://api.github.com/users/monke... | [] | open | false | null | [] | null | 0 | 2024-04-08T06:38:04 | 2024-04-19T15:41:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Discovering an interesting defect-detection large language model
https://github.com/CASIA-IVA-Lab/AnomalyGPT

It works fine until I try to create a model from a Modelfile using this command:
`ollama create 7b-32k-instruct -f ./Modelfile`
Here is my Modelfile:
```
FROM ./q4_0.bin
TEMPLATE "[IN... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/887/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8004 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8004/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8004/comments | https://api.github.com/repos/ollama/ollama/issues/8004/events | https://github.com/ollama/ollama/issues/8004 | 2,725,725,090 | I_kwDOJ0Z1Ps6id0Oi | 8,004 | QwQ 32B Preview: Q4_K_M better than Q8_0 at coding | {
"login": "leikareipa",
"id": 18671947,
"node_id": "MDQ6VXNlcjE4NjcxOTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/18671947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leikareipa",
"html_url": "https://github.com/leikareipa",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 21 | 2024-12-09T01:34:20 | 2024-12-29T20:43:10 | 2024-12-29T20:08:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It seems the Q4 version of QwQ Preview may consistently produce better coding responses than the Q8 version, even though I'd expect the opposite.
Both versions were downloaded via Ollama and tested varyingly with Ollama 0.5.1 and I think 0.4.7, can't remember at what point I updated. Contex... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8004/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1717 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1717/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1717/comments | https://api.github.com/repos/ollama/ollama/issues/1717/events | https://github.com/ollama/ollama/issues/1717 | 2,056,045,538 | I_kwDOJ0Z1Ps56jMPi | 1,717 | [Feature request] update models from CLI | {
"login": "ThatOneCalculator",
"id": 44733677,
"node_id": "MDQ6VXNlcjQ0NzMzNjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/44733677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThatOneCalculator",
"html_url": "https://github.com/ThatOneCalculator",
"followers_url": "https... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2023-12-26T05:31:16 | 2024-01-25T22:46:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When an update is available to an already installed model, something like `ollama pull` (without an argument) or `ollama update` would be great! | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1717/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8557 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8557/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8557/comments | https://api.github.com/repos/ollama/ollama/issues/8557/events | https://github.com/ollama/ollama/issues/8557 | 2,808,386,818 | I_kwDOJ0Z1Ps6nZJUC | 8,557 | Please separate deepseek-r1 from deepseek-r1-Distill! | {
"login": "win10ogod",
"id": 125795763,
"node_id": "U_kgDOB399sw",
"avatar_url": "https://avatars.githubusercontent.com/u/125795763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win10ogod",
"html_url": "https://github.com/win10ogod",
"followers_url": "https://api.github.com/users/win10o... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2025-01-24T03:04:57 | 2025-01-28T15:27:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please separate deepseek-r1 from deepseek-r1-Distill!
This is not the same model and the architecture is different!
The naming on the ollama official website completely obfuscates this!
"url": "https://api.github.com/repos/ollama/ollama/issues/8557/reactions",
"total_count": 15,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8557/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5799 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5799/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5799/comments | https://api.github.com/repos/ollama/ollama/issues/5799/events | https://github.com/ollama/ollama/pull/5799 | 2,419,631,795 | PR_kwDOJ0Z1Ps517xJn | 5,799 | README: Added LLMStack to the list of UI integrations | {
"login": "ajhai",
"id": 431988,
"node_id": "MDQ6VXNlcjQzMTk4OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajhai",
"html_url": "https://github.com/ajhai",
"followers_url": "https://api.github.com/users/ajhai/followers"... | [] | closed | false | null | [] | null | 0 | 2024-07-19T18:52:41 | 2024-07-23T18:50:32 | 2024-07-23T18:40:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5799",
"html_url": "https://github.com/ollama/ollama/pull/5799",
"diff_url": "https://github.com/ollama/ollama/pull/5799.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5799.patch",
"merged_at": "2024-07-23T18:40:23"
} | Also wrote a quick guide to show to how to use `ollama` with `LLMStack` at https://docs.trypromptly.com/guides/using-llama3-with-ollama. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5799/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5799/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4538 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4538/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4538/comments | https://api.github.com/repos/ollama/ollama/issues/4538/events | https://github.com/ollama/ollama/issues/4538 | 2,306,057,386 | I_kwDOJ0Z1Ps6Jc6Sq | 4,538 | Error: no safetensors or torch files found | {
"login": "SreeHaran",
"id": 62993067,
"node_id": "MDQ6VXNlcjYyOTkzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/62993067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SreeHaran",
"html_url": "https://github.com/SreeHaran",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 2 | 2024-05-20T13:54:22 | 2024-06-04T13:43:46 | 2024-06-04T13:43:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to create a model using a Modelfile. My Modelfile looks the same as the example given
```
FROM llama3
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer ... | {
"login": "SreeHaran",
"id": 62993067,
"node_id": "MDQ6VXNlcjYyOTkzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/62993067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SreeHaran",
"html_url": "https://github.com/SreeHaran",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4538/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2765 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2765/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2765/comments | https://api.github.com/repos/ollama/ollama/issues/2765/events | https://github.com/ollama/ollama/pull/2765 | 2,154,407,092 | PR_kwDOJ0Z1Ps5n7YWk | 2,765 | Added BoltAI as a desktop UI for Ollama | {
"login": "longseespace",
"id": 187720,
"node_id": "MDQ6VXNlcjE4NzcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/187720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longseespace",
"html_url": "https://github.com/longseespace",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 5 | 2024-02-26T15:06:55 | 2024-07-31T12:23:12 | 2024-07-31T12:23:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2765",
"html_url": "https://github.com/ollama/ollama/pull/2765",
"diff_url": "https://github.com/ollama/ollama/pull/2765.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2765.patch",
"merged_at": null
} | ### Overview
BoltAI supports Ollama natively. It automatically synchronizes with the Ollama model list, and allows users to use advanced features such as AI Command and AI Inline.
### Screenshots
**Main Chat UI**
`ollama run llama3` runs instruction-finetuned model | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 5 | 2024-07-24T16:04:46 | 2024-07-29T08:21:03 | 2024-07-29T08:21:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I just have noticed that `ollama run llama3` runs ***Llama 3 8B Instruct*** (the instruction-finetuned variant) instead of ***Llama 3 8B***:
https://ollama.com/library/llama3/blobs/6a0746a1ec1a
These are different models:
**Llama 3 8B**: https://huggingface.co/meta-llama/Meta-Llama-3-8B
**Llama 3 8B Instruct**: ... | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5919/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8396 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8396/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8396/comments | https://api.github.com/repos/ollama/ollama/issues/8396/events | https://github.com/ollama/ollama/issues/8396 | 2,782,462,306 | I_kwDOJ0Z1Ps6l2QFi | 8,396 | Error: could not connect to ollama app, is it running? | {
"login": "Eyion",
"id": 26318038,
"node_id": "MDQ6VXNlcjI2MzE4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/26318038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eyion",
"html_url": "https://github.com/Eyion",
"followers_url": "https://api.github.com/users/Eyion/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2025-01-12T12:33:25 | 2025-01-15T23:58:42 | 2025-01-15T23:58:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I type "ollama -v", it shows "Warning: could not connect to a running Ollama instance
Warning: client version is 0.5.4"
When I type "ollama run qwen2.5:7b", it shows "Error: could not connect to ollama app, is it running?"
When I type "ollama serve", it shows "Error: listen tcp 127... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8396/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1055 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1055/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1055/comments | https://api.github.com/repos/ollama/ollama/issues/1055/events | https://github.com/ollama/ollama/pull/1055 | 1,985,521,157 | PR_kwDOJ0Z1Ps5fB3QB | 1,055 | Fixed incorrect base model name | {
"login": "dansreis",
"id": 9052608,
"node_id": "MDQ6VXNlcjkwNTI2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9052608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dansreis",
"html_url": "https://github.com/dansreis",
"followers_url": "https://api.github.com/users/dansr... | [] | closed | false | null | [] | null | 2 | 2023-11-09T12:28:31 | 2023-11-13T17:46:20 | 2023-11-13T16:42:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1055",
"html_url": "https://github.com/ollama/ollama/pull/1055",
"diff_url": "https://github.com/ollama/ollama/pull/1055.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1055.patch",
"merged_at": "2023-11-13T16:42:55"
Added the tag version to the 'GetNamespaceRepository' method in order to set the correct model tag version.
(This PR fixes bug/issue: #946 ) | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1055/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3445 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3445/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3445/comments | https://api.github.com/repos/ollama/ollama/issues/3445/events | https://github.com/ollama/ollama/pull/3445 | 2,219,441,851 | PR_kwDOJ0Z1Ps5rYWX6 | 3,445 | Add CI full build capability | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-04-02T02:34:26 | 2024-11-21T18:22:46 | 2024-11-21T18:22:46 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3445",
"html_url": "https://github.com/ollama/ollama/pull/3445",
"diff_url": "https://github.com/ollama/ollama/pull/3445.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3445.patch",
"merged_at": null
} | For labeled PRs, generate a full build for testing | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3445/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2992/comments | https://api.github.com/repos/ollama/ollama/issues/2992/events | https://github.com/ollama/ollama/issues/2992 | 2,174,938,351 | I_kwDOJ0Z1Ps6Bouzv | 2,992 | Support Roberta embedding models | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": ... | closed | false | null | [] | null | 3 | 2024-03-07T22:33:40 | 2024-06-13T17:47:19 | 2024-06-13T17:47:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | can ollama support multi-language embedding model, like "paraphrase-multilingual-mpnet-base-v2"
https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2
much appreciated | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2992/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2992/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7612 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7612/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7612/comments | https://api.github.com/repos/ollama/ollama/issues/7612/events | https://github.com/ollama/ollama/issues/7612 | 2,648,195,355 | I_kwDOJ0Z1Ps6d2EEb | 7,612 | Feature for Filter models by type option | {
"login": "Abubakkar13",
"id": 45032674,
"node_id": "MDQ6VXNlcjQ1MDMyNjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/45032674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abubakkar13",
"html_url": "https://github.com/Abubakkar13",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-11-11T05:27:50 | 2024-11-11T05:34:27 | 2024-11-11T05:34:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Feature Request:
Model Filtering by Type
Objective: Add a model filtering feature that allows users to filter available models by predefined categories, specifically:
1. Normal Models
2. Tool Models
3. Vision Models
4. Embedding Models
(New type if needed in future)
This feature in ollama site will imp... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7612/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4524 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4524/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4524/comments | https://api.github.com/repos/ollama/ollama/issues/4524/events | https://github.com/ollama/ollama/issues/4524 | 2,304,734,344 | I_kwDOJ0Z1Ps6JX3SI | 4,524 | Default Stop Sequence is not working when user provides additional stop sequences | {
"login": "Nanthagopal-Eswaran",
"id": 115451020,
"node_id": "U_kgDOBuGkjA",
"avatar_url": "https://avatars.githubusercontent.com/u/115451020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nanthagopal-Eswaran",
"html_url": "https://github.com/Nanthagopal-Eswaran",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-05-19T18:47:36 | 2024-05-19T19:24:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I want to stop generation at the stop sequence "\nObservation".
My expectation is that generation will stop at either "\nObservation" or the model's default stop sequence, "<|eot_id|>" (in the llama3 case).
But that does not happen, and the model keeps generating the answer.
Request:
![image](https://gith... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4524/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6124 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6124/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6124/comments | https://api.github.com/repos/ollama/ollama/issues/6124/events | https://github.com/ollama/ollama/issues/6124 | 2,442,856,881 | I_kwDOJ0Z1Ps6Rmwmx | 6,124 | Do not generate a history | {
"login": "zipfile6209",
"id": 141074644,
"node_id": "U_kgDOCGig1A",
"avatar_url": "https://avatars.githubusercontent.com/u/141074644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zipfile6209",
"html_url": "https://github.com/zipfile6209",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-08-01T16:15:35 | 2024-08-01T16:21:35 | 2024-08-01T16:21:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would like to request that history can be disabled, as it can be undesirable to write sensitive data to disk for no reason.
I would also point out that Ollama is a bit aggressive about it: the usual tricks, like turning the history file into a symbolic link, giving ownership to root, or removing all permissions, didn't work :-/
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6124/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1555 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1555/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1555/comments | https://api.github.com/repos/ollama/ollama/issues/1555/events | https://github.com/ollama/ollama/issues/1555 | 2,044,488,739 | I_kwDOJ0Z1Ps553Gwj | 1,555 | GGUF in Docker? | {
"login": "jimmyjam-50066",
"id": 153751346,
"node_id": "U_kgDOCSoPMg",
"avatar_url": "https://avatars.githubusercontent.com/u/153751346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimmyjam-50066",
"html_url": "https://github.com/jimmyjam-50066",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-12-15T23:25:18 | 2024-01-22T23:52:38 | 2024-01-22T23:52:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | To support GGUF files in Docker, could we have a script inside the container that takes the arguments and creates the Modelfile for Ollama to use?
Example, with solar-10.7b as the target local model name:
```
docker exec ollama_cat pull_gguf_from_url.sh solar-10.7b https://huggingface.co/TheBloke/SOLAR-10.7B-Instr... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1555/timeline | null | completed | false |