| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2314 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2314/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2314/comments | https://api.github.com/repos/ollama/ollama/issues/2314/events | https://github.com/ollama/ollama/issues/2314 | 2,113,544,278 | I_kwDOJ0Z1Ps59-iBW | 2,314 | add official Java SDK | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 3 | 2024-02-01T21:47:04 | 2024-03-11T23:28:31 | 2024-03-11T23:28:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :information_source: Context
A few months ago, `langchain` got its SDK ported to Java through [`langchain4j`](https://github.com/langchain4j/langchain4j). This opened up a lot of enterprise-ready capabilities and innovations on the Java stack, then on the native one... then on k8s, even for Java developers.
Fo... | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2314/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7354 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7354/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7354/comments | https://api.github.com/repos/ollama/ollama/issues/7354/events | https://github.com/ollama/ollama/issues/7354 | 2,613,657,050 | I_kwDOJ0Z1Ps6byT3a | 7,354 | CMD ["serve"] is wrong in Dockerfile? | {
"login": "YinYongHongYork",
"id": 114273033,
"node_id": "U_kgDOBs-rCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/114273033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YinYongHongYork",
"html_url": "https://github.com/YinYongHongYork",
"followers_url": "https://api.githu... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjg... | closed | false | null | [] | null | 2 | 2024-10-25T10:01:50 | 2024-10-29T17:41:47 | 2024-10-29T17:41:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
CMD ["serve"]
### OS
Docker
### GPU
Other
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7354/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2707 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2707/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2707/comments | https://api.github.com/repos/ollama/ollama/issues/2707/events | https://github.com/ollama/ollama/issues/2707 | 2,150,788,939 | I_kwDOJ0Z1Ps6AMm9L | 2,707 | Ollama hangs when used with flowise | {
"login": "wurstache",
"id": 3034620,
"node_id": "MDQ6VXNlcjMwMzQ2MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3034620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wurstache",
"html_url": "https://github.com/wurstache",
"followers_url": "https://api.github.com/users/wu... | [] | closed | false | null | [] | null | 8 | 2024-02-23T10:22:03 | 2024-07-23T23:34:33 | 2024-03-13T03:39:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I'm testing embeddings with ollama and flowise. Everything works fine with OpenAI, and sometimes it works with ollama. It would take a lot of time to run with ollama, so I used a 1-page PDF and went to bed.
This morning I got the embedding, but ollama is still running like hell, as if it replays the last API reques... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2707/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3877 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3877/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3877/comments | https://api.github.com/repos/ollama/ollama/issues/3877/events | https://github.com/ollama/ollama/issues/3877 | 2,261,430,257 | I_kwDOJ0Z1Ps6Gyq_x | 3,877 | Support for Fedora 40 with rocm | {
"login": "oatmealm",
"id": 68159077,
"node_id": "MDQ6VXNlcjY4MTU5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/68159077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oatmealm",
"html_url": "https://github.com/oatmealm",
"followers_url": "https://api.github.com/users/oat... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 10 | 2024-04-24T14:22:52 | 2024-10-17T15:31:01 | 2024-05-06T21:33:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Since F40 has ROCm 6 now, it'd be useful if it could be picked up. I have this to block the installer from downloading the libraries:
```
lrwxrwxrwx. 1 root root 28 Apr 24 16:08 libamd_comgr.so.2 -> /usr/lib64/libamd_comgr.so.2
lrwxrwxrwx. 1 root root 27 Apr 24 16:08 libamdhip64.so.6 -> /usr/lib64/libamdhip64.so... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3877/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/3877/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4203 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4203/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4203/comments | https://api.github.com/repos/ollama/ollama/issues/4203/events | https://github.com/ollama/ollama/issues/4203 | 2,281,176,632 | I_kwDOJ0Z1Ps6H9_44 | 4,203 | panic: runtime error: invalid memory address or nil pointer dereference | {
"login": "inoue0426",
"id": 8393063,
"node_id": "MDQ6VXNlcjgzOTMwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8393063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inoue0426",
"html_url": "https://github.com/inoue0426",
"followers_url": "https://api.github.com/users/in... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-06T15:38:11 | 2024-05-06T23:16:59 | 2024-05-06T23:16:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I got the issue below when running the model on an M2 Mac Studio after about 800K runs.
```
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":1054116,"tid":"0x1fbd37ac0","timestamp":1714759585}
{"function":"update_slots","ga_i":0,"... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4203/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/358/comments | https://api.github.com/repos/ollama/ollama/issues/358/events | https://github.com/ollama/ollama/issues/358 | 1,852,836,662 | I_kwDOJ0Z1Ps5ucAs2 | 358 | Parallel requests | {
"login": "youssef02",
"id": 26381154,
"node_id": "MDQ6VXNlcjI2MzgxMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/26381154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youssef02",
"html_url": "https://github.com/youssef02",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 82 | 2023-08-16T09:15:52 | 2025-01-29T01:48:45 | 2024-05-02T20:17:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The app is amazing, but the problem is that if I want to create a multi-agent setup from one API, it can reply to only one request at a time. Is there a way to improve this, or do I have to implement a queue system myself?
I just started here, so sorry for any mistakes. ;) | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/358/reactions",
"total_count": 56,
"+1": 47,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/358/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8072 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8072/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8072/comments | https://api.github.com/repos/ollama/ollama/issues/8072/events | https://github.com/ollama/ollama/pull/8072 | 2,736,352,448 | PR_kwDOJ0Z1Ps6FCuxX | 8,072 | Support Unix sockets for OLLAMA_HOST | {
"login": "doringeman",
"id": 33001019,
"node_id": "MDQ6VXNlcjMzMDAxMDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/33001019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doringeman",
"html_url": "https://github.com/doringeman",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 3 | 2024-12-12T16:22:02 | 2025-01-13T19:02:01 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8072",
"html_url": "https://github.com/ollama/ollama/pull/8072",
"diff_url": "https://github.com/ollama/ollama/pull/8072.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8072.patch",
"merged_at": null
} | Add support for Unix sockets for OLLAMA_HOST.
```
make -j8 && OLLAMA_HOST=unix:///tmp/ollama.sock ./ollama serve
OLLAMA_HOST=unix:///tmp/ollama.sock ./ollama list
curl --unix-socket /tmp/ollama.sock localhost/api/tags
```
Implements https://github.com/ollama/ollama/issues/739. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8072/reactions",
"total_count": 15,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 4,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/8072/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5440 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5440/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5440/comments | https://api.github.com/repos/ollama/ollama/issues/5440/events | https://github.com/ollama/ollama/pull/5440 | 2,386,881,258 | PR_kwDOJ0Z1Ps50O6ny | 5,440 | update named templates | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-02T18:23:38 | 2024-07-09T16:36:34 | 2024-07-09T16:36:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5440",
"html_url": "https://github.com/ollama/ollama/pull/5440",
"diff_url": "https://github.com/ollama/ollama/pull/5440.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5440.patch",
"merged_at": "2024-07-09T16:36:32"
} | update named templates with messages, falling back to the previous template if messages isn't available, for compat | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5440/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5695 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5695/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5695/comments | https://api.github.com/repos/ollama/ollama/issues/5695/events | https://github.com/ollama/ollama/pull/5695 | 2,407,904,285 | PR_kwDOJ0Z1Ps51V5Ol | 5,695 | server: lowercase roles for compatibility with clients | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-15T05:44:15 | 2024-07-15T20:55:59 | 2024-07-15T20:55:57 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5695",
"html_url": "https://github.com/ollama/ollama/pull/5695",
"diff_url": "https://github.com/ollama/ollama/pull/5695.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5695.patch",
"merged_at": "2024-07-15T20:55:57"
} | Fixes https://github.com/ollama/ollama/issues/5687. Note: we may first want to make sure that there are no cases where roles may be uppercase. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5695/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5011 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5011/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5011/comments | https://api.github.com/repos/ollama/ollama/issues/5011/events | https://github.com/ollama/ollama/issues/5011 | 2,349,971,422 | I_kwDOJ0Z1Ps6MEbfe | 5,011 | Qwen2 Model | {
"login": "kindzhon",
"id": 44000205,
"node_id": "MDQ6VXNlcjQ0MDAwMjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/44000205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kindzhon",
"html_url": "https://github.com/kindzhon",
"followers_url": "https://api.github.com/users/kin... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-06-13T01:54:18 | 2024-06-13T18:41:06 | 2024-06-13T18:41:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Qwen2 is not in the model list.
Why isn't the model list updated automatically? Open UI doesn't have this issue. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5011/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3840 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3840/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3840/comments | https://api.github.com/repos/ollama/ollama/issues/3840/events | https://github.com/ollama/ollama/issues/3840 | 2,258,270,176 | I_kwDOJ0Z1Ps6Gmnfg | 3,840 | Vega 56 (gfx900) fails to load model - hipMemGetInfo - error: invalid argument | {
"login": "quwassar",
"id": 70705054,
"node_id": "MDQ6VXNlcjcwNzA1MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/70705054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quwassar",
"html_url": "https://github.com/quwassar",
"followers_url": "https://api.github.com/users/quw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-04-23T08:28:38 | 2024-06-24T19:22:30 | 2024-06-21T23:56:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi! I'm having trouble with my AMD Vega 56 video card:
```
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
Apr 23 08:24:36 chat-server ollama[95121]: ggml_cuda_init: f... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3840/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3840/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4653/comments | https://api.github.com/repos/ollama/ollama/issues/4653/events | https://github.com/ollama/ollama/pull/4653 | 2,317,977,160 | PR_kwDOJ0Z1Ps5wmeaa | 4,653 | Improve install script output on WSL2 and Linux | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-26T23:14:54 | 2024-05-28T21:41:51 | 2024-05-28T21:41:50 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4653",
"html_url": "https://github.com/ollama/ollama/pull/4653",
"diff_url": "https://github.com/ollama/ollama/pull/4653.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4653.patch",
"merged_at": "2024-05-28T21:41:50"
} | Fixes:
* Install success not showing on Linux systems with NVIDIA GPUs
* WSL2 showing no GPU detected when the NVIDIA GPU passthrough is available | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4653/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1426 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1426/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1426/comments | https://api.github.com/repos/ollama/ollama/issues/1426/events | https://github.com/ollama/ollama/pull/1426 | 2,031,745,478 | PR_kwDOJ0Z1Ps5heb_M | 1,426 | fix: only flush template in chat when current role encountered | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-12-08T00:51:04 | 2023-12-08T21:44:25 | 2023-12-08T21:44:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1426",
"html_url": "https://github.com/ollama/ollama/pull/1426",
"diff_url": "https://github.com/ollama/ollama/pull/1426.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1426.patch",
"merged_at": "2023-12-08T21:44:24"
} | There was a bug in the /chat endpoint here during templating that resulted in the prompt template being written incorrectly.
If a `user` was encountered when the `system` was already set, the template would be written before the `user` content was set. This was not correct. The template should only be written when t... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1426/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1029 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1029/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1029/comments | https://api.github.com/repos/ollama/ollama/issues/1029/events | https://github.com/ollama/ollama/pull/1029 | 1,981,069,674 | PR_kwDOJ0Z1Ps5eyo2R | 1,029 | Added logseq ollama plugin | {
"login": "omagdy7",
"id": 99906646,
"node_id": "U_kgDOBfR0Vg",
"avatar_url": "https://avatars.githubusercontent.com/u/99906646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omagdy7",
"html_url": "https://github.com/omagdy7",
"followers_url": "https://api.github.com/users/omagdy7/follow... | [] | closed | false | null | [] | null | 0 | 2023-11-07T10:49:05 | 2023-11-07T17:58:13 | 2023-11-07T17:58:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1029",
"html_url": "https://github.com/ollama/ollama/pull/1029",
"diff_url": "https://github.com/ollama/ollama/pull/1029.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1029.patch",
"merged_at": "2023-11-07T17:58:13"
} | Adds a plugin I made to integrate ollama with [logseq](https://github.com/logseq/logseq) | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1029/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/244 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/244/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/244/comments | https://api.github.com/repos/ollama/ollama/issues/244/events | https://github.com/ollama/ollama/pull/244 | 1,829,534,136 | PR_kwDOJ0Z1Ps5W0Wmz | 244 | tell users to check the server error logs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-07-31T15:50:37 | 2023-08-02T21:08:12 | 2023-08-02T21:08:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/244",
"html_url": "https://github.com/ollama/ollama/pull/244",
"diff_url": "https://github.com/ollama/ollama/pull/244.diff",
"patch_url": "https://github.com/ollama/ollama/pull/244.patch",
"merged_at": "2023-08-02T21:08:11"
} | when possible tell users to check the error logs to get more info on why their command failed | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/244/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6483 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6483/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6483/comments | https://api.github.com/repos/ollama/ollama/issues/6483/events | https://github.com/ollama/ollama/pull/6483 | 2,483,895,650 | PR_kwDOJ0Z1Ps55SYRI | 6,483 | gpu: Group GPU Library sets by variant | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-23T21:06:35 | 2024-08-23T22:12:05 | 2024-08-23T22:11:56 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6483",
"html_url": "https://github.com/ollama/ollama/pull/6483",
"diff_url": "https://github.com/ollama/ollama/pull/6483.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6483.patch",
"merged_at": "2024-08-23T22:11:56"
} | The recent cuda variant changes uncovered a bug in ByLibrary which failed to group by common variant for GPU types.
A new unit test fails without the one-line fix, and passes with it.
Fixes #6479 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6483/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8565 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8565/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8565/comments | https://api.github.com/repos/ollama/ollama/issues/8565/events | https://github.com/ollama/ollama/issues/8565 | 2,809,182,773 | I_kwDOJ0Z1Ps6ncLo1 | 8,565 | Error when trying to download deepseek-r1:7b | {
"login": "makhlwf",
"id": 78276231,
"node_id": "MDQ6VXNlcjc4Mjc2MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/78276231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makhlwf",
"html_url": "https://github.com/makhlwf",
"followers_url": "https://api.github.com/users/makhlw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 11 | 2025-01-24T11:14:17 | 2025-01-29T14:53:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried using `ollama run deepseek-r1:7b`.
It started to download, but after a minute this error appeared:
Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
### OS
Windows
### GPU
Nvidia
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8565/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8565/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/890 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/890/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/890/comments | https://api.github.com/repos/ollama/ollama/issues/890/events | https://github.com/ollama/ollama/pull/890 | 1,959,595,439 | PR_kwDOJ0Z1Ps5dqKhk | 890 | document linux install inline with the script file | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-24T16:03:12 | 2023-10-25T14:58:18 | 2023-10-25T14:58:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/890",
"html_url": "https://github.com/ollama/ollama/pull/890",
"diff_url": "https://github.com/ollama/ollama/pull/890.diff",
"patch_url": "https://github.com/ollama/ollama/pull/890.patch",
"merged_at": "2023-10-25T14:58:17"
} | Shell scripts are dense and hard to read. Document explicitly what the installation script is doing so that enquiring users can see exactly what changes are being made to their system. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/890/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5508 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5508/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5508/comments | https://api.github.com/repos/ollama/ollama/issues/5508/events | https://github.com/ollama/ollama/issues/5508 | 2,393,209,058 | I_kwDOJ0Z1Ps6OpXji | 5,508 | Ollama running 2 instances | {
"login": "electro199",
"id": 109358640,
"node_id": "U_kgDOBoSuMA",
"avatar_url": "https://avatars.githubusercontent.com/u/109358640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electro199",
"html_url": "https://github.com/electro199",
"followers_url": "https://api.github.com/users/ele... | [
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-07-05T22:43:44 | 2024-08-06T22:23:51 | 2024-08-06T22:23:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When starting Ollama from the Start menu tray, it starts the Ollama service, which immediately loads a model under the name ollama.exe using around 4.5 GB of RAM; when using the API, it starts another program called ollama_llama_server.exe using around 4 GB of RAM and almost 3 GB of VRAM.
The main issue... | {
"login": "electro199",
"id": 109358640,
"node_id": "U_kgDOBoSuMA",
"avatar_url": "https://avatars.githubusercontent.com/u/109358640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electro199",
"html_url": "https://github.com/electro199",
"followers_url": "https://api.github.com/users/ele... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5508/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6872 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6872/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6872/comments | https://api.github.com/repos/ollama/ollama/issues/6872/events | https://github.com/ollama/ollama/issues/6872 | 2,535,358,329 | I_kwDOJ0Z1Ps6XHn95 | 6,872 | GGGGG in response | {
"login": "air55555",
"id": 2674008,
"node_id": "MDQ6VXNlcjI2NzQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2674008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/air55555",
"html_url": "https://github.com/air55555",
"followers_url": "https://api.github.com/users/air55... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-09-19T05:55:54 | 2024-09-25T07:14:55 | 2024-09-25T07:14:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
---- command log
C:\Program Files\Far Manager>ollama run llama3.1:70b >>> name
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
>>> /?
Available Commands:
/set Set session variables
/show Show model information
/load <model> Load a session or model... | {
"login": "air55555",
"id": 2674008,
"node_id": "MDQ6VXNlcjI2NzQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2674008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/air55555",
"html_url": "https://github.com/air55555",
"followers_url": "https://api.github.com/users/air55... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6872/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2242 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2242/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2242/comments | https://api.github.com/repos/ollama/ollama/issues/2242/events | https://github.com/ollama/ollama/issues/2242 | 2,104,344,623 | I_kwDOJ0Z1Ps59bcAv | 2,242 | :memo: Documentation > Add `ollama-python` code samples to `llava` model page | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [] | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 1 | 2024-01-28T20:56:22 | 2024-03-12T21:32:17 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :grey_question: About
[The `ollama` library has been released for Python and JavaScript](https://twitter.com/ollama/status/1750683461256634409) and makes things even easier to achieve with local AI.
See this tutorial on how to handle images with the `ollama` library: [Ollama has a Python library!](https://youtu.be/JwYw...
"url": "https://api.github.com/repos/ollama/ollama/issues/2242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2242/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1683 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1683/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1683/comments | https://api.github.com/repos/ollama/ollama/issues/1683/events | https://github.com/ollama/ollama/pull/1683 | 2,054,587,573 | PR_kwDOJ0Z1Ps5isAg_ | 1,683 | Fix windows system memory lookup | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2023-12-23T00:05:02 | 2024-01-03T17:00:52 | 2024-01-03T17:00:40 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1683",
"html_url": "https://github.com/ollama/ollama/pull/1683",
"diff_url": "https://github.com/ollama/ollama/pull/1683.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1683.patch",
"merged_at": "2024-01-03T17:00:39"
} | This refines the gpu package error handling and fixes a bug with the system memory lookup on windows. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1683/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6189 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6189/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6189/comments | https://api.github.com/repos/ollama/ollama/issues/6189/events | https://github.com/ollama/ollama/pull/6189 | 2,449,610,931 | PR_kwDOJ0Z1Ps53gOYv | 6,189 | Sort Batch Embed Results | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 1 | 2024-08-05T22:58:11 | 2024-08-05T23:55:36 | 2024-08-05T23:55:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6189",
"html_url": "https://github.com/ollama/ollama/pull/6189",
"diff_url": "https://github.com/ollama/ollama/pull/6189.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6189.patch",
"merged_at": "2024-08-05T23:55:34"
} | Batch embed returns results out of order; fix by sorting results by task ID.
Resolves #6187 | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6189/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/690 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/690/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/690/comments | https://api.github.com/repos/ollama/ollama/issues/690/events | https://github.com/ollama/ollama/issues/690 | 1,924,386,973 | I_kwDOJ0Z1Ps5ys9Cd | 690 | Stop Ollama | {
"login": "mora-phi",
"id": 8950440,
"node_id": "MDQ6VXNlcjg5NTA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8950440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mora-phi",
"html_url": "https://github.com/mora-phi",
"followers_url": "https://api.github.com/users/mora-... | [] | closed | false | null | [] | null | 53 | 2023-10-03T15:22:34 | 2025-01-29T08:40:53 | 2023-10-04T07:07:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
How can I stop Ollama?
If I run with "ollama run llama2" for instance and then quit with "Ctrl-C", then go to http://127.0.0.1:11434/ in a browser, it shows "Ollama is running"
When I kill the running process with a kill -9, a new process is instantly spawned.
Therefore I don't know how to totally stop Ollama... | {
"login": "mora-phi",
"id": 8950440,
"node_id": "MDQ6VXNlcjg5NTA0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8950440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mora-phi",
"html_url": "https://github.com/mora-phi",
"followers_url": "https://api.github.com/users/mora-... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/690/reactions",
"total_count": 35,
"+1": 35,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/690/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3001 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3001/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3001/comments | https://api.github.com/repos/ollama/ollama/issues/3001/events | https://github.com/ollama/ollama/issues/3001 | 2,175,789,913 | I_kwDOJ0Z1Ps6Br-tZ | 3,001 | pull model fail:read: connection reset by peer | {
"login": "QiuZiXian",
"id": 28780269,
"node_id": "MDQ6VXNlcjI4NzgwMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/28780269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiuZiXian",
"html_url": "https://github.com/QiuZiXian",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-03-08T10:57:03 | 2024-03-11T03:33:14 | 2024-03-11T03:33:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | env: linux
I have installed ollama successfully,
but when I try `ollama pull qwen:7b`, it fails like:
```bash
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=bW98ekNaedsqamdcHp8ziQ&scope=repository%!A(MISSING)library%!F(MISSING)qwen%!A(MISSING)pull&service=ollama.com&ts=1709894898": read tcp 192... | {
"login": "QiuZiXian",
"id": 28780269,
"node_id": "MDQ6VXNlcjI4NzgwMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/28780269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiuZiXian",
"html_url": "https://github.com/QiuZiXian",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3001/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2513 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2513/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2513/comments | https://api.github.com/repos/ollama/ollama/issues/2513/events | https://github.com/ollama/ollama/issues/2513 | 2,136,609,829 | I_kwDOJ0Z1Ps5_WhQl | 2,513 | ECONNREFUSED error | {
"login": "jakobhoeg",
"id": 114422072,
"node_id": "U_kgDOBtHxOA",
"avatar_url": "https://avatars.githubusercontent.com/u/114422072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakobhoeg",
"html_url": "https://github.com/jakobhoeg",
"followers_url": "https://api.github.com/users/jakobh... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 16 | 2024-02-15T14:01:26 | 2024-12-19T21:35:03 | 2024-12-19T21:35:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Keep getting ECONNREFUSED error when trying to use Ollama for my NextJS frontend in production:
```
⨯ TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11730:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async globalThis.fetch (/var/task/... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2513/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8339 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8339/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8339/comments | https://api.github.com/repos/ollama/ollama/issues/8339/events | https://github.com/ollama/ollama/issues/8339 | 2,773,453,511 | I_kwDOJ0Z1Ps6lT4rH | 8,339 | `GIN_MODE` not able to set to `release` | {
"login": "leobenkel",
"id": 4960573,
"node_id": "MDQ6VXNlcjQ5NjA1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobenkel",
"html_url": "https://github.com/leobenkel",
"followers_url": "https://api.github.com/users/le... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2025-01-07T18:11:06 | 2025-01-28T12:35:01 | 2025-01-16T00:01:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
in `v0.5.4`, I am not able to set `GIN_MODE` to `release`:
I have
```log
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
in the log.
But in my `env`:
```.env
GIN_MODE=release
```
it is running in a docker image:
```Doc... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8339/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/164 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/164/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/164/comments | https://api.github.com/repos/ollama/ollama/issues/164/events | https://github.com/ollama/ollama/pull/164 | 1,816,106,038 | PR_kwDOJ0Z1Ps5WHT6H | 164 | restart server more gracefully | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [] | closed | false | null | [] | null | 0 | 2023-07-21T16:49:58 | 2023-07-22T22:19:26 | 2023-07-22T22:19:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/164",
"html_url": "https://github.com/ollama/ollama/pull/164",
"diff_url": "https://github.com/ollama/ollama/pull/164.diff",
"patch_url": "https://github.com/ollama/ollama/pull/164.patch",
"merged_at": "2023-07-22T22:19:22"
} | fix for https://github.com/jmorganca/ollama/issues/154 | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/164/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5652 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5652/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5652/comments | https://api.github.com/repos/ollama/ollama/issues/5652/events | https://github.com/ollama/ollama/issues/5652 | 2,405,934,048 | I_kwDOJ0Z1Ps6PZ6Pg | 5,652 | A path to GPU support for Ollama in a VM/container on Apple Silicon | {
"login": "easp",
"id": 414705,
"node_id": "MDQ6VXNlcjQxNDcwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easp",
"html_url": "https://github.com/easp",
"followers_url": "https://api.github.com/users/easp/followers",
... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-07-12T16:12:54 | 2024-07-15T19:04:38 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I came across a short discussion in the llama.cpp repo about using a [GPU on Apple Silicon from within a vm/container](https://github.com/ggerganov/llama.cpp/discussions/8042). Performance isn't as good as bare metal, but it's a significant improvement over CPU-only inference.
The meat of it is [this work](https://s... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5652/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4324 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4324/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4324/comments | https://api.github.com/repos/ollama/ollama/issues/4324/events | https://github.com/ollama/ollama/pull/4324 | 2,290,412,684 | PR_kwDOJ0Z1Ps5vIptv | 4,324 | update go deps | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 1 | 2024-05-10T21:04:59 | 2024-05-11T17:15:28 | 2024-05-11T04:39:27 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4324",
"html_url": "https://github.com/ollama/ollama/pull/4324",
"diff_url": "https://github.com/ollama/ollama/pull/4324.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4324.patch",
"merged_at": "2024-05-11T04:39:27"
} | Fixes #4297 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4324/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4643 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4643/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4643/comments | https://api.github.com/repos/ollama/ollama/issues/4643/events | https://github.com/ollama/ollama/issues/4643 | 2,317,406,597 | I_kwDOJ0Z1Ps6KINGF | 4,643 | Llama.cpp now supports distributed inference across multiple machines. | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 20 | 2024-05-26T02:57:20 | 2025-01-13T18:35:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Llama.cpp now supports distribution across multiple devices to boost speeds, this would be a great addition to Ollama
https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc
https://www.reddit.com/r/LocalLLaMA/comments/1cyzi9e/llamacpp_now_supports_distributed_inference/ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4643/reactions",
"total_count": 29,
"+1": 28,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/4643/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8172 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8172/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8172/comments | https://api.github.com/repos/ollama/ollama/issues/8172/events | https://github.com/ollama/ollama/pull/8172 | 2,749,943,664 | PR_kwDOJ0Z1Ps6Fwy8g | 8,172 | Updated the version of golang/crypto package to handle CVE-2024-45337 | {
"login": "Squishedmac",
"id": 88924339,
"node_id": "MDQ6VXNlcjg4OTI0MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/88924339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Squishedmac",
"html_url": "https://github.com/Squishedmac",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 4 | 2024-12-19T10:56:32 | 2024-12-20T17:30:07 | 2024-12-20T17:29:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8172",
"html_url": "https://github.com/ollama/ollama/pull/8172",
"diff_url": "https://github.com/ollama/ollama/pull/8172.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8172.patch",
"merged_at": "2024-12-20T17:29:30"
} | Fixed the CVE-2024-45337 finding in the Wiz scan for the Docker image; updating golang.org/x/crypto to the latest version should fix the issue.
This issue is a hindrance that prevents the base image from being used in any real production scenario. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8172/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4816 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4816/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4816/comments | https://api.github.com/repos/ollama/ollama/issues/4816/events | https://github.com/ollama/ollama/issues/4816 | 2,333,788,438 | I_kwDOJ0Z1Ps6LGskW | 4,816 | deepseek v2 lite and 236b gets crazy after some conversations | {
"login": "netspym",
"id": 74223710,
"node_id": "MDQ6VXNlcjc0MjIzNzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/74223710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netspym",
"html_url": "https://github.com/netspym",
"followers_url": "https://api.github.com/users/netspy... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-06-04T15:12:09 | 2024-06-05T07:24:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi Ollama team:
I loaded the models and kept them in memory. After some conversations, the models start to respond with strange text and code. After restarting Ollama on Ubuntu, the issue is gone.
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
0.1.40 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4816/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4816/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1416 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1416/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1416/comments | https://api.github.com/repos/ollama/ollama/issues/1416/events | https://github.com/ollama/ollama/issues/1416 | 2,030,941,798 | I_kwDOJ0Z1Ps55DbZm | 1,416 | Attempting to load a model smaller than 10GiB into 12.2GiB GPU results in failing over to load into the host RAM. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2023-12-07T14:59:04 | 2024-03-12T16:47:48 | 2024-03-12T16:44:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have converted losslessmegacoder-llama2-13b-min.Q6_K.model to ollama format.
On my attempt to load, it reports the size of the model as < 10GiB, but when I run "ollama run losslessmegacoder-llama2-13b-min.Q6_K" it attempts to load it into a GPU, apparently runs out of VRAM, and loads into host RAM instead.
If the mode... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1416/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5841/comments | https://api.github.com/repos/ollama/ollama/issues/5841/events | https://github.com/ollama/ollama/issues/5841 | 2,421,834,741 | I_kwDOJ0Z1Ps6QWkP1 | 5,841 | Manage internlm2 models | {
"login": "RunningLeon",
"id": 28671653,
"node_id": "MDQ6VXNlcjI4NjcxNjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/28671653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RunningLeon",
"html_url": "https://github.com/RunningLeon",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA... | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-07-22T03:28:41 | 2024-09-04T03:43:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, maintainers of Ollama. I'm one of the developers of the [internlm2](https://huggingface.co/internlm) models, and there is a bug with the https://ollama.com/library/internlm2 model, as reported in issue #5761. I've uploaded some models to my account, as in https://ollama.com/internlm/internlm2.5. ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5841/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5841/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2573 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2573/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2573/comments | https://api.github.com/repos/ollama/ollama/issues/2573/events | https://github.com/ollama/ollama/issues/2573 | 2,140,950,675 | I_kwDOJ0Z1Ps5_nFCT | 2,573 | Ollama for Windows, seems to crash on GPU with cuda | {
"login": "amonpaike",
"id": 884282,
"node_id": "MDQ6VXNlcjg4NDI4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/884282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amonpaike",
"html_url": "https://github.com/amonpaike",
"followers_url": "https://api.github.com/users/amon... | [] | closed | false | null | [] | null | 2 | 2024-02-18T11:46:37 | 2024-02-19T20:55:07 | 2024-02-19T20:55:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | My GTX 1060 gpu has little vram, 3GB but with the smaller models **it works great and on native linux it is super fast** with for example tinyllama stablelm2 phi2 etc.
**On windows with cuda it seems to crash. I start a model with for example "ollama run stablelm2" and after a few seconds it crashes.**
On the CPU... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2573/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2573/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6030 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6030/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6030/comments | https://api.github.com/repos/ollama/ollama/issues/6030/events | https://github.com/ollama/ollama/issues/6030 | 2,434,071,018 | I_kwDOJ0Z1Ps6RFPnq | 6,030 | How to Move Model Files on an External Hard Drive? | {
"login": "lennondong",
"id": 3203702,
"node_id": "MDQ6VXNlcjMyMDM3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3203702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lennondong",
"html_url": "https://github.com/lennondong",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-07-28T19:06:31 | 2024-11-06T13:12:43 | 2024-11-06T13:12:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How to Move Model Files on an External Hard Drive?
Due to insufficient internal storage on my Mac, I would like to store model files on an external hard drive to run them. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6030/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6030/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7657 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7657/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7657/comments | https://api.github.com/repos/ollama/ollama/issues/7657/events | https://github.com/ollama/ollama/pull/7657 | 2,656,985,769 | PR_kwDOJ0Z1Ps6B2PAb | 7,657 | fix(mllama): sync backend between batches | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-11-13T22:30:09 | 2024-11-14T17:40:07 | 2024-11-14T17:40:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7657",
"html_url": "https://github.com/ollama/ollama/pull/7657",
"diff_url": "https://github.com/ollama/ollama/pull/7657.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7657.patch",
"merged_at": "2024-11-14T17:40:04"
} | On multi-GPU setups, and for cross attention specifically, sync the backend between batches to ensure operations are complete before moving on.
Resolves #7558 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7657/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1808 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1808/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1808/comments | https://api.github.com/repos/ollama/ollama/issues/1808/events | https://github.com/ollama/ollama/pull/1808 | 2,067,423,788 | PR_kwDOJ0Z1Ps5jUPns | 1,808 | Add Haystack to Community integrations | {
"login": "anakin87",
"id": 44616784,
"node_id": "MDQ6VXNlcjQ0NjE2Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/44616784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anakin87",
"html_url": "https://github.com/anakin87",
"followers_url": "https://api.github.com/users/ana... | [] | closed | false | null | [] | null | 2 | 2024-01-05T14:15:28 | 2024-08-28T14:03:22 | 2024-02-20T03:02:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1808",
"html_url": "https://github.com/ollama/ollama/pull/1808",
"diff_url": "https://github.com/ollama/ollama/pull/1808.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1808.patch",
"merged_at": null
} | Hi, maintainers!
[Haystack](https://github.com/deepset-ai/haystack) is a quite popular open-source LLM orchestration framework.
We recently developed an [integration with Ollama](https://haystack.deepset.ai/integrations/ollama).
This PR is to add Haystack to the Community integrations.
If you agree, we would ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1808/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/716 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/716/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/716/comments | https://api.github.com/repos/ollama/ollama/issues/716/events | https://github.com/ollama/ollama/issues/716 | 1,929,804,426 | I_kwDOJ0Z1Ps5zBnqK | 716 | Swagger / OpenAPI documentation / playground | {
"login": "netandreus",
"id": 313477,
"node_id": "MDQ6VXNlcjMxMzQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/313477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netandreus",
"html_url": "https://github.com/netandreus",
"followers_url": "https://api.github.com/users/n... | [] | closed | false | null | [] | null | 3 | 2023-10-06T09:46:59 | 2024-01-22T16:55:12 | 2023-12-04T19:36:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Do you plan to generate API documentation with ```github.com/swaggo/swag```?
It would be very useful. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/716/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/716/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4115 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4115/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4115/comments | https://api.github.com/repos/ollama/ollama/issues/4115/events | https://github.com/ollama/ollama/issues/4115 | 2,276,813,590 | I_kwDOJ0Z1Ps6HtWsW | 4,115 | Error 403 when trying to call api/chat or api/generate from REST client | {
"login": "MaheshAwasare",
"id": 48666285,
"node_id": "MDQ6VXNlcjQ4NjY2Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48666285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshAwasare",
"html_url": "https://github.com/MaheshAwasare",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 14 | 2024-05-03T03:06:15 | 2024-11-25T23:48:47 | 2024-05-07T07:16:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I downloaded the latest llama3 model after installing Ollama for **Windows** from https://www.ollama.com
```
C:\Windows\System32>ollama list
NAME ID SIZE MODIFIED
llama3:latest a6990ed6be41 4.7 GB 45 hours ago
```
... | {
"login": "MaheshAwasare",
"id": 48666285,
"node_id": "MDQ6VXNlcjQ4NjY2Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48666285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshAwasare",
"html_url": "https://github.com/MaheshAwasare",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4115/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4115/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6719 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6719/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6719/comments | https://api.github.com/repos/ollama/ollama/issues/6719/events | https://github.com/ollama/ollama/issues/6719 | 2,515,418,688 | I_kwDOJ0Z1Ps6V7j5A | 6,719 | (111) Connection refused | {
"login": "SheltonLiu-N",
"id": 65779833,
"node_id": "MDQ6VXNlcjY1Nzc5ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65779833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SheltonLiu-N",
"html_url": "https://github.com/SheltonLiu-N",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-09-10T05:14:21 | 2024-09-10T15:22:06 | 2024-09-10T15:22:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I ran into a weird problem. When I use Ollama on a server, I can run `ollama run llama3.1:70b` successfully from the command line, but it fails in my Python project
```
process = subprocess.Popen("ollama serve", shell=True)
time.sleep(10)
ollama.pull("llama3.1:70b")
```
which returns:
```
<blockquote id... | {
"login": "SheltonLiu-N",
"id": 65779833,
"node_id": "MDQ6VXNlcjY1Nzc5ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65779833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SheltonLiu-N",
"html_url": "https://github.com/SheltonLiu-N",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6719/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2826 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2826/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2826/comments | https://api.github.com/repos/ollama/ollama/issues/2826/events | https://github.com/ollama/ollama/issues/2826 | 2,160,307,546 | I_kwDOJ0Z1Ps6Aw61a | 2,826 | How to enable GPU support on Ubuntu? | {
"login": "realcarlos",
"id": 13853,
"node_id": "MDQ6VXNlcjEzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realcarlos",
"html_url": "https://github.com/realcarlos",
"followers_url": "https://api.github.com/users/realcar... | [] | closed | false | null | [] | null | 2 | 2024-02-29T04:01:37 | 2024-02-29T08:31:02 | 2024-02-29T08:31:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi , I am using a DGX H800 server , and when I run
`ollama run llama2:70b`
I found that the model is loaded into memory, not the GPU.
<img width="558" alt="Screenshot 2024-02-29 at 12 03 28" src="https://github.com/ollama/ollama/assets/13853/182fb3d8-f075-4c33-8d42-82cec09b5459">
| {
"login": "realcarlos",
"id": 13853,
"node_id": "MDQ6VXNlcjEzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/13853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realcarlos",
"html_url": "https://github.com/realcarlos",
"followers_url": "https://api.github.com/users/realcar... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2826/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7343 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7343/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7343/comments | https://api.github.com/repos/ollama/ollama/issues/7343/events | https://github.com/ollama/ollama/issues/7343 | 2,611,796,486 | I_kwDOJ0Z1Ps6brNoG | 7,343 | Aya-Expanse 32B & 8B | {
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 4 | 2024-10-24T14:32:57 | 2024-10-27T20:13:25 | 2024-10-25T00:24:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | These models should be very easy to support since people are already uploading ggufs to HF, so it should be already supported by ollama, just need to be ggufed and uploaded
https://huggingface.co/CohereForAI/aya-expanse-32b
https://huggingface.co/CohereForAI/aya-expanse-8b
). But Ollama uses only ~50% of all power. What do I need to do to use all CPU resources? I'm using Docker to run Ollama, here is my `docker-compose.yaml`:
```yaml
version: "... | {
"login": "LWJerri",
"id": 50290430,
"node_id": "MDQ6VXNlcjUwMjkwNDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/50290430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LWJerri",
"html_url": "https://github.com/LWJerri",
"followers_url": "https://api.github.com/users/LWJerr... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1101/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6995 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6995/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6995/comments | https://api.github.com/repos/ollama/ollama/issues/6995/events | https://github.com/ollama/ollama/issues/6995 | 2,552,096,757 | I_kwDOJ0Z1Ps6YHef1 | 6,995 | when can we user llama3.2 11b or 90b | {
"login": "wangjingyu001",
"id": 65798211,
"node_id": "MDQ6VXNlcjY1Nzk4MjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/65798211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangjingyu001",
"html_url": "https://github.com/wangjingyu001",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-09-27T06:19:34 | 2024-09-30T20:54:21 | 2024-09-30T20:54:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6995/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6127/comments | https://api.github.com/repos/ollama/ollama/issues/6127/events | https://github.com/ollama/ollama/issues/6127 | 2,443,516,017 | I_kwDOJ0Z1Ps6RpRhx | 6,127 | llama3.1 always uses tool | {
"login": "tomaszbk",
"id": 85851175,
"node_id": "MDQ6VXNlcjg1ODUxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/85851175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszbk",
"html_url": "https://github.com/tomaszbk",
"followers_url": "https://api.github.com/users/tom... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 16 | 2024-08-01T21:35:21 | 2024-12-31T17:07:27 | 2024-08-02T00:59:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
No matter what I prompt, llama3.1 always replies with a tool call.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.10.58 | {
"login": "tomaszbk",
"id": 85851175,
"node_id": "MDQ6VXNlcjg1ODUxMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/85851175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszbk",
"html_url": "https://github.com/tomaszbk",
"followers_url": "https://api.github.com/users/tom... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6127/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6127/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2388 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2388/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2388/comments | https://api.github.com/repos/ollama/ollama/issues/2388/events | https://github.com/ollama/ollama/issues/2388 | 2,123,060,333 | I_kwDOJ0Z1Ps5-i1Rt | 2,388 | Local caching / Custom Registry | {
"login": "KaruroChori",
"id": 9377545,
"node_id": "MDQ6VXNlcjkzNzc1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9377545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaruroChori",
"html_url": "https://github.com/KaruroChori",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 8 | 2024-02-07T13:37:57 | 2025-01-09T07:44:44 | 2024-03-11T19:48:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any supported solution to:
- locally host a custom registry?
- or, at the very least, a partial mirror of the registry for models that have already been downloaded once?
Downloading the same model from the original source on multiple machines within the same network is quite slow and wasteful. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2388/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2388/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6595 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6595/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6595/comments | https://api.github.com/repos/ollama/ollama/issues/6595/events | https://github.com/ollama/ollama/issues/6595 | 2,501,337,734 | I_kwDOJ0Z1Ps6VF2KG | 6,595 | 4 AMD GPUs with mixed VRAM sizes: layer predictions incorrect leads to runner crash | {
"login": "MikeLP",
"id": 368849,
"node_id": "MDQ6VXNlcjM2ODg0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/368849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeLP",
"html_url": "https://github.com/MikeLP",
"followers_url": "https://api.github.com/users/MikeLP/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 41 | 2024-09-02T16:53:13 | 2024-11-13T22:17:45 | 2024-10-23T15:34:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I load a large model that doesn't fit in VRAM, Ollama crashes:
➜ ~ ollama run dbrx:132b-instruct-q8_0
Error: llama runner process has terminated: signal: segmentation fault (core dumped)
This issue does not occur with Ollama 0.3.6.
My hardware:
CPU: AMD Ryzen Threadripper PRO 796... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6595/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6595/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4837 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4837/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4837/comments | https://api.github.com/repos/ollama/ollama/issues/4837/events | https://github.com/ollama/ollama/issues/4837 | 2,336,227,433 | I_kwDOJ0Z1Ps6LQABp | 4,837 | Stop model without removing | {
"login": "alemian95",
"id": 43844752,
"node_id": "MDQ6VXNlcjQzODQ0NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/43844752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alemian95",
"html_url": "https://github.com/alemian95",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-06-05T15:32:09 | 2024-06-05T20:27:06 | 2024-06-05T20:27:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is it possible to have a command to clear the model from the memory without removing it? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4837/timeline | null | completed | false |
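The question in issue 4837 above (freeing a model from memory without deleting it) maps onto the API's `keep_alive` parameter: a request with `keep_alive: 0` asks the server to unload the model immediately after responding. A minimal Python sketch of the request payload follows; the model name is a placeholder, and the actual POST (commented out) assumes a server on the default `localhost:11434`.

```python
import json
import urllib.request

# A generate request with keep_alive=0 tells the server to unload the
# model from memory right after this request completes. The model name
# is a placeholder.
payload = {"model": "llama3", "keep_alive": 0}
body = json.dumps(payload).encode("utf-8")

# Sending it requires a running Ollama server on the default port:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate", data=body, method="POST",
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
print(body.decode("utf-8"))
```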
https://api.github.com/repos/ollama/ollama/issues/7479 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7479/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7479/comments | https://api.github.com/repos/ollama/ollama/issues/7479/events | https://github.com/ollama/ollama/issues/7479 | 2,631,137,709 | I_kwDOJ0Z1Ps6c0_mt | 7,479 | How can I make ollama load the entire model onto the GPU by default on every call? | {
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/follower... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 14 | 2024-11-03T11:07:45 | 2025-01-12T11:59:28 | 2024-11-06T05:17:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

Every time I run a model, GPU utilization often falls short of 100%: sometimes the load is split half on the CPU and half on the GPU, and sometimes it runs entirely on the CPU. Is there a way to force GPU-only execution?
Also, a loaded model is unloaded from the GPU after 5 minutes by default. Can I change that to 10 minutes, or keep it loaded indefinitely?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Oll... | {
"login": "fg2501",
"id": 164639270,
"node_id": "U_kgDOCdAyJg",
"avatar_url": "https://avatars.githubusercontent.com/u/164639270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fg2501",
"html_url": "https://github.com/fg2501",
"followers_url": "https://api.github.com/users/fg2501/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7479/timeline | null | completed | false |
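The two asks in issue 7479 above (force full GPU offload, and lengthen the five-minute unload timer) correspond to the `num_gpu` option and the `keep_alive` request parameter; `keep_alive` also has a server-wide default via the `OLLAMA_KEEP_ALIVE` environment variable. A sketch of a request payload with a placeholder model name; treat the exact `num_gpu` value as an assumption to check against your version's documentation (a large value is commonly used to request full offload).

```python
import json

# keep_alive accepts a duration string ("10m") or -1 to stay loaded
# indefinitely; num_gpu is the number of layers to offload to the GPU.
# A value larger than the model's layer count (999 here — an assumption,
# not an official constant) requests full offload.
payload = {
    "model": "llama3",
    "keep_alive": "10m",
    "options": {"num_gpu": 999},
}
print(json.dumps(payload, indent=2))
```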
https://api.github.com/repos/ollama/ollama/issues/4580 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4580/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4580/comments | https://api.github.com/repos/ollama/ollama/issues/4580/events | https://github.com/ollama/ollama/pull/4580 | 2,311,650,111 | PR_kwDOJ0Z1Ps5wQ6NW | 4,580 | Use flash attention flag for now | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 3 | 2024-05-22T23:18:11 | 2024-05-25T14:56:12 | 2024-05-23T04:52:09 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4580",
"html_url": "https://github.com/ollama/ollama/pull/4580",
"diff_url": "https://github.com/ollama/ollama/pull/4580.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4580.patch",
"merged_at": "2024-05-23T04:52:09"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4580/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/142 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/142/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/142/comments | https://api.github.com/repos/ollama/ollama/issues/142/events | https://github.com/ollama/ollama/issues/142 | 1,814,432,890 | I_kwDOJ0Z1Ps5sJgx6 | 142 | Persist conversation histories | {
"login": "nathanleclaire",
"id": 1476820,
"node_id": "MDQ6VXNlcjE0NzY4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1476820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathanleclaire",
"html_url": "https://github.com/nathanleclaire",
"followers_url": "https://api.gith... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-07-20T16:56:56 | 2024-01-25T20:14:11 | 2024-01-25T20:14:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be really fun and useful to have a `docker ps`-like persistence of each session
```
SESSION_ID MODEL CREATED AT LAST CHAT SUMMARY
asdfasdfasd llama2 50 minutes ago 1 minute ago Explorations with Waluigi
qwfdsqwefd llama2 30 minutes ago 20 minutes ago Mario is Mean
```
This ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/142/reactions",
"total_count": 8,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/142/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6932 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6932/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6932/comments | https://api.github.com/repos/ollama/ollama/issues/6932/events | https://github.com/ollama/ollama/issues/6932 | 2,545,257,411 | I_kwDOJ0Z1Ps6XtYvD | 6,932 | qwen2.5coder /api/generate odd behavior when `suffix` is present but empty string. | {
"login": "kirel",
"id": 9124,
"node_id": "MDQ6VXNlcjkxMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kirel",
"html_url": "https://github.com/kirel",
"followers_url": "https://api.github.com/users/kirel/followers",
"fol... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-09-24T12:24:05 | 2024-09-24T12:24:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Please check the difference between
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model": "qwen2.5-coder:1.5b",
"prompt": "def fib(", "suffix": " "
}' | jq -s 'map(.response) | join("")')
```
and
```
echo -e $(curl http://localhost:11434/api/generate -d '{
"model"... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6932/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1230 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1230/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1230/comments | https://api.github.com/repos/ollama/ollama/issues/1230/events | https://github.com/ollama/ollama/issues/1230 | 2,005,288,754 | I_kwDOJ0Z1Ps53hkcy | 1,230 | Incorrect Version Displayed with Command: ollama --version | {
"login": "Samk13",
"id": 36583694,
"node_id": "MDQ6VXNlcjM2NTgzNjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/36583694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samk13",
"html_url": "https://github.com/Samk13",
"followers_url": "https://api.github.com/users/Samk13/fo... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2023-11-21T23:03:06 | 2023-11-21T23:48:21 | 2023-11-21T23:48:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running the version command, the output incorrectly displays ollama version 0.0.0 instead of the expected version:
```bash
ollama --version
ollama version 0.0.0
```
expected behavior:
```bash
ollama --version
ollama version 0.1.11
```
Current Ollama version: `V0.1.11`
Running on `WSL2 Ubuntu 22... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1230/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6664 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6664/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6664/comments | https://api.github.com/repos/ollama/ollama/issues/6664/events | https://github.com/ollama/ollama/issues/6664 | 2,508,984,102 | I_kwDOJ0Z1Ps6VjA8m | 6,664 | Reflection 70B model request | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-09-05T23:05:16 | 2024-09-10T22:57:01 | 2024-09-06T06:30:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | would be nice to add it
<img width="593" alt="Screenshot 2024-09-06 at 01 04 54" src="https://github.com/user-attachments/assets/4d83c4aa-9d6d-43c0-8e25-693aa3c8cee6">
https://huggingface.co/mattshumer/Reflection-70B | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6664/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6664/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4670 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4670/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4670/comments | https://api.github.com/repos/ollama/ollama/issues/4670/events | https://github.com/ollama/ollama/issues/4670 | 2,319,818,939 | I_kwDOJ0Z1Ps6KRaC7 | 4,670 | llama3 8b BF16 error | {
"login": "ccbadd",
"id": 17816991,
"node_id": "MDQ6VXNlcjE3ODE2OTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17816991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccbadd",
"html_url": "https://github.com/ccbadd",
"followers_url": "https://api.github.com/users/ccbadd/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-05-27T21:54:04 | 2024-08-13T18:09:42 | 2024-08-09T23:49:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I downloaded the model from huggingface llama3 8b bf16 and when I run:
ollama create llama3bf16 -f Modfile
I get this output:
transferring model data
Error: invalid file magic
This is on a Win 11 PC running ollama 0.1.38 with an AMD W6800 GPU. Is the bf16 model not supported?
### OS
Wind... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4670/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4670/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2723 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2723/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2723/comments | https://api.github.com/repos/ollama/ollama/issues/2723/events | https://github.com/ollama/ollama/issues/2723 | 2,152,207,503 | I_kwDOJ0Z1Ps6ASBSP | 2,723 | Updating max_tokens for LLM by OpenAI library doesn't work | {
"login": "shashade2012",
"id": 22316457,
"node_id": "MDQ6VXNlcjIyMzE2NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22316457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashade2012",
"html_url": "https://github.com/shashade2012",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 2 | 2024-02-24T08:20:27 | 2024-03-13T03:35:25 | 2024-03-13T03:35:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I need to adjust the default token limit for my Large Language Model (LLM). Currently, I’m using Ollama with the Mistral model and have created two clients—one using the Ollama Python library and the other using the OpenAI library. Specifically, I want to increase the default maximum token limit to handle longer prompt... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2723/timeline | null | completed | false |
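For issue 2723 above, the OpenAI-compatible endpoint does accept `max_tokens`, which Ollama maps onto its native `num_predict` option (an equivalence worth verifying for your version). A minimal payload sketch, with a placeholder prompt:

```python
import json

# OpenAI-style chat-completions payload; max_tokens bounds the length
# of the generated reply (mapped internally to num_predict).
payload = {
    "model": "mistral",
    "messages": [{"role": "user", "content": "hello"}],
    "max_tokens": 1024,
}
print(json.dumps(payload))
```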
https://api.github.com/repos/ollama/ollama/issues/3294 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3294/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3294/comments | https://api.github.com/repos/ollama/ollama/issues/3294/events | https://github.com/ollama/ollama/issues/3294 | 2,202,827,971 | I_kwDOJ0Z1Ps6DTHzD | 3,294 | System ram won't free up when using cuda. | {
"login": "stevenhobs",
"id": 47906512,
"node_id": "MDQ6VXNlcjQ3OTA2NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/47906512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenhobs",
"html_url": "https://github.com/stevenhobs",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-03-22T16:03:23 | 2024-06-01T21:53:17 | 2024-06-01T21:52:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run starcoder2:15B, the model occupies 9.1GB of GPU memory, and the ollama serve process should occupy about 1GB of OS memory. But when I check actual usage, the model also seems to be cached in system RAM and isn't freed until I end the ollama process. Is this typical?
![f46ccf38a0dc283... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3294/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1186 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1186/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1186/comments | https://api.github.com/repos/ollama/ollama/issues/1186/events | https://github.com/ollama/ollama/pull/1186 | 2,000,072,791 | PR_kwDOJ0Z1Ps5fzSP3 | 1,186 | fix cross device rename | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-17T23:22:38 | 2023-11-18T05:54:54 | 2023-11-18T05:54:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1186",
"html_url": "https://github.com/ollama/ollama/pull/1186",
"diff_url": "https://github.com/ollama/ollama/pull/1186.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1186.patch",
"merged_at": "2023-11-18T05:54:54"
} | `os.Rename` is only intended for files on the same filesystem. Instead of messing around with that, store the temporary file in the blobs directory
resolves #1181 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1186/timeline | null | null | true |
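The fix described in PR 1186 above — avoiding a rename across filesystems by keeping the temporary file next to its destination — is a general pattern. A minimal Python sketch of the same write-temp-then-rename idea (not the PR's Go code):

```python
import os
import tempfile

def atomic_write(dest_path: str, data: bytes) -> None:
    # Write to a temp file in the *same directory* as the destination,
    # then rename. Rename is only guaranteed to work (and be atomic)
    # within a single filesystem — the constraint behind the PR above.
    dest_dir = os.path.dirname(dest_path) or "."
    fd, tmp = tempfile.mkstemp(dir=dest_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp, dest_path)  # same filesystem => safe
    except BaseException:
        os.unlink(tmp)
        raise

target = os.path.join(tempfile.mkdtemp(), "blob")
atomic_write(target, b"hello")
print(open(target, "rb").read())  # b'hello'
```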
https://api.github.com/repos/ollama/ollama/issues/5302 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5302/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5302/comments | https://api.github.com/repos/ollama/ollama/issues/5302/events | https://github.com/ollama/ollama/pull/5302 | 2,375,482,955 | PR_kwDOJ0Z1Ps5zpH_M | 5,302 | Submit Chinese Document | {
"login": "aibabelx",
"id": 16663208,
"node_id": "MDQ6VXNlcjE2NjYzMjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16663208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aibabelx",
"html_url": "https://github.com/aibabelx",
"followers_url": "https://api.github.com/users/aib... | [] | closed | false | null | [] | null | 1 | 2024-06-26T14:12:07 | 2024-06-27T02:27:49 | 2024-06-27T02:27:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5302",
"html_url": "https://github.com/ollama/ollama/pull/5302",
"diff_url": "https://github.com/ollama/ollama/pull/5302.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5302.patch",
"merged_at": null
} | Add Chinese Document | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5302/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5885 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5885/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5885/comments | https://api.github.com/repos/ollama/ollama/issues/5885/events | https://github.com/ollama/ollama/issues/5885 | 2,425,834,653 | I_kwDOJ0Z1Ps6Ql0yd | 5,885 | llama3.1 template needs to be updated to include tools | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-23T18:22:12 | 2024-10-29T08:54:23 | 2024-07-24T16:35:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The current version of llama3.1 doesn't support tool use:
```
$ curl -s http://aitoolkit:11434/v1/chat/completions -d '{"model": "llama3.1","tools":[{"type":"function","function": {}}], "messages": [{"role":"user","content":"weather in zurich"}], "stream": false}' | jq
{
"error": {
... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5885/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5885/timeline | null | completed | false |
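Issue 5885 above concerns the `tools` field of the chat API. For reference, a well-formed tool definition follows the OpenAI-style function schema; the weather function below is purely illustrative. A Python sketch of such a payload:

```python
import json

# A chat request carrying one tool definition. The function name,
# description, and parameters are illustrative placeholders.
payload = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "weather in zurich"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}
print(json.dumps(payload)[:80])
```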
https://api.github.com/repos/ollama/ollama/issues/4901 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4901/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4901/comments | https://api.github.com/repos/ollama/ollama/issues/4901/events | https://github.com/ollama/ollama/issues/4901 | 2,339,970,630 | I_kwDOJ0Z1Ps6LeR5G | 4,901 | Error: pull model manifest: ssh: no key found | {
"login": "674316",
"id": 24378150,
"node_id": "MDQ6VXNlcjI0Mzc4MTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/24378150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/674316",
"html_url": "https://github.com/674316",
"followers_url": "https://api.github.com/users/674316/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 5 | 2024-06-07T09:10:55 | 2024-10-20T17:25:45 | 2024-07-11T02:37:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama pull vicuna
>>
pulling manifest
Error: pull model manifest: ssh: no key found
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.41 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4901/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7173 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7173/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7173/comments | https://api.github.com/repos/ollama/ollama/issues/7173/events | https://github.com/ollama/ollama/issues/7173 | 2,581,895,770 | I_kwDOJ0Z1Ps6Z5Jpa | 7,173 | Model Response to Harmful Queries Not Properly Flagged – Safety Issue | {
"login": "diazr04",
"id": 160553481,
"node_id": "U_kgDOCZHaCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/160553481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diazr04",
"html_url": "https://github.com/diazr04",
"followers_url": "https://api.github.com/users/diazr04/foll... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 6 | 2024-10-11T17:47:31 | 2024-10-11T23:05:42 | 2024-10-11T23:05:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
Hi, I was testing the system’s response to potentially harmful queries to see how well it handles safety protocols. Unfortunately, I noticed that the model provided detailed information on a topic that should likely have been flagged or blocked for safety reasons. I’m reporting this to help improve the model’s filter... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7173/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5726 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5726/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5726/comments | https://api.github.com/repos/ollama/ollama/issues/5726/events | https://github.com/ollama/ollama/pull/5726 | 2,411,754,820 | PR_kwDOJ0Z1Ps51i-hY | 5,726 | fix unmarshal type errors | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-16T18:11:29 | 2024-07-16T19:12:11 | 2024-07-16T19:12:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5726",
"html_url": "https://github.com/ollama/ollama/pull/5726",
"diff_url": "https://github.com/ollama/ollama/pull/5726.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5726.patch",
"merged_at": "2024-07-16T19:12:10"
} | skip unmarshalable types when parsing tool call responses | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5726/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5424 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5424/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5424/comments | https://api.github.com/repos/ollama/ollama/issues/5424/events | https://github.com/ollama/ollama/issues/5424 | 2,384,935,750 | I_kwDOJ0Z1Ps6OJztG | 5,424 | Supports voice recognition and text-to-speech capabilities, with customizable extension abilities | {
"login": "skytodmoon",
"id": 11644650,
"node_id": "MDQ6VXNlcjExNjQ0NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11644650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skytodmoon",
"html_url": "https://github.com/skytodmoon",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-07-02T00:01:13 | 2025-01-21T03:59:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Feature Request: Support for Voice Recognition and Text-to-Speech with Custom Extension Capabilities
I would like to propose the addition of voice recognition and text-to-speech functionalities to the project. These features would greatly enhance the user experience by allowing for hands-free interaction and accessi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5424/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5424/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5969 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5969/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5969/comments | https://api.github.com/repos/ollama/ollama/issues/5969/events | https://github.com/ollama/ollama/issues/5969 | 2,431,330,333 | I_kwDOJ0Z1Ps6Q6ygd | 5,969 | llama3.1 8B losses context | {
"login": "Damien2s",
"id": 89670386,
"node_id": "MDQ6VXNlcjg5NjcwMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/89670386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damien2s",
"html_url": "https://github.com/Damien2s",
"followers_url": "https://api.github.com/users/Dam... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-07-26T03:19:58 | 2024-08-12T02:27:07 | 2024-08-12T02:00:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llama3.1 doesn't seem to remember the context.
Here is an example:
> Me:
> Hello can you tell me how much is 56*9?
>
> llama3.1:
> That's a simple one!
> 56 * 9 = 504
> Is that correct?
>
> Me:
> can you divide this by 4?
>
> llama3.1
> However, I don't see any specific numb... | {
"login": "Damien2s",
"id": 89670386,
"node_id": "MDQ6VXNlcjg5NjcwMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/89670386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damien2s",
"html_url": "https://github.com/Damien2s",
"followers_url": "https://api.github.com/users/Dam... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5969/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1072 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1072/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1072/comments | https://api.github.com/repos/ollama/ollama/issues/1072/events | https://github.com/ollama/ollama/issues/1072 | 1,987,255,223 | I_kwDOJ0Z1Ps52cxu3 | 1,072 | How to host the Models Registry in a private network instead of registry.ollama.ai? | {
"login": "marcellodesales",
"id": 131457,
"node_id": "MDQ6VXNlcjEzMTQ1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/131457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcellodesales",
"html_url": "https://github.com/marcellodesales",
"followers_url": "https://api.git... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396220,
"node_id": ... | open | false | null | [] | null | 0 | 2023-11-10T09:34:23 | 2024-03-11T18:47:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Problem
* Our private network has an Egress firewall rule not allowing connections to any `.ai` endpoints
* Just discovered it at 1:30am
* How can a user override the Models Registry server with their own?
* Do we have instructions on how to bootstrap a Models Registry server?
* Is it even intended t... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1072/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1072/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7009 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7009/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7009/comments | https://api.github.com/repos/ollama/ollama/issues/7009/events | https://github.com/ollama/ollama/pull/7009 | 2,553,784,536 | PR_kwDOJ0Z1Ps5898_B | 7,009 | llama: add compiler tags for cpu features | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-09-27T21:48:58 | 2024-10-08T16:15:04 | 2024-10-08T15:53:59 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7009",
"html_url": "https://github.com/ollama/ollama/pull/7009",
"diff_url": "https://github.com/ollama/ollama/pull/7009.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7009.patch",
"merged_at": null
} | Replaced by #7137 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7009/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4053 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4053/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4053/comments | https://api.github.com/repos/ollama/ollama/issues/4053/events | https://github.com/ollama/ollama/issues/4053 | 2,271,517,232 | I_kwDOJ0Z1Ps6HZJow | 4,053 | The server-side output gets mixed with the responses. | {
"login": "JialeLiLab",
"id": 151911704,
"node_id": "U_kgDOCQ39GA",
"avatar_url": "https://avatars.githubusercontent.com/u/151911704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JialeLiLab",
"html_url": "https://github.com/JialeLiLab",
"followers_url": "https://api.github.com/users/Jia... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-30T13:43:09 | 2024-04-30T14:48:12 | 2024-04-30T14:48:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi all,
I'm running ollama, and it mixes its server logs with my output directly in the terminal, making it hard to interact with. Does anyone else experience this? Any advice on how to separate the two so I can see just my inputs and outputs without the clutter of continuous server logs?
Than... | {
"login": "JialeLiLab",
"id": 151911704,
"node_id": "U_kgDOCQ39GA",
"avatar_url": "https://avatars.githubusercontent.com/u/151911704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JialeLiLab",
"html_url": "https://github.com/JialeLiLab",
"followers_url": "https://api.github.com/users/Jia... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4053/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5028 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5028/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5028/comments | https://api.github.com/repos/ollama/ollama/issues/5028/events | https://github.com/ollama/ollama/pull/5028 | 2,351,805,196 | PR_kwDOJ0Z1Ps5yZiyU | 5,028 | OpenAI: /v1/models/{model} compatibility | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-06-13T18:29:28 | 2024-07-02T18:40:49 | 2024-07-02T18:40:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5028",
"html_url": "https://github.com/ollama/ollama/pull/5028",
"diff_url": "https://github.com/ollama/ollama/pull/5028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5028.patch",
"merged_at": "2024-07-02T18:40:48"
} | Adds compatibility for `/v1/models/{model}`
E.g
`curl http://localhost:11434/v1/models/llama3`
```
{
"id": "llama3",
"object": "model",
"created": 1718141294,
"owned_by": "library"
}
```
| {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5028/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5028/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5737 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5737/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5737/comments | https://api.github.com/repos/ollama/ollama/issues/5737/events | https://github.com/ollama/ollama/issues/5737 | 2,412,422,641 | I_kwDOJ0Z1Ps6PyqXx | 5,737 | Releases page: please also generate an archive with dependencies | {
"login": "vitaly-zdanevich",
"id": 3514015,
"node_id": "MDQ6VXNlcjM1MTQwMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3514015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitaly-zdanevich",
"html_url": "https://github.com/vitaly-zdanevich",
"followers_url": "https://ap... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-07-17T01:47:20 | 2024-08-19T18:14:25 | 2024-08-19T18:14:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Gentoo Linux package manager is not so good about Go software :(
Related https://github.com/gentoo/guru/pull/215#issuecomment-2230746689 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5737/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/433 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/433/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/433/comments | https://api.github.com/repos/ollama/ollama/issues/433/events | https://github.com/ollama/ollama/issues/433 | 1,868,600,687 | I_kwDOJ0Z1Ps5vYJVv | 433 | Add a way for user to approve CORS requests from origins without running a separate `ollama serve` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2023-08-27T19:14:09 | 2023-12-03T02:50:47 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Quite a few tools built on Ollama are hosted on alternate origins like `chrome-extension://` (chrome extensions) or `http://example.com` (websites that want to use the user's local Ollama instance).
Tools like Figma, Zoom and Slack have a way to "allow" the browser clients to interact with a server hosted on `localh... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/433/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/433/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7224 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7224/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7224/comments | https://api.github.com/repos/ollama/ollama/issues/7224/events | https://github.com/ollama/ollama/issues/7224 | 2,591,436,600 | I_kwDOJ0Z1Ps6adi84 | 7,224 | Latest version silently drops support for macOS 11 | {
"login": "close-paren",
"id": 3870093,
"node_id": "MDQ6VXNlcjM4NzAwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3870093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/close-paren",
"html_url": "https://github.com/close-paren",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-10-16T10:37:58 | 2024-10-17T18:35:11 | 2024-10-17T18:35:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The ollama CLI binary now appears to be targeting macOS 12 and higher, whereas the frontend still runs on macOS 11. This results in very high CPU usage from ReportCrash due to continuous crashing and respawning.
Sample crash report:
```
Process: ollama [45502]
Path: ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7224/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7224/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2971 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2971/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2971/comments | https://api.github.com/repos/ollama/ollama/issues/2971/events | https://github.com/ollama/ollama/issues/2971 | 2,173,005,613 | I_kwDOJ0Z1Ps6BhW8t | 2,971 | Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:49192->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host. | {
"login": "yxl23",
"id": 115678682,
"node_id": "U_kgDOBuUd2g",
"avatar_url": "https://avatars.githubusercontent.com/u/115678682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yxl23",
"html_url": "https://github.com/yxl23",
"followers_url": "https://api.github.com/users/yxl23/followers",
... | [] | closed | false | null | [] | null | 4 | 2024-03-07T05:44:10 | 2024-09-02T07:59:30 | 2024-03-12T02:12:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:49192->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2971/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2764 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2764/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2764/comments | https://api.github.com/repos/ollama/ollama/issues/2764/events | https://github.com/ollama/ollama/issues/2764 | 2,153,989,598 | I_kwDOJ0Z1Ps6AY0Xe | 2,764 | Suggestion: Add a timeout parameter to Chat and Generation calls. | {
"login": "dezoito",
"id": 6494010,
"node_id": "MDQ6VXNlcjY0OTQwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6494010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dezoito",
"html_url": "https://github.com/dezoito",
"followers_url": "https://api.github.com/users/dezoito/... | [] | open | false | null | [] | null | 0 | 2024-02-26T11:51:29 | 2024-02-26T11:52:22 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | First of all thanks for the hard work you guys are putting into this!
I don't think there's an easy way to do this directly... please correct me if I'm wrong.
(Looks like Ollama-py implements this for sync calls, but it is passed to the `httpx` client, and not to the Ollama host.)
The motivation is to allow pr... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2764/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2764/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3747 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3747/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3747/comments | https://api.github.com/repos/ollama/ollama/issues/3747/events | https://github.com/ollama/ollama/issues/3747 | 2,252,117,314 | I_kwDOJ0Z1Ps6GPJVC | 3,747 | Support `XLMRobertaModel` architecture | {
"login": "wouterverduin",
"id": 10314050,
"node_id": "MDQ6VXNlcjEwMzE0MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10314050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wouterverduin",
"html_url": "https://github.com/wouterverduin",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": ... | open | false | null | [] | null | 20 | 2024-04-19T05:28:04 | 2024-10-17T09:21:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi all from Ollama!
First off: Great work with Ollama, keep up the good work!
What I am missing, though, is models in different languages (Dutch, for me personally). Is it possible to add multilingual embeddings like "intfloat/multilingual-e5-large-instruct"?
If there is a way to do this myself i would love the d... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3747/reactions",
"total_count": 20,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 9
} | https://api.github.com/repos/ollama/ollama/issues/3747/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/562 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/562/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/562/comments | https://api.github.com/repos/ollama/ollama/issues/562/events | https://github.com/ollama/ollama/pull/562 | 1,905,906,733 | PR_kwDOJ0Z1Ps5a1Ssn | 562 | fix OLLAMA_HOST parsing for ip6 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-09-21T00:59:40 | 2023-09-21T02:54:48 | 2023-09-21T02:54:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/562",
"html_url": "https://github.com/ollama/ollama/pull/562",
"diff_url": "https://github.com/ollama/ollama/pull/562.diff",
"patch_url": "https://github.com/ollama/ollama/pull/562.patch",
"merged_at": "2023-09-21T02:54:47"
} | Fix the environment parsing for `OLLAMA_HOST` so it can recognize ipv6 addresses, e.g. ipv6 loopback `[::1]:11434`
Some examples:
Default
```
$ OLLAMA_HOST='' ollama serve
2023/09/20 17:55:23 routes.go:540: Listening on 127.0.0.1:11434
```
IPv6 loopback
```
$ OLLAMA_HOST='[::1]:11434' ollama serve
2... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/562/timeline | null | null | true |
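The `OLLAMA_HOST` PR above hinges on host:port splitting that tolerates bracketed IPv6 literals. A rough Python sketch of the same idea follows — illustrative only, since the actual fix is in Go and its defaults may differ:

```python
def parse_ollama_host(value: str,
                      default_host: str = "127.0.0.1",
                      default_port: str = "11434"):
    """Split an OLLAMA_HOST-style value into (host, port), tolerating
    bracketed IPv6 literals such as "[::1]:11434"."""
    if not value:
        return default_host, default_port
    if value.startswith("["):  # bracketed IPv6 literal
        host, _, rest = value[1:].partition("]")
        port = rest.lstrip(":") or default_port
        return host, port
    # Plain host[:port]; a bare IPv6 address without brackets is ambiguous,
    # so only split when there is exactly one colon.
    if value.count(":") == 1:
        host, _, port = value.partition(":")
        return host or default_host, port or default_port
    return value, default_port

assert parse_ollama_host("") == ("127.0.0.1", "11434")          # default case
assert parse_ollama_host("[::1]:11434") == ("::1", "11434")     # IPv6 loopback
assert parse_ollama_host("0.0.0.0:8080") == ("0.0.0.0", "8080")
```

The two asserted cases mirror the "Default" and "IPv6 loopback" examples quoted in the PR body.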
https://api.github.com/repos/ollama/ollama/issues/426 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/426/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/426/comments | https://api.github.com/repos/ollama/ollama/issues/426/events | https://github.com/ollama/ollama/pull/426 | 1,868,245,067 | PR_kwDOJ0Z1Ps5Y21z6 | 426 | set default template | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-26T19:21:52 | 2023-08-26T21:15:39 | 2023-08-26T21:15:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/426",
"html_url": "https://github.com/ollama/ollama/pull/426",
"diff_url": "https://github.com/ollama/ollama/pull/426.diff",
"patch_url": "https://github.com/ollama/ollama/pull/426.patch",
"merged_at": "2023-08-26T21:15:38"
} | fixes #413 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/426/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3689 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3689/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3689/comments | https://api.github.com/repos/ollama/ollama/issues/3689/events | https://github.com/ollama/ollama/issues/3689 | 2,247,272,662 | I_kwDOJ0Z1Ps6F8qjW | 3,689 | Aya by Cohere - mt5-xxl arch | {
"login": "oliviermills",
"id": 6075303,
"node_id": "MDQ6VXNlcjYwNzUzMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6075303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliviermills",
"html_url": "https://github.com/oliviermills",
"followers_url": "https://api.github.com... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-04-17T03:17:43 | 2024-08-31T08:37:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
https://huggingface.co/CohereForAI/aya-101
See discussion re t5 and gguf attempts here: https://huggingface.co/CohereForAI/aya-101/discussions/12
trial: https://huggingface.co/kcoopermiller/aya-101-GGUF (using candle) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3689/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1276/comments | https://api.github.com/repos/ollama/ollama/issues/1276/events | https://github.com/ollama/ollama/issues/1276 | 2,010,952,287 | I_kwDOJ0Z1Ps533LJf | 1,276 | A way to reuse ollama API with RAG app. | {
"login": "sandangel",
"id": 22189661,
"node_id": "MDQ6VXNlcjIyMTg5NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22189661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandangel",
"html_url": "https://github.com/sandangel",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 4 | 2023-11-26T08:33:24 | 2024-09-04T03:27:28 | 2024-09-04T03:27:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I would like to build a RAG app, but instead of giving it its own API, I would like to reuse Ollama's existing API so that it works with many existing clients. Is there a way to do that? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1276/timeline | null | completed | false |
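One common way to do what the record above asks is to keep the Ollama chat payload shape intact and only rewrite the message list before forwarding it, so existing Ollama clients keep working unchanged. A minimal sketch of that rewriting step, assuming a stand-in retriever (the function names here are illustrative, not part of Ollama's API):

```python
def augment_messages(messages, retrieve):
    """Prepend retrieved context as a system message, leaving the
    Ollama-style chat payload otherwise untouched."""
    # Use the latest user message as the retrieval query.
    query = next(
        (m["content"] for m in reversed(messages) if m["role"] == "user"), ""
    )
    context = retrieve(query)
    if not context:
        return list(messages)
    system = {"role": "system", "content": "Context:\n" + "\n".join(context)}
    return [system] + list(messages)

# Stand-in retriever; a real app would query a vector store here.
docs = {"ollama": ["Ollama exposes /api/chat and /api/generate."]}

def retrieve(query):
    return [s for key, snippets in docs.items() if key in query.lower()
            for s in snippets]

msgs = [{"role": "user", "content": "How do clients talk to Ollama?"}]
out = augment_messages(msgs, retrieve)
```

A thin HTTP proxy that applies this function and then forwards the body to Ollama's `/api/chat` would look, to any existing client, exactly like Ollama itself.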
https://api.github.com/repos/ollama/ollama/issues/1742 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1742/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1742/comments | https://api.github.com/repos/ollama/ollama/issues/1742/events | https://github.com/ollama/ollama/issues/1742 | 2,060,697,690 | I_kwDOJ0Z1Ps5608Ba | 1,742 | Low VRAM mode? | {
"login": "JumboTortoise",
"id": 117757133,
"node_id": "U_kgDOBwTUzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/117757133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JumboTortoise",
"html_url": "https://github.com/JumboTortoise",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 4 | 2023-12-30T09:51:32 | 2024-07-03T18:12:05 | 2023-12-30T11:15:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have a 12GB RTX 3060 that can easily run 7B models but fails on the larger ones. Does ollama have a low-VRAM mode? Is there any way to move model layers from VRAM to system RAM? I would really like to try out larger LLMs without having to rent a cloud compute server or buy a new GPU, even if it is much slower due to inferen... | {
"login": "JumboTortoise",
"id": 117757133,
"node_id": "U_kgDOBwTUzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/117757133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JumboTortoise",
"html_url": "https://github.com/JumboTortoise",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1742/timeline | null | completed | false |
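For the question in the record above, Ollama can split a model between VRAM and system RAM by capping the number of layers offloaded to the GPU via the `num_gpu` option (also settable as `PARAMETER num_gpu N` in a Modelfile). A sketch of building such a request body — the model name and layer count are arbitrary examples, not recommendations:

```python
import json

def generate_request(model, prompt, gpu_layers):
    """Build an Ollama /api/generate body that caps how many model
    layers are placed in VRAM; the remaining layers run from system
    RAM, which is slower but avoids out-of-memory failures."""
    return {
        "model": model,
        "prompt": prompt,
        "options": {"num_gpu": gpu_layers},
    }

body = generate_request("llama2:13b", "Why is the sky blue?", gpu_layers=20)
payload = json.dumps(body)
```

POSTing `payload` to `http://localhost:11434/api/generate` would then run the first 20 layers on the GPU and the rest on the CPU.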
https://api.github.com/repos/ollama/ollama/issues/2157 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2157/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2157/comments | https://api.github.com/repos/ollama/ollama/issues/2157/events | https://github.com/ollama/ollama/issues/2157 | 2,096,205,436 | I_kwDOJ0Z1Ps588Y58 | 2,157 | Incoherent latency on ARM machine | {
"login": "racso-dev",
"id": 51890236,
"node_id": "MDQ6VXNlcjUxODkwMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/51890236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racso-dev",
"html_url": "https://github.com/racso-dev",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 6 | 2024-01-23T14:24:42 | 2024-01-25T10:13:06 | 2024-01-25T10:13:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I deployed mistral:7b on an ARM instance of Scaleway, with 32 vCPUs and 128GB of memory. I can't figure out why the inference times are on the order of several minutes and was wondering if you had any idea of the cause of the problem, and a potential solution.
For the record, I installed ollama via `curl https://oll... | {
"login": "racso-dev",
"id": 51890236,
"node_id": "MDQ6VXNlcjUxODkwMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/51890236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/racso-dev",
"html_url": "https://github.com/racso-dev",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2157/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8049 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8049/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8049/comments | https://api.github.com/repos/ollama/ollama/issues/8049/events | https://github.com/ollama/ollama/pull/8049 | 2,733,305,728 | PR_kwDOJ0Z1Ps6E4KV2 | 8,049 | Implement OLLAMA_MAX_KEEP_ALIVE environment variable | {
"login": "jhbruhn",
"id": 1036566,
"node_id": "MDQ6VXNlcjEwMzY1NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1036566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhbruhn",
"html_url": "https://github.com/jhbruhn",
"followers_url": "https://api.github.com/users/jhbruhn/... | [] | open | false | null | [] | null | 4 | 2024-12-11T15:34:35 | 2024-12-13T23:53:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8049",
"html_url": "https://github.com/ollama/ollama/pull/8049",
"diff_url": "https://github.com/ollama/ollama/pull/8049.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8049.patch",
"merged_at": null
} | In cooperative server environments, it is desirable to cap the keep_alive value of models to a maximum duration. This enables server administrators to guarantee that models are unloaded at _some_ point if they are not needed anymore.
For this, I propose to introduce the `OLLAMA_MAX_KEEP_ALIVE` environment variable... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8049/timeline | null | null | true |
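The cap proposed in the pull request above amounts to parsing the client-supplied `keep_alive` duration and clamping it to a server-side maximum. A rough illustration in Python — Ollama itself is written in Go and accepts Go-style duration strings and negative values ("keep forever"), so this helper and its unit handling are deliberate simplifications, not the PR's actual implementation:

```python
_UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_duration(value):
    """Parse '30', '5m', '1h' (or a bare number of seconds) into seconds."""
    if isinstance(value, (int, float)):
        return float(value)
    value = value.strip()
    if value and value[-1] in _UNITS:
        return float(value[:-1]) * _UNITS[value[-1]]
    return float(value)

def clamp_keep_alive(requested, max_keep_alive):
    """Honor the client's keep_alive, but never exceed the admin cap."""
    if max_keep_alive is None:
        return parse_duration(requested)
    return min(parse_duration(requested), parse_duration(max_keep_alive))
```

With a hypothetical `OLLAMA_MAX_KEEP_ALIVE=30m`, a client asking for `2h` would get 30 minutes, while a client asking for `5m` keeps its own value.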
https://api.github.com/repos/ollama/ollama/issues/3817 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3817/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3817/comments | https://api.github.com/repos/ollama/ollama/issues/3817/events | https://github.com/ollama/ollama/pull/3817 | 2,255,964,277 | PR_kwDOJ0Z1Ps5tUfnb | 3,817 | docs: Update README for Lobe-chat integration. | {
"login": "MapleEve",
"id": 1845053,
"node_id": "MDQ6VXNlcjE4NDUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1845053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MapleEve",
"html_url": "https://github.com/MapleEve",
"followers_url": "https://api.github.com/users/Maple... | [] | closed | false | null | [] | null | 0 | 2024-04-22T08:49:42 | 2024-04-23T03:19:02 | 2024-04-23T00:18:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3817",
"html_url": "https://github.com/ollama/ollama/pull/3817",
"diff_url": "https://github.com/ollama/ollama/pull/3817.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3817.patch",
"merged_at": "2024-04-23T00:18:16"
} | Added Lobe Chat for Ollama | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3817/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3622 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3622/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3622/comments | https://api.github.com/repos/ollama/ollama/issues/3622/events | https://github.com/ollama/ollama/issues/3622 | 2,241,315,880 | I_kwDOJ0Z1Ps6Fl8Qo | 3,622 | Ollama fails to create models when using IQ quantized GGUFs - Error: invalid file magic | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 29 | 2024-04-13T05:29:51 | 2024-06-01T05:03:19 | 2024-05-23T20:21:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Creating an Ollama model from a standard IQ-quantized GGUF fails with "Error: invalid file magic"
```
ollama create sammcj/zephyr-orpo-141b-A35b-v0.1:IQ3_XS -f Modelfile-IQ3_XS
transferring model data
creating model layer
Error: invalid file magic
```
I've tried with pre-built Ollama... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3622/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3622/timeline | null | completed | false |
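The "invalid file magic" error in the record above means the first four bytes of the file did not match what the parser expects: a GGUF file begins with the ASCII bytes `GGUF`, followed by a little-endian version field. A quick header check one can run on a downloaded quant (a minimal sketch — real GGUF readers validate much more than this):

```python
import struct

GGUF_MAGIC = b"GGUF"

def check_gguf_header(blob):
    """Return the GGUF version if the header looks valid, else raise."""
    if blob[:4] != GGUF_MAGIC:
        raise ValueError("invalid file magic: %r" % blob[:4])
    # The 32-bit little-endian version follows the magic bytes.
    (version,) = struct.unpack_from("<I", blob, 4)
    return version

# Fake 8-byte header: magic + version 3, little-endian.
header = GGUF_MAGIC + struct.pack("<I", 3)
```

Running this against the first 8 bytes of a real file (e.g. `open(path, "rb").read(8)`) quickly distinguishes a corrupt download from a quant type the loader does not yet understand.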
https://api.github.com/repos/ollama/ollama/issues/2428 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2428/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2428/comments | https://api.github.com/repos/ollama/ollama/issues/2428/events | https://github.com/ollama/ollama/pull/2428 | 2,127,317,616 | PR_kwDOJ0Z1Ps5mfbAZ | 2,428 | Add typechat-cli to Terminal apps | {
"login": "anaisbetts",
"id": 1396,
"node_id": "MDQ6VXNlcjEzOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anaisbetts",
"html_url": "https://github.com/anaisbetts",
"followers_url": "https://api.github.com/users/anaisbett... | [] | closed | false | null | [] | null | 0 | 2024-02-09T15:02:09 | 2024-03-25T19:05:04 | 2024-03-25T19:05:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2428",
"html_url": "https://github.com/ollama/ollama/pull/2428",
"diff_url": "https://github.com/ollama/ollama/pull/2428.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2428.patch",
"merged_at": "2024-03-25T19:05:04"
} | [typechat-cli](https://github.com/anaisbetts/typechat-cli) is a CLI interface for the [TypeChat](https://github.com/Microsoft/TypeChat) library; it guarantees that the results returned by your Ollama query are consistent JSON in a structure that you request.
TypeChat doesn't natively support Ollama, but... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2428/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4835 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4835/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4835/comments | https://api.github.com/repos/ollama/ollama/issues/4835/events | https://github.com/ollama/ollama/issues/4835 | 2,336,049,453 | I_kwDOJ0Z1Ps6LPUkt | 4,835 | Feature request: Model metadata should list capabilities | {
"login": "it-s",
"id": 39939,
"node_id": "MDQ6VXNlcjM5OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/39939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/it-s",
"html_url": "https://github.com/it-s",
"followers_url": "https://api.github.com/users/it-s/followers",
"follo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-06-05T14:15:12 | 2024-06-05T14:24:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As the number of different models grows, it would be great to add some sort of `capabilities` property to the /tags response.
This property should contain information about what kinds of input a model can accept:
* since ollama now also includes models that are able to accept and process images (think llava)
* some models can onl... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4835/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7827 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7827/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7827/comments | https://api.github.com/repos/ollama/ollama/issues/7827/events | https://github.com/ollama/ollama/issues/7827 | 2,689,920,561 | I_kwDOJ0Z1Ps6gVO4x | 7,827 | I hope that ollama can optimize the parallel performance of CPU computations? | {
"login": "CarsonJiang",
"id": 22452648,
"node_id": "MDQ6VXNlcjIyNDUyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/22452648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CarsonJiang",
"html_url": "https://github.com/CarsonJiang",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-11-25T09:19:57 | 2024-12-05T10:08:00 | 2024-12-02T15:38:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When serving as the LLM provider for GraphRAG, the 100% single CPU usage is preventing full utilization of multiple GPU resources on the server.

settings.yaml
```
### This config file contains required... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7827/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7478 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7478/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7478/comments | https://api.github.com/repos/ollama/ollama/issues/7478/events | https://github.com/ollama/ollama/issues/7478 | 2,631,123,394 | I_kwDOJ0Z1Ps6c08HC | 7,478 | Issue with Reinstalling Ollama: "Killed" Error on ollama serve | {
"login": "hosein97",
"id": 22956600,
"node_id": "MDQ6VXNlcjIyOTU2NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/22956600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hosein97",
"html_url": "https://github.com/hosein97",
"followers_url": "https://api.github.com/users/hos... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-11-03T10:41:51 | 2024-12-02T14:50:49 | 2024-12-02T14:50:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I was using Ollama without any issues on Linux, but after removing it and reinstalling, I encountered a problem when running the `ollama serve` command. The process fails and returns a "Killed" message, and subsequently the Ollama command seems to be removed from `/usr/bin`.
Upon running t... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7478/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2205 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2205/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2205/comments | https://api.github.com/repos/ollama/ollama/issues/2205/events | https://github.com/ollama/ollama/issues/2205 | 2,101,971,611 | I_kwDOJ0Z1Ps59SYqb | 2,205 | Support additional AVX instruction sets | {
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7700262114,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 20 | 2024-01-26T10:30:51 | 2024-12-26T14:14:12 | 2024-12-10T17:47:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have an Intel CPU that supports a number of AVX features, but most of them are not picked up when using ollama. Below is the llama.log file:
system info: AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2205/timeline | null | completed | false |
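For a report like the one above, a quick way to see which vector extensions the kernel exposes — independent of what the bundled llama.cpp binary was compiled with — is to parse the `flags` line of `/proc/cpuinfo`. A small sketch, shown against a canned sample string so the expected format is visible (on a real Linux box one would pass `open("/proc/cpuinfo").read()` instead):

```python
def cpu_flags(cpuinfo_text):
    """Extract the flag set from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        # x86 uses "flags"; ARM kernels use "Features".
        if line.startswith(("flags", "Features")):
            return set(line.split(":", 1)[1].split())
    return set()

sample = "model name\t: demo\nflags\t\t: fpu sse sse2 avx avx2 f16c fma\n"
flags = cpu_flags(sample)

# Extensions relevant to llama.cpp's optimized kernels.
wanted = {"avx", "avx2", "avx512f", "f16c", "fma"}
supported = sorted(wanted & flags)
```

Comparing this set against the `system info:` line in llama.log shows which hardware features the shipped build is leaving unused.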
https://api.github.com/repos/ollama/ollama/issues/1841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1841/comments | https://api.github.com/repos/ollama/ollama/issues/1841/events | https://github.com/ollama/ollama/pull/1841 | 2,069,134,581 | PR_kwDOJ0Z1Ps5jZ9my | 1,841 | Update README.md - Community Integrations - vscode, Sublime Text, CLI… | {
"login": "icebaker",
"id": 113217272,
"node_id": "U_kgDOBr-O-A",
"avatar_url": "https://avatars.githubusercontent.com/u/113217272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icebaker",
"html_url": "https://github.com/icebaker",
"followers_url": "https://api.github.com/users/icebaker/... | [] | closed | false | null | [] | null | 2 | 2024-01-07T13:39:42 | 2024-09-05T20:24:10 | 2024-09-05T20:24:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1841",
"html_url": "https://github.com/ollama/ollama/pull/1841",
"diff_url": "https://github.com/ollama/ollama/pull/1841.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1841.patch",
"merged_at": null
} | :wave: I have added new integrations for CLI, Ruby, Visual Studio Code, Sublime Text, and Obsidian.
* VSCode Demonstration:
https://github.com/jmorganca/ollama/assets/113217272/e6ba9c62-56d5-401f-8b63-51407d9154bd
* CLI Demonstration:
https://github.com/jmorganca/ollama/assets/113217272/5612653b-c279-4fe7-910... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1841/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4949/comments | https://api.github.com/repos/ollama/ollama/issues/4949/events | https://github.com/ollama/ollama/pull/4949 | 2,342,245,568 | PR_kwDOJ0Z1Ps5x49T7 | 4,949 | Update README.md - Apple Vision Pro | {
"login": "gluonfield",
"id": 5672094,
"node_id": "MDQ6VXNlcjU2NzIwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5672094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gluonfield",
"html_url": "https://github.com/gluonfield",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-06-09T13:10:21 | 2024-09-05T05:30:20 | 2024-09-05T05:30:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4949",
"html_url": "https://github.com/ollama/ollama/pull/4949",
"diff_url": "https://github.com/ollama/ollama/pull/4949.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4949.patch",
"merged_at": "2024-09-05T05:30:19"
} | Added Enchanted with Apple Vision Pro support | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4949/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6944 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6944/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6944/comments | https://api.github.com/repos/ollama/ollama/issues/6944/events | https://github.com/ollama/ollama/issues/6944 | 2,546,641,051 | I_kwDOJ0Z1Ps6Xyqib | 6,944 | How can ollama be packaged as an application module and integrated into my own standalone software? | {
"login": "lymgithub",
"id": 26688670,
"node_id": "MDQ6VXNlcjI2Njg4Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/26688670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lymgithub",
"html_url": "https://github.com/lymgithub",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXU... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-09-25T00:29:50 | 2024-09-30T19:37:26 | 2024-09-25T20:08:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 在windows中现在安装了ollama,也成功部署了千问大模型,如何将现有的环境打包集成到自己的单机软件中呢 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6944/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6559 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6559/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6559/comments | https://api.github.com/repos/ollama/ollama/issues/6559/events | https://github.com/ollama/ollama/pull/6559 | 2,495,082,710 | PR_kwDOJ0Z1Ps553VO7 | 6,559 | Go server command line options support | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-08-29T16:58:25 | 2024-09-03T20:53:55 | 2024-09-03T20:53:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6559",
"html_url": "https://github.com/ollama/ollama/pull/6559",
"diff_url": "https://github.com/ollama/ollama/pull/6559.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6559.patch",
"merged_at": "2024-09-03T20:53:53"
} | Support for command line options for controlling resource usage, such as mlock, mmap, and GPU allocation. In addition, this switches support to more recent LoRA formats. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6559/timeline | null | null | true |
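The merged PR in the row above adds server command line options for mlock, mmap, and GPU allocation. The same resource knobs are exposed to API clients as per-request `options`; `use_mlock`, `use_mmap`, and `num_gpu` are documented Ollama option names. A minimal payload-builder sketch (the model name and values are illustrative):

```python
import json

def build_generate_request(model, prompt, use_mlock=False, use_mmap=True, num_gpu=None):
    """Assemble a /api/generate payload with resource-usage options.

    use_mlock pins the model in RAM, use_mmap controls memory-mapping of
    the weights, and num_gpu caps how many layers are offloaded to the GPU.
    """
    options = {"use_mlock": use_mlock, "use_mmap": use_mmap}
    if num_gpu is not None:
        options["num_gpu"] = num_gpu
    return json.dumps({"model": model, "prompt": prompt, "options": options})

payload = build_generate_request("qwen:0.5b", "Hello", use_mlock=True, num_gpu=20)
```

The payload would be POSTed to `http://localhost:11434/api/generate` on a running server.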
https://api.github.com/repos/ollama/ollama/issues/3967 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3967/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3967/comments | https://api.github.com/repos/ollama/ollama/issues/3967/events | https://github.com/ollama/ollama/issues/3967 | 2,266,568,778 | I_kwDOJ0Z1Ps6HGRhK | 3,967 | The parameter is incorrect | {
"login": "nb001",
"id": 39027571,
"node_id": "MDQ6VXNlcjM5MDI3NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/39027571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nb001",
"html_url": "https://github.com/nb001",
"followers_url": "https://api.github.com/users/nb001/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-04-26T22:15:56 | 2025-01-28T08:22:02 | 2024-05-02T18:22:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
No matter which model I run, I always end up with: "The parameter is incorrect."

![360截图20240427060209](https://github.com/ollama/ollama/assets/39027571/5451517a-adc7-4646-a55d-1e30e3f19... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3967/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2206 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2206/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2206/comments | https://api.github.com/repos/ollama/ollama/issues/2206/events | https://github.com/ollama/ollama/issues/2206 | 2,102,007,528 | I_kwDOJ0Z1Ps59Shbo | 2,206 | Mixtral model issue ? | {
"login": "Nuranto",
"id": 1633463,
"node_id": "MDQ6VXNlcjE2MzM0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1633463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nuranto",
"html_url": "https://github.com/Nuranto",
"followers_url": "https://api.github.com/users/Nuranto/... | [] | closed | false | null | [] | null | 3 | 2024-01-26T10:52:21 | 2024-05-10T01:05:33 | 2024-05-10T01:05:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I wanted to test the mixtral model, so I ran `ollama run mixtral`.
But after I said hello, the model answered:
```
Hello! Welcome to Bra****op.AI. How can I assist you today?
[...]
```
It's as if Ollama had me download some fine-tuned model? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2206/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3434 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3434/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3434/comments | https://api.github.com/repos/ollama/ollama/issues/3434/events | https://github.com/ollama/ollama/issues/3434 | 2,217,867,046 | I_kwDOJ0Z1Ps6EMfcm | 3,434 | Error: pull model manifest | {
"login": "taoxin2988",
"id": 47534353,
"node_id": "MDQ6VXNlcjQ3NTM0MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47534353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taoxin2988",
"html_url": "https://github.com/taoxin2988",
"followers_url": "https://api.github.com/use... | [
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] | closed | false | null | [] | null | 2 | 2024-04-01T09:03:29 | 2024-04-02T09:37:29 | 2024-04-02T03:00:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've tried the methods from the FAQ, but they do not work; any advice or guidance would be appreciated. Here are the commands I executed and the results:
[root@Installer ~]# ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/manifests/0.5b": dial tcp: loo... | {
"login": "taoxin2988",
"id": 47534353,
"node_id": "MDQ6VXNlcjQ3NTM0MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47534353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taoxin2988",
"html_url": "https://github.com/taoxin2988",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3434/timeline | null | completed | false |
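The truncated error in the row above follows Go's net-package phrasing (`dial tcp: loo...`, likely the start of `lookup`), which points at DNS resolution on the client side rather than at the registry. A small diagnostic helper, hypothetical and not part of Ollama, that classifies a pull failure from its error text (the full example error string is illustrative):

```python
def classify_pull_error(message):
    """Roughly classify an `ollama pull` failure from its error text.

    The patterns follow Go's net package error phrasing;
    the categories are illustrative.
    """
    if "dial tcp: lookup" in message or "no such host" in message:
        return "dns"      # hostname could not be resolved (check DNS/proxy)
    if "connection refused" in message:
        return "refused"  # reached the host, but nothing was listening
    if "i/o timeout" in message:
        return "timeout"  # firewall or slow network
    return "unknown"

err = ('pull model manifest: Get "https://registry.ollama.ai/v2/library/qwen/'
       'manifests/0.5b": dial tcp: lookup registry.ollama.ai: no such host')
```

For a "dns" result, checking the machine's resolver and proxy settings is usually the next step.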
https://api.github.com/repos/ollama/ollama/issues/2950 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2950/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2950/comments | https://api.github.com/repos/ollama/ollama/issues/2950/events | https://github.com/ollama/ollama/issues/2950 | 2,171,079,206 | I_kwDOJ0Z1Ps6BaAom | 2,950 | When I access it through the API, ollama crashes with an 'out of memory' error,while I use the gemma-7b model. but it works fine when I use 'ollama run gemma' in Terminal | {
"login": "panp1",
"id": 107453035,
"node_id": "U_kgDOBmeaaw",
"avatar_url": "https://avatars.githubusercontent.com/u/107453035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panp1",
"html_url": "https://github.com/panp1",
"followers_url": "https://api.github.com/users/panp1/followers",
... | [] | closed | false | null | [] | null | 3 | 2024-03-06T09:49:19 | 2024-03-12T01:18:30 | 2024-03-12T01:18:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [GIN] 2024/03/06 - 17:37:31 | 200 | 11.9573683s | ::1 | POST "/api/chat"
[GIN] 2024/03/06 - 17:43:04 | 200 | 11.4739ms | ::1 | GET "/api/tags"
time=2024-03-06T17:43:13.345+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-06T17:43:13.345+08:00 level=INFO ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2950/timeline | null | completed | false |
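In the row above, `ollama run gemma` works from the terminal while the same model goes out of memory through the API. One difference worth checking is the per-request `options`: an API caller can ask for a larger context (`num_ctx`) or more GPU offload (`num_gpu`) than the CLI defaults, which inflates memory use. A hedged sketch of a constrained `/api/chat` payload (the values are illustrative, not recommendations):

```python
import json

def build_chat_request(model, messages, num_ctx=2048, num_gpu=0):
    """Assemble a /api/chat payload with conservative memory options.

    num_ctx bounds the context window; num_gpu=0 forces CPU-only
    inference, trading speed for lower GPU memory pressure.
    """
    return json.dumps({
        "model": model,
        "messages": messages,
        "options": {"num_ctx": num_ctx, "num_gpu": num_gpu},
    })

payload = build_chat_request("gemma:7b", [{"role": "user", "content": "hi"}])
```

Comparing such a request against the server log's reported memory layout can show whether the API path is simply being asked for more than the terminal session was.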