url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4946/comments | https://api.github.com/repos/ollama/ollama/issues/4946/events | https://github.com/ollama/ollama/pull/4946 | 2,342,103,638 | PR_kwDOJ0Z1Ps5x4gqa | 4,946 | Support for tools requests in ollama API | {
"login": "infinity0n3",
"id": 441751,
"node_id": "MDQ6VXNlcjQ0MTc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/441751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/infinity0n3",
"html_url": "https://github.com/infinity0n3",
"followers_url": "https://api.github.com/user... | [] | closed | false | null | [] | null | 4 | 2024-06-09T08:35:10 | 2024-11-24T22:38:45 | 2024-11-24T22:38:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4946",
"html_url": "https://github.com/ollama/ollama/pull/4946",
"diff_url": "https://github.com/ollama/ollama/pull/4946.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4946.patch",
"merged_at": null
} | Ollama API tools/tool_calls support
The tool support would consist of three addons to the ollama core.
1. Adding "tools" to the chat request and "tool_calls" to the chat response
2. Extending the model template to support `.Tools` and `.Results` variables.
3. To support detection of the model responding with a to... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4946/reactions",
"total_count": 27,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 9,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4946/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7413 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7413/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7413/comments | https://api.github.com/repos/ollama/ollama/issues/7413/events | https://github.com/ollama/ollama/issues/7413 | 2,622,702,919 | I_kwDOJ0Z1Ps6cU0VH | 7,413 | Suno-AI Bark | {
"login": "ncamacho97",
"id": 23426639,
"node_id": "MDQ6VXNlcjIzNDI2NjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23426639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncamacho97",
"html_url": "https://github.com/ncamacho97",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-10-30T02:29:28 | 2024-11-04T17:43:56 | 2024-11-04T17:43:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was hoping you guys can please add this model
https://github.com/suno-ai/bark
Bark is a transformer-based text-to-audio model created by [Suno](https://suno.ai/). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The m... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7413/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7413/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5556 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5556/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5556/comments | https://api.github.com/repos/ollama/ollama/issues/5556/events | https://github.com/ollama/ollama/pull/5556 | 2,396,896,576 | PR_kwDOJ0Z1Ps50w1AR | 5,556 | feat: Support Moore Threads GPU | {
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 7 | 2024-07-09T01:32:31 | 2024-11-21T10:32:31 | 2024-11-21T10:32:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5556",
"html_url": "https://github.com/ollama/ollama/pull/5556",
"diff_url": "https://github.com/ollama/ollama/pull/5556.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5556.patch",
"merged_at": null
} | [Moore Threads](https://en.mthreads.com/), a cutting-edge GPU startup, introduces MUSA (Moore Threads Unified System Architecture) as its foundational technology. This pull request marks the initial integration of MTGPU support into Ollama, leveraging MUSA's capabilities to enhance LLM inference performance.
I also ... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5556/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5556/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2808 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2808/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2808/comments | https://api.github.com/repos/ollama/ollama/issues/2808/events | https://github.com/ollama/ollama/pull/2808 | 2,158,888,385 | PR_kwDOJ0Z1Ps5oKvra | 2,808 | Update faq.md: proxy support | {
"login": "NightMachinery",
"id": 36224762,
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NightMachinery",
"html_url": "https://github.com/NightMachinery",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 1 | 2024-02-28T12:44:27 | 2024-05-06T22:09:06 | 2024-05-06T22:09:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2808",
"html_url": "https://github.com/ollama/ollama/pull/2808",
"diff_url": "https://github.com/ollama/ollama/pull/2808.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2808.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2808/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6825 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6825/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6825/comments | https://api.github.com/repos/ollama/ollama/issues/6825/events | https://github.com/ollama/ollama/issues/6825 | 2,528,349,034 | I_kwDOJ0Z1Ps6Ws4tq | 6,825 | LLava:13B Model Outputting ############### After Period of Inactivity | {
"login": "Atharvaaat",
"id": 93177128,
"node_id": "U_kgDOBY3FKA",
"avatar_url": "https://avatars.githubusercontent.com/u/93177128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Atharvaaat",
"html_url": "https://github.com/Atharvaaat",
"followers_url": "https://api.github.com/users/Athar... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-16T12:35:41 | 2025-01-07T23:57:50 | 2025-01-07T23:57:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Description:**
I encountered an issue with the Ollama LLava:13B model where the output was consistently `###############` after a period of inactivity. Restarting the `ollama.service` resolved the issue temporarily, but the root cause remains unclear.
---
**Environment:**
- **Mod... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6825/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2296 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2296/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2296/comments | https://api.github.com/repos/ollama/ollama/issues/2296/events | https://github.com/ollama/ollama/pull/2296 | 2,111,231,112 | PR_kwDOJ0Z1Ps5loaiV | 2,296 | append image tags to user content | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-02-01T00:31:48 | 2024-02-01T21:17:00 | 2024-02-01T21:17:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2296",
"html_url": "https://github.com/ollama/ollama/pull/2296",
"diff_url": "https://github.com/ollama/ollama/pull/2296.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2296.patch",
"merged_at": "2024-02-01T21:16:59"
} | summary of changes:
1. add `[img-x]` to prompt content when there are images. `x` corresponds to the image's id. for generate, this is just the image's index in the Images list. for chat, this is the image's index of among all images in the messages list
2. account for image embedding when trimming the context. ima... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2296/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7171 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7171/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7171/comments | https://api.github.com/repos/ollama/ollama/issues/7171/events | https://github.com/ollama/ollama/issues/7171 | 2,580,997,488 | I_kwDOJ0Z1Ps6Z1uVw | 7,171 | Counting tokens in text before embedding | {
"login": "DewiarQR",
"id": 64423698,
"node_id": "MDQ6VXNlcjY0NDIzNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64423698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DewiarQR",
"html_url": "https://github.com/DewiarQR",
"followers_url": "https://api.github.com/users/Dew... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-10-11T10:14:55 | 2024-12-02T14:37:14 | 2024-12-02T14:37:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When creating a vector database, we use embedding models such as bge-m3. The problem is that if the size of the text sent for vectorization does not fit into the context window of the model, the data is simply lost! and the Ollama project does not have a SINGLE MODEL!!! that would simply calculate tokens in the text be... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7171/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6434 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6434/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6434/comments | https://api.github.com/repos/ollama/ollama/issues/6434/events | https://github.com/ollama/ollama/issues/6434 | 2,474,618,578 | I_kwDOJ0Z1Ps6Tf67S | 6,434 | error loading model "Xiaobu Embedding v2" :error="llama runner process has terminated: signal: segmentation fault (core dumped)" | {
"login": "AAEE86",
"id": 33052466,
"node_id": "MDQ6VXNlcjMzMDUyNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/33052466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AAEE86",
"html_url": "https://github.com/AAEE86",
"followers_url": "https://api.github.com/users/AAEE86/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-08-20T03:03:15 | 2024-09-02T01:06:02 | 2024-09-02T01:06:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama-1 | time=2024-08-20T02:46:33.204Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=25 layers.offload=0 layers.split="" memory.available="[22.2 GiB]" memory.required.full="820.5 MiB" memory.required.partial="0 B" memory.required.kv="48.0 MiB" memory.re... | {
"login": "AAEE86",
"id": 33052466,
"node_id": "MDQ6VXNlcjMzMDUyNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/33052466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AAEE86",
"html_url": "https://github.com/AAEE86",
"followers_url": "https://api.github.com/users/AAEE86/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6434/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5792 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5792/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5792/comments | https://api.github.com/repos/ollama/ollama/issues/5792/events | https://github.com/ollama/ollama/issues/5792 | 2,418,427,797 | I_kwDOJ0Z1Ps6QJkeV | 5,792 | List command optional flag to display license | {
"login": "albertotn",
"id": 12526457,
"node_id": "MDQ6VXNlcjEyNTI2NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/12526457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertotn",
"html_url": "https://github.com/albertotn",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-07-19T09:08:15 | 2024-07-19T09:08:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It could be useful, in corporate environment, for security and compliance, to be able to display with list command, using an optional flag, also type of license, for example:
ollama list -l
display usual information, but also related license name for each model | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5792/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/784 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/784/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/784/comments | https://api.github.com/repos/ollama/ollama/issues/784/events | https://github.com/ollama/ollama/pull/784 | 1,942,651,885 | PR_kwDOJ0Z1Ps5cxlan | 784 | check for newer updates | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-10-13T21:13:04 | 2023-10-13T21:29:47 | 2023-10-13T21:29:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/784",
"html_url": "https://github.com/ollama/ollama/pull/784",
"diff_url": "https://github.com/ollama/ollama/pull/784.diff",
"patch_url": "https://github.com/ollama/ollama/pull/784.patch",
"merged_at": "2023-10-13T21:29:46"
} | Check if there are newer version of ollama updates available. This prevents multiple updates in a row. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/784/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2278 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2278/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2278/comments | https://api.github.com/repos/ollama/ollama/issues/2278/events | https://github.com/ollama/ollama/issues/2278 | 2,108,239,881 | I_kwDOJ0Z1Ps59qTAJ | 2,278 | Add Code Llama 70B model | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 5 | 2024-01-30T16:12:31 | 2024-06-16T13:35:57 | 2024-01-30T19:20:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Code Llama 70B now available
--
"We just released new versions of Code Llama, our LLM for code generation. Code Llama 70B consists of two new 70B parameter base models and one additional instruction fine-tuned model — CodeLlama-70B-Instruct, which achieves the strongest HumanEval performance of any Llama model we’ve ... | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2278/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8615 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8615/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8615/comments | https://api.github.com/repos/ollama/ollama/issues/8615/events | https://github.com/ollama/ollama/issues/8615 | 2,813,973,065 | I_kwDOJ0Z1Ps6nudJJ | 8,615 | [Enhancement] New Cohere models are not validated in `config.json` | {
"login": "sealad886",
"id": 155285242,
"node_id": "U_kgDOCUF2-g",
"avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sealad886",
"html_url": "https://github.com/sealad886",
"followers_url": "https://api.github.com/users/sealad... | [] | closed | false | null | [] | null | 1 | 2025-01-27T20:21:20 | 2025-01-27T20:30:57 | 2025-01-27T20:30:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Cohere [has released](https://docs.cohere.com/v2/v1/docs/models/an-overview-of-coheres-models) several updated versions of old models (i.e. Command-R, Command-Light, and Command-R Plus) plus their embedding and reranker models; they have also release their new `command-r7b`. I note that all of these are available for f... | {
"login": "sealad886",
"id": 155285242,
"node_id": "U_kgDOCUF2-g",
"avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sealad886",
"html_url": "https://github.com/sealad886",
"followers_url": "https://api.github.com/users/sealad... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8615/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/2265 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2265/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2265/comments | https://api.github.com/repos/ollama/ollama/issues/2265/events | https://github.com/ollama/ollama/issues/2265 | 2,106,971,954 | I_kwDOJ0Z1Ps59ldcy | 2,265 | macOS vram layer offloading | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-01-30T05:14:50 | 2024-04-15T19:09:00 | 2024-04-15T19:08:59 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2265/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/159 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/159/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/159/comments | https://api.github.com/repos/ollama/ollama/issues/159/events | https://github.com/ollama/ollama/issues/159 | 1,815,644,048 | I_kwDOJ0Z1Ps5sOIeQ | 159 | Using already downloaded models | {
"login": "kartikwatwani",
"id": 28218177,
"node_id": "MDQ6VXNlcjI4MjE4MTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/28218177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kartikwatwani",
"html_url": "https://github.com/kartikwatwani",
"followers_url": "https://api.githu... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 13 | 2023-07-21T11:26:13 | 2024-01-18T21:18:02 | 2023-08-30T21:31:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I want to use the models I have already downloaded using the link provided via email from Meta which are saved in a specific location on my PC. Is there any way to do that? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/159/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/159/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7835 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7835/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7835/comments | https://api.github.com/repos/ollama/ollama/issues/7835/events | https://github.com/ollama/ollama/pull/7835 | 2,692,544,956 | PR_kwDOJ0Z1Ps6DG1nD | 7,835 | runner.go: Add unit tests for context shifting | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-25T23:07:31 | 2024-11-26T19:21:38 | 2024-11-26T19:21:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7835",
"html_url": "https://github.com/ollama/ollama/pull/7835",
"diff_url": "https://github.com/ollama/ollama/pull/7835.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7835.patch",
"merged_at": "2024-11-26T19:21:35"
} | This also makes it easier to truncate long inputs the same as shifting but does not actually implement it. This type of truncation has a trade off between quality and time to first token. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7835/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/52 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/52/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/52/comments | https://api.github.com/repos/ollama/ollama/issues/52/events | https://github.com/ollama/ollama/pull/52 | 1,792,493,784 | PR_kwDOJ0Z1Ps5U3HIs | 52 | pass model and predict options | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-07T00:12:59 | 2023-07-07T17:59:12 | 2023-07-07T17:59:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/52",
"html_url": "https://github.com/ollama/ollama/pull/52",
"diff_url": "https://github.com/ollama/ollama/pull/52.diff",
"patch_url": "https://github.com/ollama/ollama/pull/52.patch",
"merged_at": "2023-07-07T17:59:11"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/52/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/52/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8501 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8501/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8501/comments | https://api.github.com/repos/ollama/ollama/issues/8501/events | https://github.com/ollama/ollama/issues/8501 | 2,799,142,631 | I_kwDOJ0Z1Ps6m14bn | 8,501 | ollama run llama3-70b pulling very slow | {
"login": "kowshik1234",
"id": 26191169,
"node_id": "MDQ6VXNlcjI2MTkxMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/26191169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kowshik1234",
"html_url": "https://github.com/kowshik1234",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2025-01-20T12:30:16 | 2025-01-21T22:30:28 | 2025-01-21T22:30:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I just downloaded ollama and tried to run the command ` ollama pull llama3:70b-instruct-q2_K`. The download starts at a good speed, and after 4-5 seconds the speed drops drastically to 165kbps or lower. Most of the time it stays at 0B. I tried this with both my home network and office ... | {
"login": "kowshik1234",
"id": 26191169,
"node_id": "MDQ6VXNlcjI2MTkxMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/26191169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kowshik1234",
"html_url": "https://github.com/kowshik1234",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8501/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6744 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6744/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6744/comments | https://api.github.com/repos/ollama/ollama/issues/6744/events | https://github.com/ollama/ollama/pull/6744 | 2,518,831,471 | PR_kwDOJ0Z1Ps57HONC | 6,744 | Polish loganalyzer example | {
"login": "codefromthecrypt",
"id": 64215,
"node_id": "MDQ6VXNlcjY0MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codefromthecrypt",
"html_url": "https://github.com/codefromthecrypt",
"followers_url": "https://api.github... | [] | closed | false | null | [] | null | 0 | 2024-09-11T07:34:57 | 2024-09-12T01:37:38 | 2024-09-12T01:37:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6744",
"html_url": "https://github.com/ollama/ollama/pull/6744",
"diff_url": "https://github.com/ollama/ollama/pull/6744.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6744.patch",
"merged_at": "2024-09-12T01:37:38"
} | Just added a couple commands in case folks aren't familiar with python.
```bash
$ python loganalysis.py logtest.logfile
>>>bash
2023-11-10 07:17:44 192.168.65.1 - - [10/Nov/2023:13:17:43 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Ch... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6744/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1016 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1016/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1016/comments | https://api.github.com/repos/ollama/ollama/issues/1016/events | https://github.com/ollama/ollama/issues/1016 | 1,979,415,308 | I_kwDOJ0Z1Ps51-3sM | 1,016 | Support AMD GPUs on Intel Macs | {
"login": "J0hnny007",
"id": 33027466,
"node_id": "MDQ6VXNlcjMzMDI3NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/33027466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J0hnny007",
"html_url": "https://github.com/J0hnny007",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 104 | 2023-11-06T15:20:01 | 2025-01-30T10:58:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm currently trying out the ollama app on my iMac (i7/Vega64) and I can't seem to get it to use my GPU.
I have tried running it with num_gpu 1 but that generated the warnings below.
`
2023/11/06 16:06:33 llama.go:384: starting llama runner
2023/11/06 16:06:33 llama.go:386: error starting the external llama run... | {
"login": "J0hnny007",
"id": 33027466,
"node_id": "MDQ6VXNlcjMzMDI3NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/33027466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J0hnny007",
"html_url": "https://github.com/J0hnny007",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1016/reactions",
"total_count": 10,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/ollama/ollama/issues/1016/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/1532 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1532/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1532/comments | https://api.github.com/repos/ollama/ollama/issues/1532/events | https://github.com/ollama/ollama/issues/1532 | 2,042,657,541 | I_kwDOJ0Z1Ps55wHsF | 1,532 | [Feature Request] Add Discussion Tab to Ollama Repo | {
"login": "scpedicini",
"id": 2040540,
"node_id": "MDQ6VXNlcjIwNDA1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scpedicini",
"html_url": "https://github.com/scpedicini",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-12-14T23:32:19 | 2024-03-26T16:13:20 | 2024-03-12T21:43:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There's a lot of useful content in the Discord Ollama that is hard to search/organize. Given that the relatively large Ollama discord only has one channel `general` - requests for help/etc can get lost pretty easily.
@jmorganca I think it would be worth adding the Discussions Tab to the Github
![CleanShot 2023-... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1532/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1532/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3681 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3681/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3681/comments | https://api.github.com/repos/ollama/ollama/issues/3681/events | https://github.com/ollama/ollama/pull/3681 | 2,246,809,095 | PR_kwDOJ0Z1Ps5s2DNL | 3,681 | Support unicode characters in model path | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-04-16T20:11:25 | 2024-04-16T21:00:13 | 2024-04-16T21:00:13 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3681",
"html_url": "https://github.com/ollama/ollama/pull/3681",
"diff_url": "https://github.com/ollama/ollama/pull/3681.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3681.patch",
"merged_at": "2024-04-16T21:00:12"
} | When running the c++ subprocess, unicode characters in file names were not being parsed correctly, resulting in an error. This changes `server.cpp` to use `wmain` to receive the wide characters and converts them first.
Closes #3273
Fixes #2888
Fixes #3120 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3681/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1998 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1998/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1998/comments | https://api.github.com/repos/ollama/ollama/issues/1998/events | https://github.com/ollama/ollama/issues/1998 | 2,081,380,751 | I_kwDOJ0Z1Ps58D1mP | 1,998 | ggml-cuda.cu:7850: !"CUDA error" Aborted (core dumped) with 8 GPUs | {
"login": "quanpinjie",
"id": 2564119,
"node_id": "MDQ6VXNlcjI1NjQxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2564119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quanpinjie",
"html_url": "https://github.com/quanpinjie",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 3 | 2024-01-15T07:05:57 | 2024-03-12T18:20:35 | 2024-03-12T18:20:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
Error: Post "http://127.0.0.1:11434/api/generate": EOF
GPU INFO:
![Uploading image.png…]()
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1998/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4651/comments | https://api.github.com/repos/ollama/ollama/issues/4651/events | https://github.com/ollama/ollama/issues/4651 | 2,317,860,277 | I_kwDOJ0Z1Ps6KJ721 | 4,651 | Error When Running Granite Code 20b | {
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-05-26T17:36:56 | 2024-05-29T21:09:44 | 2024-05-29T09:29:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When executing `ollama run granite-code:20b` getting the following error on `linux`:
```bash
Error: llama runner process has terminated: signal: aborted (core dumped)
```
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.1.38 | {
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4651/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4651/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/209 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/209/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/209/comments | https://api.github.com/repos/ollama/ollama/issues/209/events | https://github.com/ollama/ollama/pull/209 | 1,820,618,623 | PR_kwDOJ0Z1Ps5WWUKD | 209 | enable k quants | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-25T15:40:15 | 2023-07-25T18:53:59 | 2023-07-25T18:53:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/209",
"html_url": "https://github.com/ollama/ollama/pull/209",
"diff_url": "https://github.com/ollama/ollama/pull/209.diff",
"patch_url": "https://github.com/ollama/ollama/pull/209.patch",
"merged_at": "2023-07-25T18:53:29"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/209/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3460 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3460/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3460/comments | https://api.github.com/repos/ollama/ollama/issues/3460/events | https://github.com/ollama/ollama/issues/3460 | 2,220,991,527 | I_kwDOJ0Z1Ps6EYaQn | 3,460 | dual GPU 8G/16G - CUDA error: out of memory with dolphin-mixtral | {
"login": "sebastianlau",
"id": 5213667,
"node_id": "MDQ6VXNlcjUyMTM2Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5213667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianlau",
"html_url": "https://github.com/sebastianlau",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-04-02T16:52:16 | 2024-06-03T13:33:29 | 2024-06-03T13:32:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama crashes out entirely with error (throws error, then terminates process)
[CUDA error: out of memory current device: 0, in function alloc at C:\a\ollama\ollama\llm\llama.cpp\ggml-cuda.cu:445 cudaMalloc((void **) &ptr, look_ahead_size) GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\ggml-cu... | {
"login": "sebastianlau",
"id": 5213667,
"node_id": "MDQ6VXNlcjUyMTM2Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5213667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianlau",
"html_url": "https://github.com/sebastianlau",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3460/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1340 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1340/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1340/comments | https://api.github.com/repos/ollama/ollama/issues/1340/events | https://github.com/ollama/ollama/pull/1340 | 2,020,066,876 | PR_kwDOJ0Z1Ps5g2uw2 | 1,340 | Update generate_linux.go | {
"login": "yoshino-s",
"id": 28624661,
"node_id": "MDQ6VXNlcjI4NjI0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28624661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshino-s",
"html_url": "https://github.com/yoshino-s",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-12-01T04:59:06 | 2024-01-18T22:17:26 | 2024-01-18T22:17:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1340",
"html_url": "https://github.com/ollama/ollama/pull/1340",
"diff_url": "https://github.com/ollama/ollama/pull/1340.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1340.patch",
"merged_at": null
} | Previous PR: https://github.com/jmorganca/ollama/pull/985
The issue is still here | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1340/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7151 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7151/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7151/comments | https://api.github.com/repos/ollama/ollama/issues/7151/events | https://github.com/ollama/ollama/issues/7151 | 2,576,137,686 | I_kwDOJ0Z1Ps6ZjL3W | 7,151 | Falcon Mamba Quality Issues | {
"login": "hg0428",
"id": 45984899,
"node_id": "MDQ6VXNlcjQ1OTg0ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/45984899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hg0428",
"html_url": "https://github.com/hg0428",
"followers_url": "https://api.github.com/users/hg0428/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-10-09T14:39:01 | 2024-10-09T15:02:36 | 2024-10-09T15:02:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama has a quality issue when running Falcon Mamba that is not present when using Llama.cpp.
#### Ollama:
```
>>> What is 10*10/10 + 10/10*10?
Let's solve the given equation step by step:
- First, we need to simplify each term in parentheses according to BODMAS rule which stands for Br... | {
"login": "hg0428",
"id": 45984899,
"node_id": "MDQ6VXNlcjQ1OTg0ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/45984899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hg0428",
"html_url": "https://github.com/hg0428",
"followers_url": "https://api.github.com/users/hg0428/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7151/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6955 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6955/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6955/comments | https://api.github.com/repos/ollama/ollama/issues/6955/events | https://github.com/ollama/ollama/issues/6955 | 2,548,183,211 | I_kwDOJ0Z1Ps6X4jCr | 6,955 | nvidia gpu discovery problem in docker container on wsl | {
"login": "Paramjethwa",
"id": 142441855,
"node_id": "U_kgDOCH19fw",
"avatar_url": "https://avatars.githubusercontent.com/u/142441855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Paramjethwa",
"html_url": "https://github.com/Paramjethwa",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-09-25T14:37:45 | 2024-11-05T23:24:56 | 2024-11-05T23:24:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Developed a chat app that lets the user pull a model directly from Streamlit and select it through a dropdown.
I have pulled several models successfully, but when I try to do so with a large model (e.g. Llava) it gives me an asyncio timeout error.
I am using WSL2 (Ubuntu) and running... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6955/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4215 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4215/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4215/comments | https://api.github.com/repos/ollama/ollama/issues/4215/events | https://github.com/ollama/ollama/pull/4215 | 2,281,984,999 | PR_kwDOJ0Z1Ps5usgqt | 4,215 | llm: add minimum based on layer size | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-07T00:05:47 | 2024-05-07T16:26:34 | 2024-05-07T16:26:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4215",
"html_url": "https://github.com/ollama/ollama/pull/4215",
"diff_url": "https://github.com/ollama/ollama/pull/4215.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4215.patch",
"merged_at": "2024-05-07T16:26:33"
} | adjust minimum memory requirements based on the model being loaded and reduce the static minimum | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4215/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5155 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5155/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5155/comments | https://api.github.com/repos/ollama/ollama/issues/5155/events | https://github.com/ollama/ollama/issues/5155 | 2,363,307,735 | I_kwDOJ0Z1Ps6M3TbX | 5,155 | Error when using deepseek-coder-v2 | {
"login": "HeroSong666",
"id": 142960235,
"node_id": "U_kgDOCIVmaw",
"avatar_url": "https://avatars.githubusercontent.com/u/142960235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HeroSong666",
"html_url": "https://github.com/HeroSong666",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-06-20T00:54:34 | 2024-06-20T15:35:20 | 2024-06-20T15:20:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Error when running deepseek-coder-v2:
`
(base) root@fdtech-ai-node08:~# ollama run deepseek-coder-v2
pulling manifest
pulling 5ff0abeeac1d... 94% ▕██████████████████████████████████████████████████ ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5155/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8236 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8236/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8236/comments | https://api.github.com/repos/ollama/ollama/issues/8236/events | https://github.com/ollama/ollama/pull/8236 | 2,758,606,126 | PR_kwDOJ0Z1Ps6GM6x0 | 8,236 | Update README.md | {
"login": "adarshM84",
"id": 95633830,
"node_id": "U_kgDOBbNBpg",
"avatar_url": "https://avatars.githubusercontent.com/u/95633830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adarshM84",
"html_url": "https://github.com/adarshM84",
"followers_url": "https://api.github.com/users/adarshM8... | [] | closed | false | null | [] | null | 2 | 2024-12-25T08:05:46 | 2024-12-27T18:16:06 | 2024-12-27T18:16:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8236",
"html_url": "https://github.com/ollama/ollama/pull/8236",
"diff_url": "https://github.com/ollama/ollama/pull/8236.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8236.patch",
"merged_at": "2024-12-27T18:16:06"
} | New extension contribution | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8236/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5416 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5416/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5416/comments | https://api.github.com/repos/ollama/ollama/issues/5416/events | https://github.com/ollama/ollama/issues/5416 | 2,384,588,027 | I_kwDOJ0Z1Ps6OIez7 | 5,416 | Improve ollama's Output Speed | {
"login": "System233",
"id": 20336040,
"node_id": "MDQ6VXNlcjIwMzM2MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/20336040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/System233",
"html_url": "https://github.com/System233",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5808482718,
"node_id": ... | open | false | null | [] | null | 5 | 2024-07-01T19:30:45 | 2024-10-16T16:19:37 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have noticed that ollama always outputs content at a fixed speed, and most of the time, the GPU is not fully utilized (0% load), indicating that the bottleneck in generating content is not in the GPU.
At the same time, I have observed that during content output, one CPU core consistently has a high load (70% load), ...
"url": "https://api.github.com/repos/ollama/ollama/issues/5416/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5416/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3237 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3237/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3237/comments | https://api.github.com/repos/ollama/ollama/issues/3237/events | https://github.com/ollama/ollama/issues/3237 | 2,194,253,438 | I_kwDOJ0Z1Ps6CyaZ- | 3,237 | Out of memory - GTX 1650 4G | {
"login": "yxl23",
"id": 115678682,
"node_id": "U_kgDOBuUd2g",
"avatar_url": "https://avatars.githubusercontent.com/u/115678682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yxl23",
"html_url": "https://github.com/yxl23",
"followers_url": "https://api.github.com/users/yxl23/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 3 | 2024-03-19T07:41:57 | 2024-04-17T22:53:55 | 2024-04-17T22:53:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:8583
cuMemCreate(&handle, reserve_size, &prop, 0)
GGML_ASSERT: C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
### What did y... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3237/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5646 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5646/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5646/comments | https://api.github.com/repos/ollama/ollama/issues/5646/events | https://github.com/ollama/ollama/pull/5646 | 2,404,823,822 | PR_kwDOJ0Z1Ps51Lkcd | 5,646 | app: also clean up tempdir runners on install | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-12T06:04:05 | 2024-07-12T19:29:26 | 2024-07-12T19:29:24 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5646",
"html_url": "https://github.com/ollama/ollama/pull/5646",
"diff_url": "https://github.com/ollama/ollama/pull/5646.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5646.patch",
"merged_at": "2024-07-12T19:29:23"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5646/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5565 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5565/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5565/comments | https://api.github.com/repos/ollama/ollama/issues/5565/events | https://github.com/ollama/ollama/pull/5565 | 2,397,531,871 | PR_kwDOJ0Z1Ps50zAEj | 5,565 | feat: compatible with openai embedding api | {
"login": "byebyebruce",
"id": 3973693,
"node_id": "MDQ6VXNlcjM5NzM2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3973693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byebyebruce",
"html_url": "https://github.com/byebyebruce",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-07-09T08:37:44 | 2024-07-23T05:11:21 | 2024-07-23T05:11:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5565",
"html_url": "https://github.com/ollama/ollama/pull/5565",
"diff_url": "https://github.com/ollama/ollama/pull/5565.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5565.patch",
"merged_at": null
} | Compatible with openAI embedding api | {
"login": "byebyebruce",
"id": 3973693,
"node_id": "MDQ6VXNlcjM5NzM2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3973693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byebyebruce",
"html_url": "https://github.com/byebyebruce",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5565/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5565/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5776 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5776/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5776/comments | https://api.github.com/repos/ollama/ollama/issues/5776/events | https://github.com/ollama/ollama/issues/5776 | 2,416,959,751 | I_kwDOJ0Z1Ps6QD-EH | 5,776 | Search/Filter by license | {
"login": "leobenkel",
"id": 4960573,
"node_id": "MDQ6VXNlcjQ5NjA1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobenkel",
"html_url": "https://github.com/leobenkel",
"followers_url": "https://api.github.com/users/le... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-07-18T17:14:08 | 2024-07-18T18:09:04 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | First, thanks for all the awesome work !
It would be nice to be able to search / filter by license on this page https://ollama.com/library | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5776/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5560 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5560/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5560/comments | https://api.github.com/repos/ollama/ollama/issues/5560/events | https://github.com/ollama/ollama/pull/5560 | 2,397,097,148 | PR_kwDOJ0Z1Ps50xgYG | 5,560 | server: fix model reloads when setting `OLLAMA_NUM_PARALLEL` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-09T04:36:51 | 2024-07-09T05:32:16 | 2024-07-09T05:32:15 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5560",
"html_url": "https://github.com/ollama/ollama/pull/5560",
"diff_url": "https://github.com/ollama/ollama/pull/5560.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5560.patch",
"merged_at": "2024-07-09T05:32:15"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5560/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6692 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6692/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6692/comments | https://api.github.com/repos/ollama/ollama/issues/6692/events | https://github.com/ollama/ollama/issues/6692 | 2,512,090,648 | I_kwDOJ0Z1Ps6Vu3YY | 6,692 | [Feature request] compatibility with vm balloon ram | {
"login": "Xyz00777",
"id": 52142054,
"node_id": "MDQ6VXNlcjUyMTQyMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/52142054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xyz00777",
"html_url": "https://github.com/Xyz00777",
"followers_url": "https://api.github.com/users/Xyz... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5755339642,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-09-07T21:52:32 | 2024-09-17T18:24:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, it looks like ollama is not compatible with balloon RAM inside of a VM. I wanted to run ollama inside of a VM with balloon RAM, but I realized that when I have ballooning enabled, ollama thinks that there is, as an example, only 5GB of RAM available out of the 15GB it could get, because the rest is not provisioned at the start time when ...
"url": "https://api.github.com/repos/ollama/ollama/issues/6692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6692/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6631 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6631/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6631/comments | https://api.github.com/repos/ollama/ollama/issues/6631/events | https://github.com/ollama/ollama/issues/6631 | 2,504,716,249 | I_kwDOJ0Z1Ps6VSu_Z | 6,631 | Add model Phi3-Vision | {
"login": "asmit203",
"id": 92667287,
"node_id": "U_kgDOBYX9lw",
"avatar_url": "https://avatars.githubusercontent.com/u/92667287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asmit203",
"html_url": "https://github.com/asmit203",
"followers_url": "https://api.github.com/users/asmit203/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-09-04T08:57:28 | 2024-09-04T12:45:32 | 2024-09-04T12:45:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Phi3 Vision weights are open-sourced on Hugging Face:
https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
Hoping to see it in ollama models. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6631/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6631/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1240/comments | https://api.github.com/repos/ollama/ollama/issues/1240/events | https://github.com/ollama/ollama/issues/1240 | 2,006,366,288 | I_kwDOJ0Z1Ps53lrhQ | 1,240 | The DeepSeek-Coder AI model is not loading entirely into RAM, causing the model responses to be very slow. | {
"login": "jveeru",
"id": 26097073,
"node_id": "MDQ6VXNlcjI2MDk3MDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/26097073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jveeru",
"html_url": "https://github.com/jveeru",
"followers_url": "https://api.github.com/users/jveeru/fo... | [] | closed | false | null | [] | null | 5 | 2023-11-22T13:40:28 | 2023-12-19T18:48:51 | 2023-12-19T18:48:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I am using Ollama on a Mac Studio M1 Max with 64GB RAM. I have experimented with different models such as DeepSeek Coder AI 33b, WizardCoder Python 13b, and Mistral 7b text. Most of these models are stored entirely in RAM, except for the DeepSeek Coder model. The 33b model uses less than 4GB of RAM, while Wizard... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1240/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4113 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4113/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4113/comments | https://api.github.com/repos/ollama/ollama/issues/4113/events | https://github.com/ollama/ollama/issues/4113 | 2,276,719,349 | I_kwDOJ0Z1Ps6Hs_r1 | 4,113 | The API has become extremely flaky. It used to be very stable. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-05-03T00:50:03 | 2024-05-03T01:04:20 | 2024-05-03T01:04:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
There was a problem with request to openai API:
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
I guess it is the same as this:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation c... | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4113/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6121 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6121/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6121/comments | https://api.github.com/repos/ollama/ollama/issues/6121/events | https://github.com/ollama/ollama/pull/6121 | 2,442,653,638 | PR_kwDOJ0Z1Ps53IbKT | 6,121 | Speech mod feature | {
"login": "mytechnotalent",
"id": 7095767,
"node_id": "MDQ6VXNlcjcwOTU3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7095767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mytechnotalent",
"html_url": "https://github.com/mytechnotalent",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 1 | 2024-08-01T14:39:33 | 2024-11-21T10:03:09 | 2024-11-21T10:03:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6121",
"html_url": "https://github.com/ollama/ollama/pull/6121",
"diff_url": "https://github.com/ollama/ollama/pull/6121.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6121.patch",
"merged_at": null
} | This PR adds a speech mod. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6121/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4358/comments | https://api.github.com/repos/ollama/ollama/issues/4358/events | https://github.com/ollama/ollama/issues/4358 | 2,290,870,503 | I_kwDOJ0Z1Ps6Ii-jn | 4,358 | No Devices Found on Ryzen 7 8840u | {
"login": "madelponte",
"id": 3129897,
"node_id": "MDQ6VXNlcjMxMjk4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3129897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madelponte",
"html_url": "https://github.com/madelponte",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-05-11T12:17:13 | 2024-05-25T04:07:43 | 2024-05-21T23:18:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I try to load a model I receive this error message:
`Error: llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found`
Here is the docker compose file that I am using to run this:
```
version: '3'
services:
ollama:... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4358/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/829 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/829/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/829/comments | https://api.github.com/repos/ollama/ollama/issues/829/events | https://github.com/ollama/ollama/pull/829 | 1,948,469,981 | PR_kwDOJ0Z1Ps5dEs5j | 829 | added python rag news summary | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-10-17T23:42:28 | 2023-10-21T04:03:17 | 2023-10-21T04:03:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/829",
"html_url": "https://github.com/ollama/ollama/pull/829",
"diff_url": "https://github.com/ollama/ollama/pull/829.diff",
"patch_url": "https://github.com/ollama/ollama/pull/829.patch",
"merged_at": "2023-10-21T04:03:16"
} | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/829/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3965 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3965/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3965/comments | https://api.github.com/repos/ollama/ollama/issues/3965/events | https://github.com/ollama/ollama/issues/3965 | 2,266,561,185 | I_kwDOJ0Z1Ps6HGPqh | 3,965 | Serving Ollama behind a corporate proxy - Not working in Mac M3 | {
"login": "nagarjunr",
"id": 7522071,
"node_id": "MDQ6VXNlcjc1MjIwNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7522071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nagarjunr",
"html_url": "https://github.com/nagarjunr",
"followers_url": "https://api.github.com/users/na... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-26T22:05:49 | 2024-05-01T21:05:44 | 2024-05-01T21:05:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Encountered an error while running ollama behind corporate proxy.
Error:
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma/manifests/2b": dial tcp: lookup registry.ollama.ai: no such host
Tried https://github.com/ollama/ollama/issues/729, But... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3965/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1601 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1601/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1601/comments | https://api.github.com/repos/ollama/ollama/issues/1601/events | https://github.com/ollama/ollama/issues/1601 | 2,048,114,596 | I_kwDOJ0Z1Ps56E7-k | 1,601 | Error: 403 on pulling manifest | {
"login": "honggyukim",
"id": 19642328,
"node_id": "MDQ6VXNlcjE5NjQyMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/19642328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/honggyukim",
"html_url": "https://github.com/honggyukim",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-12-19T07:53:42 | 2023-12-19T21:00:56 | 2023-12-19T17:22:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
Thanks very much for building this great project!
I would like to set up ollama in the internal Linux server of my office but it fails pulling pre-trained models as follows.
```
# installation
$ curl https://ollama.ai/install.sh | sh
# run
$ ollama run llama2
pulling manifest
Error: 403:
```
I'v... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1601/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4352 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4352/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4352/comments | https://api.github.com/repos/ollama/ollama/issues/4352/events | https://github.com/ollama/ollama/issues/4352 | 2,290,802,166 | I_kwDOJ0Z1Ps6Iit32 | 4,352 | How to set the parameters to make the ollama model output more detailed and comprehensive answers? | {
"login": "wangkun199608300514",
"id": 40048517,
"node_id": "MDQ6VXNlcjQwMDQ4NTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/40048517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangkun199608300514",
"html_url": "https://github.com/wangkun199608300514",
"followers_url": ... | [] | open | false | null | [] | null | 2 | 2024-05-11T09:18:51 | 2024-11-21T04:00:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The output content is not rich enough and appears to be very uniform. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4352/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4352/timeline | null | null | false |
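For the question above about making output more detailed and less uniform: Ollama's `/api/generate` endpoint accepts an `options` object carrying runtime sampling parameters. A minimal sketch that builds such a request body (the model name and parameter values are arbitrary examples; the request is constructed but not sent):

```python
import json

def generate_payload(model, prompt, **options):
    """Build a request body for Ollama's /api/generate endpoint.

    Keys in `options` (temperature, top_p, num_predict, repeat_penalty, ...)
    are passed through as runtime sampling parameters; raising temperature
    and top_p generally makes output less uniform, and a larger num_predict
    allows longer answers.
    """
    return {"model": model, "prompt": prompt, "options": options}

body = json.dumps(generate_payload(
    "llama2", "Describe the water cycle in detail.",
    temperature=1.0, top_p=0.95, num_predict=1024))
```

The same `options` keys can also be set per-session in the REPL with `/set parameter`.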
https://api.github.com/repos/ollama/ollama/issues/926 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/926/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/926/comments | https://api.github.com/repos/ollama/ollama/issues/926/events | https://github.com/ollama/ollama/issues/926 | 1,964,760,798 | I_kwDOJ0Z1Ps51G97e | 926 | Unable to push | {
"login": "hemanth",
"id": 18315,
"node_id": "MDQ6VXNlcjE4MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/18315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemanth",
"html_url": "https://github.com/hemanth",
"followers_url": "https://api.github.com/users/hemanth/follower... | [] | closed | false | null | [] | null | 3 | 2023-10-27T04:48:41 | 2023-10-27T05:00:06 | 2023-10-27T04:58:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1349" alt="image" src="https://github.com/jmorganca/ollama/assets/18315/41f9c107-c2a2-4b1b-a9e3-3e4aa70ddcf0">
I have copied the pub keys to my profile, maybe it is a firewall issue?
| {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/926/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5150 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5150/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5150/comments | https://api.github.com/repos/ollama/ollama/issues/5150/events | https://github.com/ollama/ollama/pull/5150 | 2,363,106,472 | PR_kwDOJ0Z1Ps5zALjf | 5,150 | Ngrok yml config | {
"login": "plamen9",
"id": 24589988,
"node_id": "MDQ6VXNlcjI0NTg5OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/24589988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamen9",
"html_url": "https://github.com/plamen9",
"followers_url": "https://api.github.com/users/plamen... | [] | closed | false | null | [] | null | 1 | 2024-06-19T20:51:16 | 2024-09-05T16:56:27 | 2024-09-05T16:56:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5150",
"html_url": "https://github.com/ollama/ollama/pull/5150",
"diff_url": "https://github.com/ollama/ollama/pull/5150.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5150.patch",
"merged_at": null
} | An example added for users who use a YAML config file to start their ngrok tunnels. | {

"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5150/timeline | null | null | true |
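For context on the PR above, a minimal ngrok YAML configuration exposing a local Ollama server might look like the following — a sketch, assuming ngrok's v2 config schema and a placeholder authtoken; the `host_header` rewrite is commonly needed because Ollama expects a local Host header:

```yaml
version: "2"
authtoken: <your-ngrok-authtoken>   # placeholder
tunnels:
  ollama:
    proto: http
    addr: 11434
    host_header: "localhost:11434"
```

The tunnel is then started with `ngrok start ollama`.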
https://api.github.com/repos/ollama/ollama/issues/1593 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1593/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1593/comments | https://api.github.com/repos/ollama/ollama/issues/1593/events | https://github.com/ollama/ollama/issues/1593 | 2,047,788,123 | I_kwDOJ0Z1Ps56DsRb | 1,593 | Fedora 39 Install Failure on NVIDIA Repo | {
"login": "carlowisse",
"id": 17878272,
"node_id": "MDQ6VXNlcjE3ODc4Mjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17878272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carlowisse",
"html_url": "https://github.com/carlowisse",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-12-19T02:10:02 | 2024-05-06T23:36:43 | 2023-12-19T18:45:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am trying to install Ollama on Fedora 39, but the install script fails on the following:
```
>>> Installing ollama to /usr/local/bin...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Installing NVIDIA repository...
Adding repo... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1593/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1515 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1515/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1515/comments | https://api.github.com/repos/ollama/ollama/issues/1515/events | https://github.com/ollama/ollama/issues/1515 | 2,040,866,572 | I_kwDOJ0Z1Ps55pScM | 1,515 | golang 1.20 version not include slices tool package | {
"login": "jjeejj",
"id": 15176971,
"node_id": "MDQ6VXNlcjE1MTc2OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15176971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjeejj",
"html_url": "https://github.com/jjeejj",
"followers_url": "https://api.github.com/users/jjeejj/fo... | [] | closed | false | null | [] | null | 1 | 2023-12-14T04:08:30 | 2023-12-15T19:15:58 | 2023-12-15T19:15:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1515/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5845 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5845/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5845/comments | https://api.github.com/repos/ollama/ollama/issues/5845/events | https://github.com/ollama/ollama/issues/5845 | 2,422,317,658 | I_kwDOJ0Z1Ps6QYaJa | 5,845 | codestral 7b | {
"login": "commitcompanion",
"id": 175923765,
"node_id": "U_kgDOCnxiNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/175923765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/commitcompanion",
"html_url": "https://github.com/commitcompanion",
"followers_url": "https://api.githu... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-07-22T08:52:18 | 2024-07-22T12:42:24 | 2024-07-22T12:42:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please add a 7b version of codestral. It is the best coding model, but the 22b version runs very slowly. | {
"login": "commitcompanion",
"id": 175923765,
"node_id": "U_kgDOCnxiNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/175923765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/commitcompanion",
"html_url": "https://github.com/commitcompanion",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5845/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5845/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/89 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/89/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/89/comments | https://api.github.com/repos/ollama/ollama/issues/89/events | https://github.com/ollama/ollama/issues/89 | 1,808,450,563 | I_kwDOJ0Z1Ps5rysQD | 89 | layer pulling issue when connection drops and comes back | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2023-07-17T19:23:55 | 2023-07-20T05:30:40 | 2023-07-20T05:30:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I ran `ollama run nous-hermes` and it started downloading the model. I quit when I saw I was on wifi, connected wired, and tried again. It hung for a minute and then errored when I asked a question.
```
❯ ./ollama run library/nous-hermes:latest ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/89/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/89/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8387/comments | https://api.github.com/repos/ollama/ollama/issues/8387/events | https://github.com/ollama/ollama/issues/8387 | 2,782,126,283 | I_kwDOJ0Z1Ps6l0-DL | 8,387 | Ollama not completing chat request | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 23 | 2025-01-11T20:05:58 | 2025-01-12T13:35:36 | 2025-01-12T13:35:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
At times, Ollama does not complete chat requests. The client waits and times out. Ollama reports the cancellation in the log and cancels the runner. If streaming, Ollama will continue to stream repeating data and not end the stream. If not streaming, Ollama will never respond to the chat request.... | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8387/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8511 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8511/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8511/comments | https://api.github.com/repos/ollama/ollama/issues/8511/events | https://github.com/ollama/ollama/pull/8511 | 2,800,457,745 | PR_kwDOJ0Z1Ps6Ia3QT | 8,511 | Added golang template string manipulation functions, contains, hasPrefix, hasSuffix, split. | {
"login": "tbiehn",
"id": 184800,
"node_id": "MDQ6VXNlcjE4NDgwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/184800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tbiehn",
"html_url": "https://github.com/tbiehn",
"followers_url": "https://api.github.com/users/tbiehn/follow... | [] | closed | false | null | [] | null | 2 | 2025-01-21T00:26:05 | 2025-01-27T21:22:09 | 2025-01-27T17:56:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8511",
"html_url": "https://github.com/ollama/ollama/pull/8511",
"diff_url": "https://github.com/ollama/ollama/pull/8511.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8511.patch",
"merged_at": null
} | Uncovered and corrected a bug where the Template.Execute block drops custom registered functions. Extended test suite to cover these new template functions.
These have been introduced to support DeepSeek R1 models, their training templates elide old `<think>` blocks as the conversation progresses.
Supporting this us... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8511/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1626 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1626/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1626/comments | https://api.github.com/repos/ollama/ollama/issues/1626/events | https://github.com/ollama/ollama/issues/1626 | 2,050,300,136 | I_kwDOJ0Z1Ps56NRjo | 1,626 | Showing modelfiles on ollama.com | {
"login": "Riezebos",
"id": 22647971,
"node_id": "MDQ6VXNlcjIyNjQ3OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/22647971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Riezebos",
"html_url": "https://github.com/Riezebos",
"followers_url": "https://api.github.com/users/Rie... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 2 | 2023-12-20T10:46:32 | 2024-03-11T18:18:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Similar to docker hub showing the Dockerfile that each docker image was created with, is there a repository of the Modelfiles that models in the Ollama library were created with? If not, what do you think of creating one?
I can run `ollama show <model> --modelfile` but that only works for models I have downloaded. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1626/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1626/timeline | null | null | false |
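As the report above notes, `ollama show <model> --modelfile` only works for pulled models; for those, the same information is also available from the local REST API's `/api/show` endpoint. A sketch using only the standard library (the request is built but not sent, so no server is needed):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default listen address

def show_request(model):
    """Build an HTTP request for Ollama's /api/show endpoint, which
    returns a pulled model's Modelfile, template, and parameters."""
    payload = json.dumps({"name": model}).encode()
    return urllib.request.Request(
        f"{OLLAMA_HOST}/api/show",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = show_request("llama2")
# urllib.request.urlopen(req) would return JSON containing a "modelfile" field
```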
https://api.github.com/repos/ollama/ollama/issues/5946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5946/comments | https://api.github.com/repos/ollama/ollama/issues/5946/events | https://github.com/ollama/ollama/issues/5946 | 2,429,637,872 | I_kwDOJ0Z1Ps6Q0VTw | 5,946 | Text Generation Documentation | {
"login": "Demirrr",
"id": 13405667,
"node_id": "MDQ6VXNlcjEzNDA1NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13405667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Demirrr",
"html_url": "https://github.com/Demirrr",
"followers_url": "https://api.github.com/users/Demirr... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396200,
... | open | false | null | [] | null | 5 | 2024-07-25T10:43:14 | 2024-11-06T00:57:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Dear all,
we ❤️ Ollama. Thank you for this great framework. I wa
There are many parameters for text generation. Many of these parameters overlap with [llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md#common-options), while a few do not, e.g.
1. num_thread
2. repeat_la... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5946/timeline | null | null | false |
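The generation parameters the documentation request above lists can be pinned per-model in a Modelfile. A short sketch — the parameter names follow Ollama's Modelfile documentation, and the values are arbitrary examples:

```
FROM llama3
PARAMETER num_ctx 4096
PARAMETER repeat_last_n 64
PARAMETER repeat_penalty 1.1
PARAMETER temperature 0.8
```

Building from this file with `ollama create mymodel -f Modelfile` applies the parameters to every request against `mymodel` unless overridden per-request.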
https://api.github.com/repos/ollama/ollama/issues/3982 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3982/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3982/comments | https://api.github.com/repos/ollama/ollama/issues/3982/events | https://github.com/ollama/ollama/issues/3982 | 2,267,208,590 | I_kwDOJ0Z1Ps6HItuO | 3,982 | CUDA error while trying to run llama3-8B: out of memory | {
"login": "piotrfila",
"id": 29634290,
"node_id": "MDQ6VXNlcjI5NjM0Mjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29634290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piotrfila",
"html_url": "https://github.com/piotrfila",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-04-27T21:53:48 | 2024-04-28T18:35:47 | 2024-04-28T18:35:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I am trying to run llama3-8B:instruct on 2 * GTX 970 (4GB, CUDA 5.2), no SLI. #1288 led me to believe it should be possible in terms of VRAM requirements (8GB total) and I also have enough RAM (16GB). However, each time I try to run the model the ollama service crashes due to out of ... | {
"login": "piotrfila",
"id": 29634290,
"node_id": "MDQ6VXNlcjI5NjM0Mjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29634290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piotrfila",
"html_url": "https://github.com/piotrfila",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3982/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6856 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6856/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6856/comments | https://api.github.com/repos/ollama/ollama/issues/6856/events | https://github.com/ollama/ollama/issues/6856 | 2,533,518,264 | I_kwDOJ0Z1Ps6XAmu4 | 6,856 | link downloaded model to ollama | {
"login": "Jason-Zhi",
"id": 78072142,
"node_id": "MDQ6VXNlcjc4MDcyMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/78072142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jason-Zhi",
"html_url": "https://github.com/Jason-Zhi",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-09-18T11:49:58 | 2024-12-02T22:57:21 | 2024-12-02T22:57:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any API for linking a downloaded model to ollama from elsewhere? Suppose the model is downloaded before installing ollama, is there any way to link the model to ollama? | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6856/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4380 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4380/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4380/comments | https://api.github.com/repos/ollama/ollama/issues/4380/events | https://github.com/ollama/ollama/pull/4380 | 2,291,453,375 | PR_kwDOJ0Z1Ps5vL7PI | 4,380 | use tokenize/detokenize | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-12T18:45:53 | 2024-05-29T19:01:00 | 2024-05-29T19:01:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4380",
"html_url": "https://github.com/ollama/ollama/pull/4380",
"diff_url": "https://github.com/ollama/ollama/pull/4380.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4380.patch",
"merged_at": "2024-05-29T19:01:00"
} | remove server's infill and system prompt which are unused | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4380/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1053 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1053/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1053/comments | https://api.github.com/repos/ollama/ollama/issues/1053/events | https://github.com/ollama/ollama/issues/1053 | 1,985,028,931 | I_kwDOJ0Z1Ps52USND | 1,053 | Requesting support for basic auth or API key authentication | {
"login": "sebiweise",
"id": 22999200,
"node_id": "MDQ6VXNlcjIyOTk5MjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/22999200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebiweise",
"html_url": "https://github.com/sebiweise",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 20 | 2023-11-09T07:51:37 | 2024-12-23T01:01:03 | 2024-12-23T01:01:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be great to have some sort of authentication in front of the Ollama API. Currently I'm using Nginx Proxy Manager to add an Access List to prevent unauthorized access, but a standard way implemented in Ollama itself would be great for all developers integrating Ollama into their software. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1053/reactions",
"total_count": 56,
"+1": 56,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1053/timeline | null | completed | false |
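Until something is built in, a reverse proxy is the usual workaround for the request above. A minimal nginx sketch putting HTTP basic auth in front of the Ollama port — the hostname and htpasswd path are placeholders:

```nginx
server {
    listen 80;
    server_name ollama.example.internal;  # placeholder hostname

    location / {
        auth_basic           "Ollama API";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd(1)
        proxy_pass           http://127.0.0.1:11434;
        proxy_set_header     Host localhost:11434;  # some setups need a local Host header
    }
}
```

Clients then send an `Authorization: Basic ...` header with every request.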
https://api.github.com/repos/ollama/ollama/issues/7395 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7395/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7395/comments | https://api.github.com/repos/ollama/ollama/issues/7395/events | https://github.com/ollama/ollama/issues/7395 | 2,617,880,525 | I_kwDOJ0Z1Ps6cCa_N | 7,395 | Why is the performance of x/llama3.2-vision for describing images poor? | {
"login": "hosea7456",
"id": 48872586,
"node_id": "MDQ6VXNlcjQ4ODcyNTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/48872586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hosea7456",
"html_url": "https://github.com/hosea7456",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-10-28T09:45:08 | 2024-11-13T22:50:28 | 2024-11-13T22:10:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying x/llama3.2-vision (11b) using ollama-0.4.0-rc5, but the performance is poor compared to the [official version](https://deepinfra.com/meta-llama/Llama-3.2-11B-Vision-Instruct). What causes this result?
**x/llama3.2-vision:**
), the custom system prompt is working fine.
But if I upload a picture at the start of the conversation, the system prompt is not loading, and if I continue the conversation it s... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4431/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3032/comments | https://api.github.com/repos/ollama/ollama/issues/3032/events | https://github.com/ollama/ollama/issues/3032 | 2,177,500,774 | I_kwDOJ0Z1Ps6BygZm | 3,032 | Ollama errors with `msg="Failed to load dynamic library [...]/libext_server.so exception std::bad_alloc` | {
"login": "pythonHuang",
"id": 13172049,
"node_id": "MDQ6VXNlcjEzMTcyMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13172049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pythonHuang",
"html_url": "https://github.com/pythonHuang",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-03-10T03:13:10 | 2024-05-29T19:28:16 | 2024-05-29T19:28:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Error: Unable to load dynamic library: Unable to load dynamic server library: �Ҳ���ָ����ģ�顣
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3032/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3798 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3798/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3798/comments | https://api.github.com/repos/ollama/ollama/issues/3798/events | https://github.com/ollama/ollama/issues/3798 | 2,255,131,543 | I_kwDOJ0Z1Ps6GapOX | 3,798 | Can this library utilize Intel's UHD graphics cards? | {
"login": "Meshwa428",
"id": 135232056,
"node_id": "U_kgDOCA96OA",
"avatar_url": "https://avatars.githubusercontent.com/u/135232056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Meshwa428",
"html_url": "https://github.com/Meshwa428",
"followers_url": "https://api.github.com/users/Meshwa... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677491450,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-04-21T15:24:36 | 2024-08-28T02:52:49 | 2024-08-09T18:37:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | My question is that most models are supported/backed by CUDA to run on NVIDIA's cards, but what about Intel?
Many machines (most probably consumer-grade Intel laptops) still have Intel GPUs (UHD cards), so will ollama add support for them?
Or are Intel GPUs just too hard to use?
Cause to run those big m... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3798/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5190 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5190/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5190/comments | https://api.github.com/repos/ollama/ollama/issues/5190/events | https://github.com/ollama/ollama/pull/5190 | 2,364,802,080 | PR_kwDOJ0Z1Ps5zF8st | 5,190 | Remove Quotes from Parameters in Ollama Show | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 1 | 2024-06-20T16:08:03 | 2024-08-11T22:19:40 | 2024-08-11T22:19:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5190",
"html_url": "https://github.com/ollama/ollama/pull/5190",
"diff_url": "https://github.com/ollama/ollama/pull/5190.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5190.patch",
"merged_at": null
} | Resolves #5183
Before:
<img width="492" alt="Screenshot 2024-06-20 at 9 07 56 AM" src="https://github.com/ollama/ollama/assets/65097070/0c6e8f89-854b-4069-a387-dc191766ee70">
After:
<img width="502" alt="Screenshot 2024-06-20 at 9 07 17 AM" src="https://github.com/ollama/ollama/assets/65097070/f6d69913-1bb1-... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5190/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2887 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2887/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2887/comments | https://api.github.com/repos/ollama/ollama/issues/2887/events | https://github.com/ollama/ollama/issues/2887 | 2,165,053,920 | I_kwDOJ0Z1Ps6BDBng | 2,887 | check default generator for windows | {
"login": "Kreijstal",
"id": 2415206,
"node_id": "MDQ6VXNlcjI0MTUyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2415206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kreijstal",
"html_url": "https://github.com/Kreijstal",
"followers_url": "https://api.github.com/users/Kr... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-03-03T00:49:02 | 2024-03-12T01:37:22 | 2024-03-12T01:37:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I get the error "Generator Ninja does not support platform specification but platform -A x64 was specified" when trying to build ollama. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2887/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2887/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4923 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4923/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4923/comments | https://api.github.com/repos/ollama/ollama/issues/4923/events | https://github.com/ollama/ollama/issues/4923 | 2,341,370,158 | I_kwDOJ0Z1Ps6Ljnku | 4,923 | Is the server that ollama download pulls from open source? Pulling feels slow from within China, and I'd like to set up something similar | {
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-06-08T01:54:45 | 2024-06-09T17:39:05 | 2024-06-09T17:39:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4923/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7001 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7001/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7001/comments | https://api.github.com/repos/ollama/ollama/issues/7001/events | https://github.com/ollama/ollama/pull/7001 | 2,552,548,693 | PR_kwDOJ0Z1Ps585s7_ | 7,001 | cli: pull models without starting server, fixes #3369 | {
"login": "a-h",
"id": 1029947,
"node_id": "MDQ6VXNlcjEwMjk5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-h",
"html_url": "https://github.com/a-h",
"followers_url": "https://api.github.com/users/a-h/followers",
"f... | [] | open | false | null | [] | null | 1 | 2024-09-27T10:12:50 | 2024-10-21T13:57:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7001",
"html_url": "https://github.com/ollama/ollama/pull/7001",
"diff_url": "https://github.com/ollama/ollama/pull/7001.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7001.patch",
"merged_at": null
} | In this change, I've added a new `-local` flag to the `ollama pull` command.
This enables models to be downloaded to the server directories without first starting the server, as per requests at #3369
This change makes it easier to package models for distribution and deployment.
To make the change, I made `reg... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7001/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7001/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1895 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1895/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1895/comments | https://api.github.com/repos/ollama/ollama/issues/1895/events | https://github.com/ollama/ollama/issues/1895 | 2,074,406,391 | I_kwDOJ0Z1Ps57pO33 | 1,895 | `CUDA error 999: unknown error` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-01-10T13:42:29 | 2024-02-19T19:47:23 | 2024-02-19T19:47:23 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
ollama serve
2024/01/10 12:36:43 images.go:808: total blobs: 9
2024/01/10 12:36:43 images.go:815: total unused blobs removed: 0
2024/01/10 12:36:43 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.19)
2024/01/10 12:36:43 shim_ext_server.go:142: Dynamic LLM variants [cuda rocm]
2024/01/10 12:36:43 gpu.... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1895/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4001 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4001/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4001/comments | https://api.github.com/repos/ollama/ollama/issues/4001/events | https://github.com/ollama/ollama/issues/4001 | 2,267,491,450 | I_kwDOJ0Z1Ps6HJyx6 | 4,001 | CORS configuration error blocking authorization in Ollama's OpenAI compatible endpoint | {
"login": "mnixry",
"id": 32300164,
"node_id": "MDQ6VXNlcjMyMzAwMTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32300164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnixry",
"html_url": "https://github.com/mnixry",
"followers_url": "https://api.github.com/users/mnixry/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 1 | 2024-04-28T11:09:27 | 2024-05-08T20:14:01 | 2024-05-08T20:14:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Related: #3571
Ollama's implementation of an OpenAI-compatible endpoint faces a CORS (Cross-Origin Resource Sharing) configuration issue. The preflight OPTIONS response currently includes the headers
```http
HTTP/1.1 204 No Content
Access-Control-Allow-Headers: Origin,Content-Length,... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4001/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4001/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1689 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1689/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1689/comments | https://api.github.com/repos/ollama/ollama/issues/1689/events | https://github.com/ollama/ollama/issues/1689 | 2,054,799,798 | I_kwDOJ0Z1Ps56ecG2 | 1,689 | Documentation suggestion: add small paragraph about quantization in README.md | {
"login": "moDal7",
"id": 97637845,
"node_id": "U_kgDOBdHV1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/97637845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moDal7",
"html_url": "https://github.com/moDal7",
"followers_url": "https://api.github.com/users/moDal7/followers"... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | 1 | 2023-12-23T14:12:50 | 2023-12-27T19:34:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Enhancement / Documentation
I suggest adding a small paragraph explaining the effect of quantization and why someone should or should not download a quantized version of a model.
I feel like it's a fairly technical subject, but it can impact the perceived performance for any kind of user.
I would add it possibly under the Mo... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1689/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1689/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8685 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8685/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8685/comments | https://api.github.com/repos/ollama/ollama/issues/8685/events | https://github.com/ollama/ollama/issues/8685 | 2,819,830,645 | I_kwDOJ0Z1Ps6oEzN1 | 8,685 | Request to change the file location and model path, and also gui | {
"login": "Bostoneary",
"id": 96782219,
"node_id": "U_kgDOBcTHiw",
"avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bostoneary",
"html_url": "https://github.com/Bostoneary",
"followers_url": "https://api.github.com/users/Bosto... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2025-01-30T03:38:48 | 2025-01-30T03:56:41 | 2025-01-30T03:56:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This software is automatically install on the default path in my C disk. And all model is download to specific path in C. However, there is limited space in my disk C, can we change the software install location and the model download path? And it is possible to have a gui of this softeware? Hope this can be better one... | {
"login": "Bostoneary",
"id": 96782219,
"node_id": "U_kgDOBcTHiw",
"avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bostoneary",
"html_url": "https://github.com/Bostoneary",
"followers_url": "https://api.github.com/users/Bosto... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8685/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7878 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7878/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7878/comments | https://api.github.com/repos/ollama/ollama/issues/7878/events | https://github.com/ollama/ollama/pull/7878 | 2,703,829,434 | PR_kwDOJ0Z1Ps6DhtKD | 7,878 | server: add warning message for deprecated context field in /api/generate | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-11-29T04:34:56 | 2024-11-30T22:05:52 | 2024-11-30T22:05:50 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7878",
"html_url": "https://github.com/ollama/ollama/pull/7878",
"diff_url": "https://github.com/ollama/ollama/pull/7878.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7878.patch",
"merged_at": "2024-11-30T22:05:50"
} | The `context` parameter in `/api/generate` has long been superseded by functionality in the `/api/chat` endpoint. This PR adds a deprecation warning to the logs when it is used.
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7878/timeline | null | null | true |
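The deprecation above reflects the move from round-tripping an opaque `context` token array through `/api/generate` to carrying conversation history explicitly as messages in `/api/chat`. A rough client-side sketch of that pattern (the `role`/`content` fields follow the documented chat API; the helper itself is illustrative, not part of Ollama):

```python
# Sketch: instead of the deprecated `context` array, /api/chat carries the
# conversation explicitly as a list of {role, content} messages that the
# client resends in full on each request.
history = []

def add_turn(role: str, content: str) -> list:
    """Append one conversation turn; the whole list is posted each time."""
    history.append({"role": role, "content": content})
    return history

add_turn("user", "Why is the sky blue?")
add_turn("assistant", "Rayleigh scattering.")
add_turn("user", "Summarize that in one word.")
# `history` would be sent as the "messages" field of a /api/chat request body.
```

The explicit list is what makes `context` redundant: the server can rebuild the prompt from the messages without any generate-endpoint state.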
https://api.github.com/repos/ollama/ollama/issues/5060 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5060/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5060/comments | https://api.github.com/repos/ollama/ollama/issues/5060/events | https://github.com/ollama/ollama/issues/5060 | 2,354,736,609 | I_kwDOJ0Z1Ps6MWm3h | 5,060 | request for one useful vision model | {
"login": "OpenSourceCommunityInterface",
"id": 171415123,
"node_id": "U_kgDOCjeWUw",
"avatar_url": "https://avatars.githubusercontent.com/u/171415123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OpenSourceCommunityInterface",
"html_url": "https://github.com/OpenSourceCommunityInterface"... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-06-15T10:44:46 | 2024-06-15T14:35:59 | 2024-06-15T14:35:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | as there is no one useful... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5060/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4590 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4590/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4590/comments | https://api.github.com/repos/ollama/ollama/issues/4590/events | https://github.com/ollama/ollama/issues/4590 | 2,312,979,018 | I_kwDOJ0Z1Ps6J3UJK | 4,590 | How i can understand, that model is not sure with answers ? (Get Entropy/Probas) | {
"login": "antonbugaets",
"id": 64284277,
"node_id": "MDQ6VXNlcjY0Mjg0Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/64284277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonbugaets",
"html_url": "https://github.com/antonbugaets",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 2 | 2024-05-23T13:47:42 | 2024-05-24T19:27:32 | 2024-05-24T19:27:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | i'm aware about temperature, top k/p parameters which i can configure using Ollama as serv. function for models inferencing,
But how do i can understand, that model is not sure about particular answer by my promts? While model inferences with Ollama.
I need to understand this to perform post-processing of 'low qu... | {
"login": "antonbugaets",
"id": 64284277,
"node_id": "MDQ6VXNlcjY0Mjg0Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/64284277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonbugaets",
"html_url": "https://github.com/antonbugaets",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4590/timeline | null | completed | false |
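One standard answer to the uncertainty question in the issue above is the entropy of the next-token distribution: a peaked distribution means the model is confident, a near-uniform one means it is guessing. Ollama does not expose per-token probabilities here (which is exactly what the issue asks for), but once they are available the post-processing step would look roughly like this sketch (the threshold value is an arbitrary illustration, not a recommendation):

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of one next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Peaked distribution -> low entropy: the model is confident.
confident = token_entropy([0.97, 0.01, 0.01, 0.01])
# Uniform distribution -> maximal entropy: the model is guessing.
unsure = token_entropy([0.25, 0.25, 0.25, 0.25])

def flag_low_quality(per_token_probs, threshold=1.5):
    """Flag an answer whose mean per-token entropy exceeds `threshold` bits.

    `threshold` is a hypothetical tuning knob — calibrate it on your own data.
    """
    mean = sum(map(token_entropy, per_token_probs)) / len(per_token_probs)
    return mean > threshold
```

A four-way uniform distribution has exactly 2 bits of entropy, while the peaked one is near 0, so averaging entropies over an answer's tokens gives a simple "is the model unsure?" signal to drive the post-processing the issue describes.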
https://api.github.com/repos/ollama/ollama/issues/830 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/830/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/830/comments | https://api.github.com/repos/ollama/ollama/issues/830/events | https://github.com/ollama/ollama/pull/830 | 1,948,669,271 | PR_kwDOJ0Z1Ps5dFXnw | 830 | Add basic JSON Schema support to the API (converts to GBNF grammar) | {
"login": "Lwrless",
"id": 2044349,
"node_id": "MDQ6VXNlcjIwNDQzNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2044349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lwrless",
"html_url": "https://github.com/Lwrless",
"followers_url": "https://api.github.com/users/Lwrless/... | [] | closed | false | null | [] | null | 15 | 2023-10-18T02:43:45 | 2024-12-05T00:44:28 | 2024-12-05T00:44:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/830",
"html_url": "https://github.com/ollama/ollama/pull/830",
"diff_url": "https://github.com/ollama/ollama/pull/830.diff",
"patch_url": "https://github.com/ollama/ollama/pull/830.patch",
"merged_at": null
} | This PR is based on PR https://github.com/jmorganca/ollama/pull/565 which adds GBNF grammar support.
JSON Schema is now available as an option (`schema`) in the `/api/generate` API. If provided, it will be converted into a GBNF grammar and added to the predict request for llama.cpp. Also, make sure to set the `format` t...
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/830/reactions",
"total_count": 24,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/830/timeline | null | null | true |
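The idea behind the PR above is a mechanical translation: each JSON Schema construct maps to a GBNF production that constrains llama.cpp's sampler to emit only matching JSON. A toy illustration of the mapping follows — it handles only a flat object of string-typed properties and is nothing like the PR's full converter:

```python
def schema_to_gbnf(schema: dict) -> str:
    """Toy JSON Schema -> GBNF sketch: flat objects with string props only.

    Illustrative assumption: every property is emitted in declaration order,
    separated by commas, which is far simpler than a real converter.
    """
    props = list(schema.get("properties", {}))
    # Each property becomes a quoted key literal followed by the string rule.
    fields = ' "," '.join(f'"\\"{name}\\":" string' for name in props)
    return (
        f'root ::= "{{" {fields} "}}"\n'
        'string ::= "\\"" [a-zA-Z0-9 ]* "\\""\n'
    )

grammar = schema_to_gbnf({
    "type": "object",
    "properties": {"name": {"type": "string"}, "city": {"type": "string"}},
})
```

The resulting grammar forces the model to produce `{"name":"...","city":"..."}` and nothing else, which is the behavior the `schema` option aims for via llama.cpp's grammar-constrained sampling.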
https://api.github.com/repos/ollama/ollama/issues/7139 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7139/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7139/comments | https://api.github.com/repos/ollama/ollama/issues/7139/events | https://github.com/ollama/ollama/pull/7139 | 2,573,669,833 | PR_kwDOJ0Z1Ps59-T_q | 7,139 | llama: Decouple patching script from submodule | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-10-08T16:20:58 | 2024-10-17T22:03:12 | 2024-10-17T22:03:09 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7139",
"html_url": "https://github.com/ollama/ollama/pull/7139",
"diff_url": "https://github.com/ollama/ollama/pull/7139.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7139.patch",
"merged_at": "2024-10-17T22:03:09"
} | The current patching script relies on the existing llama.cpp submodule, which we plan to remove in favor of the vendored code soon, so this updates the patching flow to no longer rely on that. It still uses the git commit from the submodule to ensure we don't drift while the submodule is still present. In the commit ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7139/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2435 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2435/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2435/comments | https://api.github.com/repos/ollama/ollama/issues/2435/events | https://github.com/ollama/ollama/pull/2435 | 2,127,968,617 | PR_kwDOJ0Z1Ps5mhoPK | 2,435 | Update domain name references in docs and install script | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-09T23:16:30 | 2024-02-09T23:19:30 | 2024-02-09T23:19:30 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2435",
"html_url": "https://github.com/ollama/ollama/pull/2435",
"diff_url": "https://github.com/ollama/ollama/pull/2435.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2435.patch",
"merged_at": "2024-02-09T23:19:30"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2435/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1213 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1213/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1213/comments | https://api.github.com/repos/ollama/ollama/issues/1213/events | https://github.com/ollama/ollama/pull/1213 | 2,003,078,129 | PR_kwDOJ0Z1Ps5f9QMJ | 1,213 | Remove redundant filename parameter in api request for `ollama create` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2023-11-20T21:54:26 | 2023-11-20T22:05:37 | 2023-11-20T22:05:37 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1213",
"html_url": "https://github.com/ollama/ollama/pull/1213",
"diff_url": "https://github.com/ollama/ollama/pull/1213.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1213.patch",
"merged_at": "2023-11-20T22:05:37"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1213/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2931 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2931/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2931/comments | https://api.github.com/repos/ollama/ollama/issues/2931/events | https://github.com/ollama/ollama/issues/2931 | 2,168,706,364 | I_kwDOJ0Z1Ps6BQ9U8 | 2,931 | Will Ollama support FuncMaster-v0.1-Mistral-7B-Instruct-GGUF? | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-03-05T09:23:17 | 2024-03-12T01:26:14 | 2024-03-12T01:26:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Will Ollama support FuncMaster-v0.1-Mistral-7B-Instruct-GGUF?
https://huggingface.co/allyson-ai/FuncMaster-v0.1-Mistral-7B-Instruct-GGUF | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2931/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2931/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1062 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1062/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1062/comments | https://api.github.com/repos/ollama/ollama/issues/1062/events | https://github.com/ollama/ollama/pull/1062 | 1,986,339,321 | PR_kwDOJ0Z1Ps5fEqzE | 1,062 | Added gptel to list of integrations | {
"login": "nickanderson",
"id": 202896,
"node_id": "MDQ6VXNlcjIwMjg5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/202896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickanderson",
"html_url": "https://github.com/nickanderson",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 0 | 2023-11-09T20:18:02 | 2023-11-09T20:52:37 | 2023-11-09T20:52:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1062",
"html_url": "https://github.com/ollama/ollama/pull/1062",
"diff_url": "https://github.com/ollama/ollama/pull/1062.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1062.patch",
"merged_at": "2023-11-09T20:52:36"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1062/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3203 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3203/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3203/comments | https://api.github.com/repos/ollama/ollama/issues/3203/events | https://github.com/ollama/ollama/issues/3203 | 2,190,927,280 | I_kwDOJ0Z1Ps6CluWw | 3,203 | Allow auth between the Ollama server/client | {
"login": "ftoppi",
"id": 4704016,
"node_id": "MDQ6VXNlcjQ3MDQwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4704016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ftoppi",
"html_url": "https://github.com/ftoppi",
"followers_url": "https://api.github.com/users/ftoppi/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2024-03-17T23:31:40 | 2025-01-05T21:48:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I'm trying to secure the connection between the ollama client and server.
HTTPS works fine:

Basic auth does not work with ollama client:
, and the `keep_alive` option.
First of all, it would be great to be able to load multiple instances of the _same_ model; if I'm not mistaken, model concurrency only works for different models.
Of course loading multiple instances would require a new way of managing ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3902/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3902/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8140 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8140/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8140/comments | https://api.github.com/repos/ollama/ollama/issues/8140/events | https://github.com/ollama/ollama/issues/8140 | 2,745,751,318 | I_kwDOJ0Z1Ps6jqNcW | 8,140 | GGML_ASSERT(i01 >= 0 && i01 < ne01) failed and SIGSEGV had occoured | {
"login": "9suns",
"id": 4477712,
"node_id": "MDQ6VXNlcjQ0Nzc3MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4477712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/9suns",
"html_url": "https://github.com/9suns",
"followers_url": "https://api.github.com/users/9suns/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-17T18:45:07 | 2024-12-18T03:20:05 | 2024-12-18T03:08:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
``` text
Dec 18 02:31:29 ksy ollama[2877902]: time=2024-12-18T02:31:29.468+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:45295"
Dec 18 02:31:29 ksy ollama[2877902]: llama_model_loader: loaded meta data with 23 key-value pairs and 197 tensors from /data/ollama/blobs/sha256-3757... | {
"login": "9suns",
"id": 4477712,
"node_id": "MDQ6VXNlcjQ0Nzc3MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4477712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/9suns",
"html_url": "https://github.com/9suns",
"followers_url": "https://api.github.com/users/9suns/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8140/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7887 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7887/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7887/comments | https://api.github.com/repos/ollama/ollama/issues/7887/events | https://github.com/ollama/ollama/issues/7887 | 2,706,513,655 | I_kwDOJ0Z1Ps6hUh73 | 7,887 | Add tests for openai response logic - potentially refactor middleware | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-11-30T02:45:55 | 2024-11-30T02:46:08 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7887/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1116 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1116/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1116/comments | https://api.github.com/repos/ollama/ollama/issues/1116/events | https://github.com/ollama/ollama/issues/1116 | 1,991,509,510 | I_kwDOJ0Z1Ps52tAYG | 1,116 | vicuna33b not executing on GPU | {
"login": "nshern",
"id": 90867839,
"node_id": "MDQ6VXNlcjkwODY3ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/90867839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nshern",
"html_url": "https://github.com/nshern",
"followers_url": "https://api.github.com/users/nshern/fo... | [] | closed | false | null | [] | null | 2 | 2023-11-13T21:24:09 | 2023-11-14T02:55:46 | 2023-11-14T02:55:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using Debian with an A4000. Vicuna13b will execute on GPU just fine but Vicuna33b will not.
Is this a bug or am I misunderstanding something in the documentation or configuration? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1116/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/631 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/631/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/631/comments | https://api.github.com/repos/ollama/ollama/issues/631/events | https://github.com/ollama/ollama/pull/631 | 1,917,201,385 | PR_kwDOJ0Z1Ps5bbUoB | 631 | Adding mistral 7B to README.md | {
"login": "bhagyas",
"id": 750003,
"node_id": "MDQ6VXNlcjc1MDAwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/750003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhagyas",
"html_url": "https://github.com/bhagyas",
"followers_url": "https://api.github.com/users/bhagyas/fo... | [] | closed | false | null | [] | null | 2 | 2023-09-28T10:39:05 | 2023-09-30T02:59:56 | 2023-09-29T00:36:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/631",
"html_url": "https://github.com/ollama/ollama/pull/631",
"diff_url": "https://github.com/ollama/ollama/pull/631.diff",
"patch_url": "https://github.com/ollama/ollama/pull/631.patch",
"merged_at": null
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/631/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7272 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7272/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7272/comments | https://api.github.com/repos/ollama/ollama/issues/7272/events | https://github.com/ollama/ollama/issues/7272 | 2,599,522,473 | I_kwDOJ0Z1Ps6a8ZCp | 7,272 | multi-part model+safetensors | {
"login": "werruww",
"id": 157249411,
"node_id": "U_kgDOCV9vgw",
"avatar_url": "https://avatars.githubusercontent.com/u/157249411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/werruww",
"html_url": "https://github.com/werruww",
"followers_url": "https://api.github.com/users/werruww/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-10-19T19:33:50 | 2024-10-23T01:36:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do I run a gguf multi-part model on ollaama?
https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GGUF/blob/main/qwen2.5-7b-instruct-fp16-00004-of-00004.gguf
qwen2.5-7b-instruct-fp16-00001-of-00004.gguf
qwen2.5-7b-instruct-fp16-00002-of-00004.gguf
qwen2.5-7b-instruct-fp16-00003-of-00004.gguf
qwen2.5-7b-instruct-... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7272/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7132/comments | https://api.github.com/repos/ollama/ollama/issues/7132/events | https://github.com/ollama/ollama/issues/7132 | 2,572,917,916 | I_kwDOJ0Z1Ps6ZW5yc | 7,132 | Getting Error with OpenAI compatibility | {
"login": "php10xdev",
"id": 179617614,
"node_id": "U_kgDOCrS_Tg",
"avatar_url": "https://avatars.githubusercontent.com/u/179617614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/php10xdev",
"html_url": "https://github.com/php10xdev",
"followers_url": "https://api.github.com/users/php10x... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | null | [] | null | 6 | 2024-10-08T11:27:46 | 2024-12-10T06:21:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```js
import { NextApiRequest } from 'next';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import OpenAI from 'openai';
const openai = new OpenAI({
baseURL: 'http://localhost:11434/v1',
apiKey: 'ollama', // required but unused
});
export async function POST(re... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7132/timeline | null | null | false |
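The OpenAI-compatibility issue above configures a client against `http://localhost:11434/v1` with a placeholder API key. As a minimal sketch of what the equivalent raw request looks like, the following builds (but does not send) the HTTP request; the model name is illustrative, and the request shape assumes Ollama's documented OpenAI-compatible `chat/completions` endpoint:

```python
import json
import urllib.request

# Build the equivalent raw request to Ollama's OpenAI-compatible endpoint.
# The Authorization header is required by OpenAI clients but unused by Ollama.
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps({
        "model": "llama3",  # illustrative: any locally pulled model tag
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ollama",
    },
    method="POST",
)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would require a running Ollama server.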
https://api.github.com/repos/ollama/ollama/issues/5126 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5126/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5126/comments | https://api.github.com/repos/ollama/ollama/issues/5126/events | https://github.com/ollama/ollama/pull/5126 | 2,360,967,020 | PR_kwDOJ0Z1Ps5y4zPl | 5,126 | update message processing | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2024-06-19T00:32:04 | 2024-07-09T16:20:48 | 2024-07-09T16:20:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5126",
"html_url": "https://github.com/ollama/ollama/pull/5126",
"diff_url": "https://github.com/ollama/ollama/pull/5126.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5126.patch",
"merged_at": "2024-07-09T16:20:44"
} | This change updates the way messages are processed before handing off to the LLM. There are a few areas worth mentioning:
1. Messages are now a first-class component of the template. Template rendering will only fall back to the previous iterative template if messages are unsupported by the template. However, new ...
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5126/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8418 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8418/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8418/comments | https://api.github.com/repos/ollama/ollama/issues/8418/events | https://github.com/ollama/ollama/issues/8418 | 2,786,508,663 | I_kwDOJ0Z1Ps6mFr93 | 8,418 | Running OLLAMA_FLASH_ATTENTION=true with LoRA Models Returns: flash_attn is not compatible with LoRA | {
"login": "AXDIGI",
"id": 194899132,
"node_id": "U_kgDOC53svA",
"avatar_url": "https://avatars.githubusercontent.com/u/194899132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AXDIGI",
"html_url": "https://github.com/AXDIGI",
"followers_url": "https://api.github.com/users/AXDIGI/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-14T08:09:59 | 2025-01-14T08:09:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, I use fine-tuned LLMs that use LoRA. When activating `OLLAMA_FLASH_ATTENTION=true ollama serve`, the fine-tuned models do not work; the error received is:
```
llama_lora_adapter_set: flash_attn is not compatible with LoRA
panic: error applying lora from file
```
This error stops the... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8418/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4785 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4785/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4785/comments | https://api.github.com/repos/ollama/ollama/issues/4785/events | https://github.com/ollama/ollama/issues/4785 | 2,329,714,620 | I_kwDOJ0Z1Ps6K3J-8 | 4,785 | ollama save feature | {
"login": "CorollaD",
"id": 28391643,
"node_id": "MDQ6VXNlcjI4MzkxNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28391643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CorollaD",
"html_url": "https://github.com/CorollaD",
"followers_url": "https://api.github.com/users/Cor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-06-02T15:23:30 | 2024-06-07T22:36:05 | 2024-06-07T22:36:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can you add the feature of ollama save as well as docker save that export the local docker image to migration to other envenriment? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4785/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1749 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1749/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1749/comments | https://api.github.com/repos/ollama/ollama/issues/1749/events | https://github.com/ollama/ollama/issues/1749 | 2,060,884,821 | I_kwDOJ0Z1Ps561ptV | 1,749 | The "seed" is not working reliable for me. | {
"login": "oderwat",
"id": 719156,
"node_id": "MDQ6VXNlcjcxOTE1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/719156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oderwat",
"html_url": "https://github.com/oderwat",
"followers_url": "https://api.github.com/users/oderwat/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 19 | 2023-12-30T23:36:58 | 2024-11-13T18:42:53 | 2024-02-20T01:36:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using a seed (int 1) for prompt generation with a mistral model, and it works not reliable. Instead, I get some interesting results with a pattern:
EDIT: It seems like this behavior is independent of the seed choice and the seeds are not working at all?
When I freshly start `ollama serve` and send the exact sa... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1749/timeline | null | completed | false |
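The seed report above hinges on how the request is constructed. As a minimal sketch (model name and prompt are illustrative), the following builds the JSON body for Ollama's `/api/generate` endpoint with a fixed seed and zero temperature, the combination typically needed before output can be expected to repeat:

```python
import json

def build_generate_request(model: str, prompt: str, seed: int) -> str:
    """Build a JSON body for Ollama's /api/generate endpoint with a
    fixed seed and zero temperature for repeatable output."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "seed": seed,       # fixed RNG seed
            "temperature": 0,   # greedy sampling; a seed alone is not enough
        },
    }
    return json.dumps(payload)

body = build_generate_request("mistral", "Why is the sky blue?", 1)
print(body)
```

Note that sampler settings like `temperature` live under `options`, not at the top level of the request.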
https://api.github.com/repos/ollama/ollama/issues/7900 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7900/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7900/comments | https://api.github.com/repos/ollama/ollama/issues/7900/events | https://github.com/ollama/ollama/pull/7900 | 2,708,318,412 | PR_kwDOJ0Z1Ps6DpWD2 | 7,900 | Structured Outputs - Chat Endpoint | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2024-12-01T01:24:40 | 2024-12-08T20:22:32 | 2024-12-05T00:31:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7900",
"html_url": "https://github.com/ollama/ollama/pull/7900",
"diff_url": "https://github.com/ollama/ollama/pull/7900.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7900.patch",
"merged_at": "2024-12-05T00:31:19"
} | ## Structured outputs
A longtime ask from the community: we now support passing in a JSON schema, which is translated to a grammar and used for sampling.
## Why not full grammar support
We gave this a ton of thought, and there are three main points here:
1. Inherent complexity of grammars - Generating a grammar for... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7900/reactions",
"total_count": 53,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 12,
"confused": 0,
"heart": 10,
"rocket": 9,
"eyes": 6
} | https://api.github.com/repos/ollama/ollama/issues/7900/timeline | null | null | true |
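The structured-outputs PR above describes passing a JSON schema that is translated to a grammar server-side. A minimal sketch of such a request body follows, assuming the chat endpoint accepts the schema in its `format` field as the PR describes; the model name and schema are illustrative:

```python
import json

# Illustrative JSON schema constraining the model's reply.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Request body for POST /api/chat; the schema in "format" is
# translated to a grammar and enforced during sampling.
request_body = json.dumps({
    "model": "llama3.2",  # illustrative model tag
    "messages": [{"role": "user", "content": "Tell me about Alice, age 30."}],
    "format": schema,
    "stream": False,
})
print(request_body)
```

With a schema supplied, the server samples only tokens that keep the reply valid under that schema, so the response content should parse as JSON matching it.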
https://api.github.com/repos/ollama/ollama/issues/4668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4668/comments | https://api.github.com/repos/ollama/ollama/issues/4668/events | https://github.com/ollama/ollama/issues/4668 | 2,319,464,602 | I_kwDOJ0Z1Ps6KQDia | 4,668 | Low GPU / High CPU Utilization ==> Slow Performance | {
"login": "tarekeldeeb",
"id": 90985,
"node_id": "MDQ6VXNlcjkwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/90985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarekeldeeb",
"html_url": "https://github.com/tarekeldeeb",
"followers_url": "https://api.github.com/users/tare... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-27T16:07:43 | 2024-05-28T06:57:41 | 2024-05-28T06:57:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama on Ubuntu 22.04 can detect my CUDA GPU and loads the model into its memory, but the processing seems to happen mostly on the CPU. Is this normal behavior? The overall performance is not satisfying, around 1 token per second or so, much slower than human reading speed.
![Screenshot from ... | {
"login": "tarekeldeeb",
"id": 90985,
"node_id": "MDQ6VXNlcjkwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/90985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarekeldeeb",
"html_url": "https://github.com/tarekeldeeb",
"followers_url": "https://api.github.com/users/tare... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3423 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3423/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3423/comments | https://api.github.com/repos/ollama/ollama/issues/3423/events | https://github.com/ollama/ollama/pull/3423 | 2,216,874,483 | PR_kwDOJ0Z1Ps5rPnyx | 3,423 | Community Integration: CRAG Ollama Chat | {
"login": "Nagi-ovo",
"id": 101612750,
"node_id": "U_kgDOBg58zg",
"avatar_url": "https://avatars.githubusercontent.com/u/101612750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nagi-ovo",
"html_url": "https://github.com/Nagi-ovo",
"followers_url": "https://api.github.com/users/Nagi-ovo/... | [] | closed | false | null | [] | null | 0 | 2024-03-31T09:48:09 | 2024-04-01T15:16:14 | 2024-04-01T15:16:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3423",
"html_url": "https://github.com/ollama/ollama/pull/3423",
"diff_url": "https://github.com/ollama/ollama/pull/3423.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3423.patch",
"merged_at": "2024-04-01T15:16:14"
} | Corrective Retrieval Augmented Generation Demo, powered by Langgraph and Streamlit 🤗
Supports:
- Ollama
- OpenAI APIs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3423/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4990 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4990/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4990/comments | https://api.github.com/repos/ollama/ollama/issues/4990/events | https://github.com/ollama/ollama/issues/4990 | 2,347,708,822 | I_kwDOJ0Z1Ps6L7zGW | 4,990 | First value different on CUDA/ROCM when setting `seed` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | open | false | null | [] | null | 1 | 2024-06-12T03:14:30 | 2024-06-16T19:32:16 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This seems to be an issue with the kv cache on Nvidia/AMD GPUs. See https://github.com/ggerganov/llama.cpp/issues/2838
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4990/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3291 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3291/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3291/comments | https://api.github.com/repos/ollama/ollama/issues/3291/events | https://github.com/ollama/ollama/pull/3291 | 2,201,696,949 | PR_kwDOJ0Z1Ps5qchzv | 3,291 | Add Testcontainers into Libraries section | {
"login": "eddumelendez",
"id": 1810547,
"node_id": "MDQ6VXNlcjE4MTA1NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1810547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eddumelendez",
"html_url": "https://github.com/eddumelendez",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 0 | 2024-03-22T04:53:32 | 2024-03-23T19:29:43 | 2024-03-23T18:55:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3291",
"html_url": "https://github.com/ollama/ollama/pull/3291",
"diff_url": "https://github.com/ollama/ollama/pull/3291.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3291.patch",
"merged_at": "2024-03-23T18:55:25"
} | Testcontainers provides a module for Ollama.
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3291/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3693 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3693/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3693/comments | https://api.github.com/repos/ollama/ollama/issues/3693/events | https://github.com/ollama/ollama/issues/3693 | 2,247,517,175 | I_kwDOJ0Z1Ps6F9mP3 | 3,693 | Ollama v0.1.32-rocm throws "CUDA error: out of memory" on AMD GPU with model that worked on v0.1.31-rocm | {
"login": "artem-zinnatullin",
"id": 967132,
"node_id": "MDQ6VXNlcjk2NzEzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/967132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artem-zinnatullin",
"html_url": "https://github.com/artem-zinnatullin",
"followers_url": "https://a... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-04-17T06:49:16 | 2024-06-01T23:42:53 | 2024-06-01T23:42:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I've updated the Docker image `ollama/ollama:0.1.31-rocm` to `0.1.32-rocm` and started experiencing `CUDA error: out of memory` on `mixtral:8x7b` (`7708c059a8bb`) model that worked fine on `0.1.31-rocm`!
```js
CUDA error: out of memory
current device: 0, in function alloc at /go/src/g... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3693/timeline | null | completed | false |