| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/585 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/585/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/585/comments | https://api.github.com/repos/ollama/ollama/issues/585/events | https://github.com/ollama/ollama/pull/585 | 1,910,410,563 | PR_kwDOJ0Z1Ps5bEQ6b | 585 | add the example for ask the mentors | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-09-24T22:59:46 | 2023-10-09T20:58:15 | 2023-10-09T20:58:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/585",
"html_url": "https://github.com/ollama/ollama/pull/585",
"diff_url": "https://github.com/ollama/ollama/pull/585.diff",
"patch_url": "https://github.com/ollama/ollama/pull/585.patch",
"merged_at": "2023-10-09T20:58:14"
} | this is an example that will be used in a blog post about talking to mentors | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/585/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4082 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4082/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4082/comments | https://api.github.com/repos/ollama/ollama/issues/4082/events | https://github.com/ollama/ollama/issues/4082 | 2,273,573,727 | I_kwDOJ0Z1Ps6Hg_tf | 4,082 | Llama3 Tokenizer | {
"login": "Bearsaerker",
"id": 92314812,
"node_id": "U_kgDOBYCcvA",
"avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearsaerker",
"html_url": "https://github.com/Bearsaerker",
"followers_url": "https://api.github.com/users/Be... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-05-01T14:05:15 | 2024-05-01T15:20:23 | 2024-05-01T15:20:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I requantized the llama3 Sauerkraut with the newest release of llama.cpp, which should have fixed the tokenizer, but when I load the model into Ollama, I still get the wrong output, while people using llama.cpp get the right one. So I'd say that there is still something buggy in ollama. Here is the ...
"login": "Bearsaerker",
"id": 92314812,
"node_id": "U_kgDOBYCcvA",
"avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearsaerker",
"html_url": "https://github.com/Bearsaerker",
"followers_url": "https://api.github.com/users/Be... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4082/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4082/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8608 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8608/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8608/comments | https://api.github.com/repos/ollama/ollama/issues/8608/events | https://github.com/ollama/ollama/issues/8608 | 2,812,662,391 | I_kwDOJ0Z1Ps6npdJ3 | 8,608 | Panic while downloading the model | {
"login": "tchaton",
"id": 12861981,
"node_id": "MDQ6VXNlcjEyODYxOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/12861981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchaton",
"html_url": "https://github.com/tchaton",
"followers_url": "https://api.github.com/users/tchato... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 2 | 2025-01-27T10:43:53 | 2025-01-27T16:23:44 | 2025-01-27T16:23:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`/bin/ollama run llama3.1`
<img width="1243" alt="Image" src="https://github.com/user-attachments/assets/0c520af1-52d5-4371-bf89-fac7a9fe94d9" />
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8608/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5387/comments | https://api.github.com/repos/ollama/ollama/issues/5387/events | https://github.com/ollama/ollama/issues/5387 | 2,382,027,700 | I_kwDOJ0Z1Ps6N-tu0 | 5,387 | Intel Integrated Graphics GPU not being utilized when OLLAMA_INTEL_GPU flag is enabled | {
"login": "suncloudsmoon",
"id": 34616349,
"node_id": "MDQ6VXNlcjM0NjE2MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34616349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suncloudsmoon",
"html_url": "https://github.com/suncloudsmoon",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-06-30T00:48:36 | 2024-07-02T21:07:25 | 2024-07-02T21:07:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When the flag 'OLLAMA_INTEL_GPU' is enabled, I expect Ollama to take full advantage of the Intel GPU/iGPU present on the system. However, the Intel iGPU is not utilized at all on my system. My Intel iGPU is Intel Iris Xe Graphics (11th gen).
Logs:
```
C:\Users\ocean>ollama serve
2024/06/29 1... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5387/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3772 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3772/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3772/comments | https://api.github.com/repos/ollama/ollama/issues/3772/events | https://github.com/ollama/ollama/issues/3772 | 2,254,461,402 | I_kwDOJ0Z1Ps6GYFna | 3,772 | Please add a way to specify the installation location on windows :) | {
"login": "Vishwamithra37",
"id": 53423141,
"node_id": "MDQ6VXNlcjUzNDIzMTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/53423141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vishwamithra37",
"html_url": "https://github.com/Vishwamithra37",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-04-20T09:01:44 | 2024-04-24T16:57:51 | 2024-04-24T16:57:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I really want to mention the installation location and my C ddrive is FULL | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3772/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4129 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4129/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4129/comments | https://api.github.com/repos/ollama/ollama/issues/4129/events | https://github.com/ollama/ollama/pull/4129 | 2,277,994,343 | PR_kwDOJ0Z1Ps5ufYxE | 4,129 | Soften timeouts on sched unit tests | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-05-03T16:09:07 | 2024-05-03T18:10:29 | 2024-05-03T18:10:26 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4129",
"html_url": "https://github.com/ollama/ollama/pull/4129",
"diff_url": "https://github.com/ollama/ollama/pull/4129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4129.patch",
"merged_at": "2024-05-03T18:10:26"
} | This gives us more headroom on the scheduler tests to tamp down some flakes. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4129/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2935 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2935/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2935/comments | https://api.github.com/repos/ollama/ollama/issues/2935/events | https://github.com/ollama/ollama/issues/2935 | 2,169,028,892 | I_kwDOJ0Z1Ps6BSMEc | 2,935 | Ollama returns: Error: error loading model when importing a fined-tuned converted and quantized model | {
"login": "FotieMConstant",
"id": 42372656,
"node_id": "MDQ6VXNlcjQyMzcyNjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/42372656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FotieMConstant",
"html_url": "https://github.com/FotieMConstant",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 20 | 2024-03-05T12:04:24 | 2024-05-10T20:25:33 | 2024-05-10T20:25:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi everyone, I am having an issue running a fine-tuned, quantized version of llama2 on ollama. I followed all the steps at: https://github.com/ollama/ollama/blob/main/docs/import.md
However, after quantizing and creating my model on ollama, I can see my model in the list, but when I run it I get this error
```ba... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2935/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2935/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7691 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7691/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7691/comments | https://api.github.com/repos/ollama/ollama/issues/7691/events | https://github.com/ollama/ollama/issues/7691 | 2,662,969,562 | I_kwDOJ0Z1Ps6eubDa | 7,691 | [Docs] Incorrect default value for num_predict? | {
"login": "owboson",
"id": 115831817,
"node_id": "U_kgDOBud0CQ",
"avatar_url": "https://avatars.githubusercontent.com/u/115831817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/owboson",
"html_url": "https://github.com/owboson",
"followers_url": "https://api.github.com/users/owboson/foll... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 1 | 2024-11-15T19:43:53 | 2024-12-03T23:00:06 | 2024-12-03T23:00:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The API documentation (https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) refers to https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values for more information about the parameters that can be specified in the `options` field of a chat completion request... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7691/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/669 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/669/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/669/comments | https://api.github.com/repos/ollama/ollama/issues/669/events | https://github.com/ollama/ollama/issues/669 | 1,921,030,601 | I_kwDOJ0Z1Ps5ygJnJ | 669 | Allow customizing allowed headers in CORS settings | {
"login": "spaceemotion",
"id": 429147,
"node_id": "MDQ6VXNlcjQyOTE0Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/429147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spaceemotion",
"html_url": "https://github.com/spaceemotion",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 17 | 2023-10-01T23:31:21 | 2025-01-26T07:31:48 | 2023-10-28T19:25:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Based on some additional research on an issue I have (https://github.com/jmorganca/ollama/issues/300#issuecomment-1742099347), I am getting the following error in chrome/firefox:
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:11434/api/tags. (Reason: ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/669/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/669/timeline | null | completed | false |
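The error above is a rejected browser preflight. Ollama reads allowed origins from the `OLLAMA_ORIGINS` environment variable, but, as the title notes, the allowed *headers* were not customizable at the time. A minimal workaround sketch (the listen address and header list below are illustrative assumptions, not from the issue): front the server with a small Go reverse proxy that answers preflights itself.

```go
// Minimal sketch of a CORS workaround: a reverse proxy in front of Ollama
// that sets custom headers and answers preflight requests itself. The
// listen address and allowed headers are illustrative assumptions.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	target, err := url.Parse("http://127.0.0.1:11434") // default Ollama address
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent) // answer the preflight here
			return
		}
		proxy.ServeHTTP(w, r) // everything else goes to Ollama
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```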
https://api.github.com/repos/ollama/ollama/issues/7446 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7446/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7446/comments | https://api.github.com/repos/ollama/ollama/issues/7446/events | https://github.com/ollama/ollama/issues/7446 | 2,626,492,019 | I_kwDOJ0Z1Ps6cjRZz | 7,446 | MiniCPM-V 2.6 model crash with error code 500 when using ollama API in golang | {
"login": "FreemanFeng",
"id": 1662126,
"node_id": "MDQ6VXNlcjE2NjIxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1662126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FreemanFeng",
"html_url": "https://github.com/FreemanFeng",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-10-31T10:19:54 | 2024-11-14T21:00:10 | 2024-11-14T21:00:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

Below is the golang code. Set the prompt to "请识别图片" ("please recognize the image") and load the attached image into []byte.
func RunVLM(prompt string, images ...[]byte) (bool, any) {
client, err := api.ClientFromEnviro... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7446/timeline | null | completed | false |
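The snippet in the issue body is truncated, but it uses Ollama's Go API client. For reference, a self-contained sketch of the same call shape — a `GenerateRequest` carrying raw image bytes — assuming a locally reachable server; the model name and image path are placeholders.

```go
// Sketch of sending an image to a vision model through Ollama's Go API
// client, mirroring the truncated RunVLM snippet above. Model name and
// image path are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	img, err := os.ReadFile("test.png") // placeholder image file
	if err != nil {
		log.Fatal(err)
	}
	req := &api.GenerateRequest{
		Model:  "minicpm-v",
		Prompt: "请识别图片", // "please recognize the image"
		Images: []api.ImageData{img},
	}
	// The callback fires once per streamed response chunk.
	err = client.Generate(context.Background(), req, func(r api.GenerateResponse) error {
		fmt.Print(r.Response)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```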
https://api.github.com/repos/ollama/ollama/issues/7549 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7549/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7549/comments | https://api.github.com/repos/ollama/ollama/issues/7549/events | https://github.com/ollama/ollama/issues/7549 | 2,640,492,498 | I_kwDOJ0Z1Ps6dYrfS | 7,549 | ollama_embed issue | {
"login": "Ayush-developer",
"id": 84736562,
"node_id": "MDQ6VXNlcjg0NzM2NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/84736562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ayush-developer",
"html_url": "https://github.com/Ayush-developer",
"followers_url": "https://api... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-11-07T09:59:43 | 2024-11-13T21:33:40 | 2024-11-13T21:33:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
postgres=# SELECT ai.ollama_embed('llama3', 'this is a test');
ERROR: ollama._types.ResponseError: model "llama3" not found, try pulling it first
CONTEXT: Traceback (most recent call last):
PL/Python function "ollama_embed", line 21, in <module>
resp = client.embeddings(model, input_... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7549/timeline | null | completed | false |
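The traceback says it plainly: the model is missing, so pulling it first (`ollama pull llama3` on the host) resolves the error. A sketch of doing the same pull programmatically through Ollama's Go API client, assuming the server is reachable via the usual environment configuration:

```go
// Sketch of pulling the missing model through Ollama's Go API client,
// equivalent to running `ollama pull llama3` on the host.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	req := &api.PullRequest{Model: "llama3"}
	err = client.Pull(context.Background(), req, func(p api.ProgressResponse) error {
		fmt.Println(p.Status) // progress updates while layers download
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```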
https://api.github.com/repos/ollama/ollama/issues/3144 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3144/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3144/comments | https://api.github.com/repos/ollama/ollama/issues/3144/events | https://github.com/ollama/ollama/issues/3144 | 2,186,807,464 | I_kwDOJ0Z1Ps6CWAio | 3,144 | add /metrics endpoint | {
"login": "codearranger",
"id": 80373433,
"node_id": "MDQ6VXNlcjgwMzczNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/80373433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codearranger",
"html_url": "https://github.com/codearranger",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 21 | 2024-03-14T16:39:01 | 2025-01-24T09:59:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be nice of ollama had a /metrics endpoint for collecting metrics for prometheus or other monitoring tools.
https://prometheus.io/docs/guides/go-application/
Some metrics to include might be,
GPU utilization, memory utilization, CPU utilzation, layers used, request counts, etc. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3144/reactions",
"total_count": 47,
"+1": 39,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 8,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3144/timeline | null | null | false |
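The Prometheus guide linked in the request boils down to registering collectors and mounting `promhttp.Handler()`. A minimal sketch of what such an endpoint could look like; the metric name `ollama_request_total` and port `:2112` are illustrative examples, not existing Ollama metrics.

```go
// Minimal sketch of a Prometheus /metrics endpoint in Go, following the
// guide linked above. The metric name and port are illustrative examples,
// not existing Ollama metrics.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "ollama_request_total",
	Help: "Total number of handled requests.",
})

func main() {
	prometheus.MustRegister(requests)

	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		requests.Inc() // count each request so Prometheus can graph it
		w.Write([]byte("ok"))
	})
	http.Handle("/metrics", promhttp.Handler()) // Prometheus scrapes this
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```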
https://api.github.com/repos/ollama/ollama/issues/1752 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1752/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1752/comments | https://api.github.com/repos/ollama/ollama/issues/1752/events | https://github.com/ollama/ollama/issues/1752 | 2,061,132,751 | I_kwDOJ0Z1Ps562mPP | 1,752 | Ollama can run in Docker (hosted in local machine) but not directly in local | {
"login": "Huertas97",
"id": 56938752,
"node_id": "MDQ6VXNlcjU2OTM4NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/56938752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huertas97",
"html_url": "https://github.com/Huertas97",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2023-12-31T18:00:17 | 2024-01-01T11:51:51 | 2024-01-01T11:51:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It is quite strange.
I have deployed the container of ollama and I can access to the bash shell and load models and chat with them. But when I install Ollama in the local system (the same that is running the docker container), when I try to chat with the same model (explored: tinyllama and mistral), it says:
`Err... | {
"login": "Huertas97",
"id": 56938752,
"node_id": "MDQ6VXNlcjU2OTM4NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/56938752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huertas97",
"html_url": "https://github.com/Huertas97",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1752/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7142 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7142/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7142/comments | https://api.github.com/repos/ollama/ollama/issues/7142/events | https://github.com/ollama/ollama/issues/7142 | 2,574,057,976 | I_kwDOJ0Z1Ps6ZbQH4 | 7,142 | Nvidia's brand spanking new model! | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-10-08T19:41:32 | 2024-10-16T01:40:21 | 2024-10-16T01:40:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/nvidia/NVLM-D-72B
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7142/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6258 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6258/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6258/comments | https://api.github.com/repos/ollama/ollama/issues/6258/events | https://github.com/ollama/ollama/pull/6258 | 2,455,676,379 | PR_kwDOJ0Z1Ps531Ldz | 6,258 | server/download.go: Fix a typo in log | {
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-08-08T12:29:37 | 2024-08-10T01:56:24 | 2024-08-10T00:19:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6258",
"html_url": "https://github.com/ollama/ollama/pull/6258",
"diff_url": "https://github.com/ollama/ollama/pull/6258.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6258.patch",
"merged_at": "2024-08-10T00:19:48"
} | null | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6258/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2945 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2945/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2945/comments | https://api.github.com/repos/ollama/ollama/issues/2945/events | https://github.com/ollama/ollama/issues/2945 | 2,170,470,316 | I_kwDOJ0Z1Ps6BXr-s | 2,945 | Error: Post "http://127.0.0.1:11434/api/generate": EOF / CUDA errors when trying to run ollama in terminal | {
"login": "jferments",
"id": 158022198,
"node_id": "U_kgDOCWs6Ng",
"avatar_url": "https://avatars.githubusercontent.com/u/158022198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jferments",
"html_url": "https://github.com/jferments",
"followers_url": "https://api.github.com/users/jferme... | [] | closed | false | null | [] | null | 2 | 2024-03-06T01:53:59 | 2024-03-06T16:21:46 | 2024-03-06T16:21:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using Ollama version 0.1.20 and am getting CUDA errors when trying to run Ollama in terminal or from python scripts. When I try to run these in terminal:
`ollama run mistral`
`ollama run orca-mini`
They fail with the only message being:
`Error: Post "http://127.0.0.1:11434/api/generate": EOF`
These ar... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2945/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3269 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3269/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3269/comments | https://api.github.com/repos/ollama/ollama/issues/3269/events | https://github.com/ollama/ollama/issues/3269 | 2,197,213,593 | I_kwDOJ0Z1Ps6C9tGZ | 3,269 | Error 403 with zrok and other reverse proxies | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 10 | 2024-03-20T10:44:22 | 2024-10-07T06:35:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After updating to 1.29, I lost the ability to expose the ollama instance I have running through ngrok (or analogues like zrok). Ollama returns a 403 response to requests received through a proxy (ngrok) while correctly responding to requests through localhost
### What di... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3269/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3269/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/360 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/360/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/360/comments | https://api.github.com/repos/ollama/ollama/issues/360/events | https://github.com/ollama/ollama/pull/360 | 1,853,723,184 | PR_kwDOJ0Z1Ps5YFyUO | 360 | Fix request copies | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-16T18:32:26 | 2023-08-17T16:58:44 | 2023-08-17T16:58:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/360",
"html_url": "https://github.com/ollama/ollama/pull/360",
"diff_url": "https://github.com/ollama/ollama/pull/360.diff",
"patch_url": "https://github.com/ollama/ollama/pull/360.patch",
"merged_at": "2023-08-17T16:58:43"
} | `makeRequest` makes copies of the request body via bytes.Buffer and bytes.Reader in anticipation of a possible retry. While the memory requirements are negligible for most requests, the copies become significant when pushing a model blob. A sufficiently large model will exhaust all memory on the system causing the proc... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/360/timeline | null | null | true |
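The pattern this PR description implies is general: keep one seekable reader and rewind it before each retry instead of holding duplicate copies of a potentially huge blob. An illustrative sketch of that pattern, not the actual patch:

```go
// Illustrative sketch of the retry pattern the PR description implies:
// keep one seekable reader and rewind it before each attempt, instead of
// holding duplicate copies of a potentially huge blob. Not the actual patch.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func doWithRetry(url string, payload []byte, attempts int) (*http.Response, error) {
	body := bytes.NewReader(payload) // single copy, reused across attempts
	var lastErr error
	for i := 0; i < attempts; i++ {
		if _, err := body.Seek(0, io.SeekStart); err != nil {
			return nil, err
		}
		req, err := http.NewRequest(http.MethodPost, url, body)
		if err != nil {
			return nil, err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	resp, err := doWithRetry("http://example.invalid/upload", []byte("payload"), 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp.Body.Close()
}
```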
https://api.github.com/repos/ollama/ollama/issues/3880 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3880/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3880/comments | https://api.github.com/repos/ollama/ollama/issues/3880/events | https://github.com/ollama/ollama/issues/3880 | 2,261,599,440 | I_kwDOJ0Z1Ps6GzUTQ | 3,880 | when i can use tools? | {
"login": "i-yoyocat",
"id": 17843761,
"node_id": "MDQ6VXNlcjE3ODQzNzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17843761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-yoyocat",
"html_url": "https://github.com/i-yoyocat",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-04-24T15:38:32 | 2024-07-26T00:46:11 | 2024-07-26T00:46:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | when i can use tools in request? do we have a plan?thanks! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3880/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3880/timeline | null | completed | false |
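This issue closed in July 2024, around when tool calling landed in Ollama's `/api/chat` endpoint. A sketch of the request shape, posted as raw JSON so no client-library version is assumed; the weather function and its schema are illustrative examples:

```go
// Sketch of a tool-calling request against /api/chat, posted as raw JSON.
// The weather function and its schema are illustrative examples.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

const payload = `{
  "model": "llama3.1",
  "stream": false,
  "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}`

func main() {
	resp, err := http.Post("http://127.0.0.1:11434/api/chat", "application/json",
		bytes.NewBufferString(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // a tool call shows up under message.tool_calls
}
```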
https://api.github.com/repos/ollama/ollama/issues/4283 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4283/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4283/comments | https://api.github.com/repos/ollama/ollama/issues/4283/events | https://github.com/ollama/ollama/issues/4283 | 2,287,656,393 | I_kwDOJ0Z1Ps6IWt3J | 4,283 | Ollama v0.1.34 Timeout issue on Codellama34B | {
"login": "humza-sami",
"id": 63999516,
"node_id": "MDQ6VXNlcjYzOTk5NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/63999516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/humza-sami",
"html_url": "https://github.com/humza-sami",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-09T13:04:20 | 2024-05-21T23:48:02 | 2024-05-21T23:47:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying to run the Codellama34B model on Ollama v0.1.34 and it keeps giving me a timeout error, although I was able to run codellama70B on this version. Then I rolled back ollama to v0.1.32 and it worked for me. It seems the latest version does not support codellama34B.

}
if estimate.TensorSplit != "" {
params = append(params, "--tensor-split", estimate.TensorSplit)
}
```
### OS
_No response_
### GPU
_No response_
### CP... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5594/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8439 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8439/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8439/comments | https://api.github.com/repos/ollama/ollama/issues/8439/events | https://github.com/ollama/ollama/issues/8439 | 2,789,175,994 | I_kwDOJ0Z1Ps6mP3K6 | 8,439 | Add a service file in /etc/init.d/ to support service start stop restart in self-package containers | {
"login": "SunshineAI0523",
"id": 38200985,
"node_id": "MDQ6VXNlcjM4MjAwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38200985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunshineAI0523",
"html_url": "https://github.com/SunshineAI0523",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2025-01-15T08:44:55 | 2025-01-25T00:39:04 | 2025-01-25T00:39:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Title: Add a service file in /etc/init.d/ to support service start stop restart in self-package containers
Hi, I am looking for a method to install ollama in a container and use service start/stop/restart to manage the ollama backend.
I wrote a sample file in /etc/init.d/ollama to support service start stop restart o...
"login": "SunshineAI0523",
"id": 38200985,
"node_id": "MDQ6VXNlcjM4MjAwOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/38200985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunshineAI0523",
"html_url": "https://github.com/SunshineAI0523",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8439/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7537 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7537/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7537/comments | https://api.github.com/repos/ollama/ollama/issues/7537/events | https://github.com/ollama/ollama/pull/7537 | 2,639,604,282 | PR_kwDOJ0Z1Ps6BH-ms | 7,537 | imageproc mllama refactor | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-11-07T01:22:41 | 2024-12-15T03:50:17 | 2024-12-15T03:50:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7537",
"html_url": "https://github.com/ollama/ollama/pull/7537",
"diff_url": "https://github.com/ollama/ollama/pull/7537.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7537.patch",
"merged_at": "2024-12-15T03:50:15"
} | This change breaks out the image processing routines into a generic module called `models/imageproc` and also creates a new `models/mllama` model which is specific to the mllama vision processing. There are a few other minor changes, such as:
* Preprocess() now takes an io.Reader instead of sending the byte slice
*... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7537/timeline | null | null | true |
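The first listed change — `Preprocess()` taking an `io.Reader` instead of a byte slice — follows a common Go pattern: a reader lets callers stream from a file or HTTP body without materializing an extra copy. An illustrative sketch of such a signature; the function body is hypothetical, not the PR's code:

```go
// Illustrative sketch of the io.Reader-based Preprocess signature mentioned
// above: decoding happens directly from the stream, so callers can pass an
// *os.File, an HTTP body, or a bytes.Reader without an intermediate []byte.
// The body is hypothetical, not the PR's actual implementation.
package imageproc

import (
	"image"
	"io"

	_ "image/jpeg" // register decoders used by image.Decode
	_ "image/png"
)

func Preprocess(r io.Reader) (image.Image, error) {
	img, _, err := image.Decode(r)
	if err != nil {
		return nil, err
	}
	// Model-specific resizing/normalization would follow here.
	return img, nil
}
```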
https://api.github.com/repos/ollama/ollama/issues/2354 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2354/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2354/comments | https://api.github.com/repos/ollama/ollama/issues/2354/events | https://github.com/ollama/ollama/pull/2354 | 2,117,299,887 | PR_kwDOJ0Z1Ps5l9Oxw | 2,354 | reliably determine available VRAM on macOS (resolves #1826, #2370) | {
"login": "peanut256",
"id": 13474248,
"node_id": "MDQ6VXNlcjEzNDc0MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13474248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peanut256",
"html_url": "https://github.com/peanut256",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 4 | 2024-02-04T20:54:11 | 2024-02-25T23:16:45 | 2024-02-25T23:16:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2354",
"html_url": "https://github.com/ollama/ollama/pull/2354",
"diff_url": "https://github.com/ollama/ollama/pull/2354.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2354.patch",
"merged_at": "2024-02-25T23:16:45"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2354/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3727 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3727/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3727/comments | https://api.github.com/repos/ollama/ollama/issues/3727/events | https://github.com/ollama/ollama/issues/3727 | 2,249,819,412 | I_kwDOJ0Z1Ps6GGYUU | 3,727 | Unable to load default model context length num_ctx for embedding | {
"login": "Kanishk-Kumar",
"id": 45518770,
"node_id": "MDQ6VXNlcjQ1NTE4Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/45518770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kanishk-Kumar",
"html_url": "https://github.com/Kanishk-Kumar",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-04-18T06:00:00 | 2024-05-17T16:09:30 | 2024-05-16T21:57:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is the code I tried:
```
from ollama import Client
def generate_embedding(prompt: str):
r"""
Add this to utils later.
"""
client = Client(host="http://localhost:11434")
response = client.embeddings(
model="nomic-embed-text:latest",
prompt=prompt... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3727/timeline | null | completed | false |
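The Python snippet in the issue never sets `num_ctx`, so the model's default context length applies; per-request overrides go in the `options` map. A sketch of the equivalent call through Ollama's Go API client, with the model name taken from the issue and the prompt as a placeholder:

```go
// Sketch of overriding num_ctx per request via the options map, the same
// knob the Python client above exposes. Model name is taken from the issue;
// the prompt is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	req := &api.EmbeddingRequest{
		Model:  "nomic-embed-text:latest",
		Prompt: "some long document",
		Options: map[string]interface{}{
			"num_ctx": 8192, // raise the context window for this request only
		},
	}
	resp, err := client.Embeddings(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(resp.Embedding)) // embedding dimensionality
}
```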
https://api.github.com/repos/ollama/ollama/issues/6704 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6704/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6704/comments | https://api.github.com/repos/ollama/ollama/issues/6704/events | https://github.com/ollama/ollama/issues/6704 | 2,512,745,969 | I_kwDOJ0Z1Ps6VxXXx | 6,704 | ollama model not support tool calling | {
"login": "sunshine19870316",
"id": 165765929,
"node_id": "U_kgDOCeFjKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/165765929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunshine19870316",
"html_url": "https://github.com/sunshine19870316",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 9 | 2024-09-09T02:12:10 | 2024-09-11T06:43:18 | 2024-09-11T06:43:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I use an ollama model in the langgraph multi-agent SupervisorAgent framework.
When I use an API LLM, that is, with an actual key and url, it runs successfully, but after switching to the ollama server, it can't call tools.
My code:
def get_qwen7b():
model = ChatOpenAI(
model_name="qwen2:7b",
openai_api_base="http://lo... | {
"login": "sunshine19870316",
"id": 165765929,
"node_id": "U_kgDOCeFjKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/165765929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunshine19870316",
"html_url": "https://github.com/sunshine19870316",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6704/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7286 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7286/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7286/comments | https://api.github.com/repos/ollama/ollama/issues/7286/events | https://github.com/ollama/ollama/issues/7286 | 2,601,574,699 | I_kwDOJ0Z1Ps6bEOEr | 7,286 | httpcore.ConnectError: [WinError 10061] | {
"login": "RXZAN",
"id": 176294975,
"node_id": "U_kgDOCoIMPw",
"avatar_url": "https://avatars.githubusercontent.com/u/176294975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RXZAN",
"html_url": "https://github.com/RXZAN",
"followers_url": "https://api.github.com/users/RXZAN/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706485628,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1ejfA... | closed | false | null | [] | null | 8 | 2024-10-21T07:07:53 | 2024-11-06T11:10:24 | 2024-11-06T11:10:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm running the ollama service on the server side.
There was a problem running this piece of code on the local machine, which does not have ollama installed.
**My code:**
import os
os.environ["USER_AGENT"] = "MyCustomUserAgent/1.0"
os.environ['OLLAMA_API_KEY'] = 'none'
os.environ['OLLAMA_BASE_URL'] = 'http:/... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7286/timeline | null | completed | false |
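`WinError 10061` is a refused TCP connection: the client is dialing a host where nothing is listening, typically localhost instead of the remote server. The Go and Python clients both resolve their target from `OLLAMA_HOST`; a sketch of pointing at a remote server and pinging it before issuing requests, with a placeholder address:

```go
// Sketch: WinError 10061 means the client dialed a host where nothing is
// listening. Point the client at the remote server via OLLAMA_HOST (the
// address below is a placeholder) and ping it before issuing requests.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/ollama/ollama/api"
)

func main() {
	os.Setenv("OLLAMA_HOST", "http://192.0.2.10:11434") // placeholder remote server
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	if err := client.Heartbeat(context.Background()); err != nil {
		log.Fatalf("server unreachable: %v", err)
	}
	fmt.Println("ollama server is reachable")
}
```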
https://api.github.com/repos/ollama/ollama/issues/3353 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3353/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3353/comments | https://api.github.com/repos/ollama/ollama/issues/3353/events | https://github.com/ollama/ollama/pull/3353 | 2,207,101,188 | PR_kwDOJ0Z1Ps5qus3- | 3,353 | Use Rocky Linux Vault to get GCC 10.2 installed | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-26T02:20:02 | 2024-03-26T02:38:59 | 2024-03-26T02:38:56 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3353",
"html_url": "https://github.com/ollama/ollama/pull/3353",
"diff_url": "https://github.com/ollama/ollama/pull/3353.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3353.patch",
"merged_at": "2024-03-26T02:38:56"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3353/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6933/comments | https://api.github.com/repos/ollama/ollama/issues/6933/events | https://github.com/ollama/ollama/issues/6933 | 2,545,331,560 | I_kwDOJ0Z1Ps6Xtq1o | 6,933 | RTX A3000 GPU not being utilized for small LLMs | {
"login": "scotgopal",
"id": 76937732,
"node_id": "MDQ6VXNlcjc2OTM3NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/76937732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scotgopal",
"html_url": "https://github.com/scotgopal",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-09-24T12:55:36 | 2024-09-25T19:07:35 | 2024-09-25T19:07:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi there. I am using Ollama from Docker, and I've already made sure that the GPU is available inside the container by using `nvidia-smi`:
```shell
root@802f556c99c8:/# nvidia-smi
Tue Sep 24 12:52:58 2024
+---------------------------------------------------------------------------------... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6933/timeline | null | completed | false |
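For reports like issue 6933 above, a quick way to see where Ollama actually placed a loaded model is `ollama ps`. A hedged sketch; the model name is an assumption:

```bash
# Load a small model, then inspect its placement.
ollama run llama3.2:1b "hello" > /dev/null
ollama ps
# The PROCESSOR column reports e.g. "100% GPU" or "100% CPU"; a CPU-only
# entry confirms the scheduler skipped the GPU, and the server log says why.
```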
https://api.github.com/repos/ollama/ollama/issues/7659 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7659/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7659/comments | https://api.github.com/repos/ollama/ollama/issues/7659/events | https://github.com/ollama/ollama/pull/7659 | 2,657,222,381 | PR_kwDOJ0Z1Ps6B2y6c | 7,659 | runner.go: Don't trim whitespace from inputs | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-14T01:02:04 | 2024-11-14T19:23:09 | 2024-11-14T19:23:07 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7659",
"html_url": "https://github.com/ollama/ollama/pull/7659",
"diff_url": "https://github.com/ollama/ollama/pull/7659.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7659.patch",
"merged_at": "2024-11-14T19:23:07"
} | It's possible to get prompts that consist entirely of whitespace - this is most likely to happen when generating embeddings. Currently, we will trim this away, leaving an empty prompt, which will then generate an error.
Generating embeddings from whitespace should not trigger an error, as this may break pipelines. I... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7659/timeline | null | null | true |
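PR 7659 above concerns whitespace-only embedding inputs. A minimal reproduction against the embeddings endpoint, with the model name as an assumption:

```bash
# Whitespace-only input should return an embedding rather than an error.
curl -s http://localhost:11434/api/embed \
  -d '{"model": "all-minilm", "input": " "}'
```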
https://api.github.com/repos/ollama/ollama/issues/6698 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6698/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6698/comments | https://api.github.com/repos/ollama/ollama/issues/6698/events | https://github.com/ollama/ollama/issues/6698 | 2,512,230,854 | I_kwDOJ0Z1Ps6VvZnG | 6,698 | Custom OLLAMA_MODELS Environment Variable Not Respected | {
"login": "sascharo",
"id": 88222654,
"node_id": "MDQ6VXNlcjg4MjIyNjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/88222654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sascharo",
"html_url": "https://github.com/sascharo",
"followers_url": "https://api.github.com/users/sas... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-08T06:52:58 | 2024-09-08T13:27:15 | 2024-09-08T13:27:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Despite setting the environment variable `OLLAMA_MODELS` to a custom path, Ollama continues to download models to the default location (`C:\Users\%username%\.ollama\models`). The environment variable is set correctly and confirmed via `echo $env:OLLAMA_MODELS`, but the expected behavior of downlo... | {
"login": "sascharo",
"id": 88222654,
"node_id": "MDQ6VXNlcjg4MjIyNjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/88222654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sascharo",
"html_url": "https://github.com/sascharo",
"followers_url": "https://api.github.com/users/sas... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6698/timeline | null | completed | false |
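For issue 6698 above, the usual pitfall is that the variable is visible in the interactive shell but not to the background server process. A Linux-flavored sketch of verifying the variable actually takes effect; the path is an assumption, and on Windows the variable must be set at the system level and the Ollama app restarted:

```bash
export OLLAMA_MODELS=/data/ollama-models   # hypothetical custom path
ollama serve &                             # the server must start with the variable set
ollama pull llama3.2
ls /data/ollama-models/blobs               # downloads should now land here
```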
https://api.github.com/repos/ollama/ollama/issues/5195 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5195/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5195/comments | https://api.github.com/repos/ollama/ollama/issues/5195/events | https://github.com/ollama/ollama/issues/5195 | 2,365,028,797 | I_kwDOJ0Z1Ps6M93m9 | 5,195 | How to import a model (.bin) from huggin face? | {
"login": "javierxio",
"id": 63758477,
"node_id": "MDQ6VXNlcjYzNzU4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/63758477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javierxio",
"html_url": "https://github.com/javierxio",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 6 | 2024-06-20T18:34:21 | 2024-06-30T06:48:43 | 2024-06-30T06:48:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello. I would like to use a model from Hugging Face. I was able to download a file called `pytorch_model.bin`, which I presume is the LLM. I created a directory and a `Modelfile.txt` file. The contents of `Modelfile.txt` are as follows:
```
FROM C:\ollama_models\florence-2-base\pytorch_model.bin
```
Running ... | {
"login": "javierxio",
"id": 63758477,
"node_id": "MDQ6VXNlcjYzNzU4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/63758477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javierxio",
"html_url": "https://github.com/javierxio",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5195/timeline | null | completed | false |
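On issue 5195 above: `pytorch_model.bin` is raw PyTorch weights, which a `Modelfile` `FROM` line cannot load directly; `FROM` expects a GGUF file (or a supported safetensors layout). A sketch of the mechanics only, assuming a GGUF conversion already exists -- whether this particular architecture is supported is a separate question:

```bash
# Hypothetical GGUF file; the name and path are assumptions.
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF
ollama create my-model -f Modelfile
ollama run my-model
```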
https://api.github.com/repos/ollama/ollama/issues/7128 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7128/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7128/comments | https://api.github.com/repos/ollama/ollama/issues/7128/events | https://github.com/ollama/ollama/issues/7128 | 2,572,198,369 | I_kwDOJ0Z1Ps6ZUKHh | 7,128 | Ollama host still be 127.0.0.1 while I have set OLLAMA_HOST = 0.0.0.0:11434 in the environment | {
"login": "Yangshford",
"id": 71912970,
"node_id": "MDQ6VXNlcjcxOTEyOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/71912970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangshford",
"html_url": "https://github.com/Yangshford",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-10-08T06:18:43 | 2024-11-01T09:27:23 | 2024-10-08T11:34:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Sorry for the bad English :(
I am not a native speaker.
I run Ollama on WSL2.
If I just use the command "ollama serve" to start Ollama, I can't access it when I open the browser in Windows and visit "localhost:11434" (Ollama is accessible from within WSL2).
I have set my environment i... | {
"login": "Yangshford",
"id": 71912970,
"node_id": "MDQ6VXNlcjcxOTEyOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/71912970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yangshford",
"html_url": "https://github.com/Yangshford",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7128/timeline | null | completed | false |
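For issue 7128 above, the bind address and the Windows-to-WSL2 hop are separate problems; a sketch of checking each in turn (the WSL address is discovered at run time):

```bash
# Inside WSL2: bind this serve process to all interfaces.
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Inside WSL2: find the address Windows can reach.
hostname -I

# From Windows (PowerShell or cmd), target that address directly:
#   curl http://<wsl-ip>:11434
# If this works while localhost:11434 does not, the gap is WSL port
# forwarding rather than the OLLAMA_HOST setting.
```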
https://api.github.com/repos/ollama/ollama/issues/8036 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8036/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8036/comments | https://api.github.com/repos/ollama/ollama/issues/8036/events | https://github.com/ollama/ollama/pull/8036 | 2,731,613,917 | PR_kwDOJ0Z1Ps6EyU9_ | 8,036 | go.mod: go 1.22.8 -> 1.23.4 | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-12-11T01:47:18 | 2024-12-11T02:16:18 | 2024-12-11T02:16:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8036",
"html_url": "https://github.com/ollama/ollama/pull/8036",
"diff_url": "https://github.com/ollama/ollama/pull/8036.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8036.patch",
"merged_at": "2024-12-11T02:16:17"
} | null | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8036/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4079 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4079/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4079/comments | https://api.github.com/repos/ollama/ollama/issues/4079/events | https://github.com/ollama/ollama/issues/4079 | 2,273,418,092 | I_kwDOJ0Z1Ps6HgZts | 4,079 | About OLLAMA_PARALLEL split the max context length | {
"login": "DirtyKnightForVi",
"id": 116725810,
"node_id": "U_kgDOBvUYMg",
"avatar_url": "https://avatars.githubusercontent.com/u/116725810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DirtyKnightForVi",
"html_url": "https://github.com/DirtyKnightForVi",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-05-01T12:19:18 | 2024-05-01T12:19:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I encountered this while testing SQL QA with an extremely large table, putting all of the DDL into `system`.
When `OLLAMA_PARALLEL = 4`, I observed that the model appears to understand only the last 4000 tokens of the DDL. This is quite different from my previous experience. My web UI is Open WebUI, ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4079/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4079/timeline | null | null | false |
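The report in issue 4079 above is consistent with the configured context window being shared across parallel slots in that version, so with 4 slots each request sees roughly a quarter of `num_ctx`. A hedged workaround sketch; current releases spell the variable `OLLAMA_NUM_PARALLEL`, and the model name and sizes here are assumptions:

```bash
OLLAMA_NUM_PARALLEL=4 ollama serve &

# Request a larger window so each of the 4 slots still covers the full DDL.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "options": {"num_ctx": 16384}
}'
```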
https://api.github.com/repos/ollama/ollama/issues/5700 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5700/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5700/comments | https://api.github.com/repos/ollama/ollama/issues/5700/events | https://github.com/ollama/ollama/issues/5700 | 2,408,492,530 | I_kwDOJ0Z1Ps6Pjq3y | 5,700 | zfs ARC leads to incorrect system memory prediction and refusal to load models that could work | {
"login": "arthurmelton",
"id": 29708070,
"node_id": "MDQ6VXNlcjI5NzA4MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29708070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthurmelton",
"html_url": "https://github.com/arthurmelton",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6849881759,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-07-15T11:23:55 | 2024-12-24T22:40:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would like a flag to ignore this condition: https://github.com/ollama/ollama/blob/e9f7f3602961d2b0beaff27144ec89301c2173ca/llm/server.go#L128-L135
I use TrueNAS SCALE to store and run the models. It uses ZFS as the filesystem, which means the ARC uses a lot of the memory. I ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5700/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5700/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8553 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8553/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8553/comments | https://api.github.com/repos/ollama/ollama/issues/8553/events | https://github.com/ollama/ollama/issues/8553 | 2,807,726,620 | I_kwDOJ0Z1Ps6nWoIc | 8,553 | When LLM generates empty string response, `eval_duration` is missing. | {
"login": "wch",
"id": 86978,
"node_id": "MDQ6VXNlcjg2OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/86978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wch",
"html_url": "https://github.com/wch",
"followers_url": "https://api.github.com/users/wch/followers",
"following... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-23T19:21:04 | 2025-01-28T09:16:25 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I noticed that Ollama sometimes produces responses where `eval_duration` is missing. I've seen it happen when the response is simply an empty string -- just the stop message for a streaming response.
To reproduce:
```bash
curl http://localhost:11434/api/chat -d \
'{"model":"llama3.2:3b","opt... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8553/timeline | null | null | false |
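Given issue 8553 above, clients should treat the timing fields as optional. A sketch of a guarded read, using a non-streaming request so the final object arrives whole (model name as in the report):

```bash
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "hi", "stream": false}' \
  | jq '{eval_count: (.eval_count // 0), eval_duration: (.eval_duration // 0)}'
# jq's "//" alternative operator substitutes 0 when a field is absent, so
# downstream code sees an explicit 0 instead of null.
```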
https://api.github.com/repos/ollama/ollama/issues/5464 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5464/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5464/comments | https://api.github.com/repos/ollama/ollama/issues/5464/events | https://github.com/ollama/ollama/issues/5464 | 2,389,104,798 | I_kwDOJ0Z1Ps6OZtie | 5,464 | `Ollama` fails to work with `CUDA` after `Linux` suspend/resume, unlike other `CUDA` services | {
"login": "bwnjnOEI",
"id": 16009223,
"node_id": "MDQ6VXNlcjE2MDA5MjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16009223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwnjnOEI",
"html_url": "https://github.com/bwnjnOEI",
"followers_url": "https://api.github.com/users/bwn... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 12 | 2024-07-03T17:16:36 | 2025-01-23T04:33:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Every time Linux resumes from suspend, it fails to correctly reload `CUDA`. However, this issue is easily worked around using commands like `sudo rmmod nvidia_uvm` and `sudo modprobe nvidia_uvm`. After this, all CUDA-dependent services except `Ollama` can utilize `CUDA` and work normally aga... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5464/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5464/timeline | null | null | false |
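The module-reload workaround quoted in issue 5464 above can be automated with a systemd sleep hook. This sketch simply wraps the reporter's own commands; the assumption is that Ollama runs as the systemd service `ollama`:

```bash
sudo tee /usr/lib/systemd/system-sleep/nvidia-uvm-reload <<'EOF' >/dev/null
#!/bin/sh
# systemd calls sleep hooks with "pre" before suspend and "post" on resume.
if [ "$1" = "post" ]; then
    systemctl stop ollama      # free the module before removing it
    rmmod nvidia_uvm
    modprobe nvidia_uvm
    systemctl start ollama
fi
EOF
sudo chmod +x /usr/lib/systemd/system-sleep/nvidia-uvm-reload
```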
https://api.github.com/repos/ollama/ollama/issues/7252 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7252/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7252/comments | https://api.github.com/repos/ollama/ollama/issues/7252/events | https://github.com/ollama/ollama/issues/7252 | 2,597,064,614 | I_kwDOJ0Z1Ps6azA-m | 7,252 | add h2ovl-mississippi-800m and h2ovl-mississippi-2b | {
"login": "a-ghorbani",
"id": 11278140,
"node_id": "MDQ6VXNlcjExMjc4MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/11278140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-ghorbani",
"html_url": "https://github.com/a-ghorbani",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-10-18T09:59:30 | 2024-10-22T01:32:51 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/h2oai/h2ovl-mississippi-2b
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7252/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7252/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6338 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6338/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6338/comments | https://api.github.com/repos/ollama/ollama/issues/6338/events | https://github.com/ollama/ollama/issues/6338 | 2,463,389,253 | I_kwDOJ0Z1Ps6S1FZF | 6,338 | ollama slower than llama.cpp | {
"login": "phly95",
"id": 3526540,
"node_id": "MDQ6VXNlcjM1MjY1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3526540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phly95",
"html_url": "https://github.com/phly95",
"followers_url": "https://api.github.com/users/phly95/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-08-13T13:41:53 | 2025-01-15T21:26:29 | 2025-01-15T21:26:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the llm-benchmark tool (https://github.com/MinhNgyuen/llm-benchmark) with Ollama, I get around 80 t/s with Gemma 2 2B. When asking the same questions of llama.cpp in conversation mode, I get 130 t/s. The llama.cpp command I'm running is ".\llama-cli -m gemma-2-2b-it-Q4_K_M.gguf --threads 16... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6338/timeline | null | completed | false |
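When comparing throughput numbers like those in issue 6338 above, it helps to compute tokens/s from the API's own metrics rather than a wrapper's output; `eval_count` and `eval_duration` (in nanoseconds) are documented response fields:

```bash
# tokens per second = eval_count / eval_duration * 1e9
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma2:2b", "prompt": "Why is the sky blue?", "stream": false}' \
  | jq '.eval_count / .eval_duration * 1e9'
```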
https://api.github.com/repos/ollama/ollama/issues/3618 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3618/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3618/comments | https://api.github.com/repos/ollama/ollama/issues/3618/events | https://github.com/ollama/ollama/pull/3618 | 2,240,864,654 | PR_kwDOJ0Z1Ps5sh8xe | 3,618 | Added grammar (and json schemas and CPU-only Dockerfile) support (from ollama/ollama PR #1606) | {
"login": "markcda",
"id": 35887062,
"node_id": "MDQ6VXNlcjM1ODg3MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35887062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markcda",
"html_url": "https://github.com/markcda",
"followers_url": "https://api.github.com/users/markcd... | [] | closed | false | null | [] | null | 12 | 2024-04-12T20:32:00 | 2024-08-07T20:42:27 | 2024-06-01T22:06:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3618",
"html_url": "https://github.com/ollama/ollama/pull/3618",
"diff_url": "https://github.com/ollama/ollama/pull/3618.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3618.patch",
"merged_at": null
} | Updated version of #1606. | {
"login": "markcda",
"id": 35887062,
"node_id": "MDQ6VXNlcjM1ODg3MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35887062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markcda",
"html_url": "https://github.com/markcda",
"followers_url": "https://api.github.com/users/markcd... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3618/reactions",
"total_count": 24,
"+1": 24,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3618/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2148 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2148/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2148/comments | https://api.github.com/repos/ollama/ollama/issues/2148/events | https://github.com/ollama/ollama/pull/2148 | 2,095,016,956 | PR_kwDOJ0Z1Ps5kx9Ob | 2,148 | Refine Accelerate usage on mac | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-23T00:26:46 | 2024-01-23T00:57:02 | 2024-01-23T00:56:58 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2148",
"html_url": "https://github.com/ollama/ollama/pull/2148",
"diff_url": "https://github.com/ollama/ollama/pull/2148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2148.patch",
"merged_at": "2024-01-23T00:56:58"
} | For old Macs, Accelerate seems to cause crashes, but for AVX2-capable Macs, it does not. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2148/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8211/comments | https://api.github.com/repos/ollama/ollama/issues/8211/events | https://github.com/ollama/ollama/pull/8211 | 2,754,599,721 | PR_kwDOJ0Z1Ps6GAuJC | 8,211 | docker: upgrade rocm to 6.2.4 | {
"login": "Pekkari",
"id": 13776314,
"node_id": "MDQ6VXNlcjEzNzc2MzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/13776314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pekkari",
"html_url": "https://github.com/Pekkari",
"followers_url": "https://api.github.com/users/Pekkar... | [] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-12-22T10:55:40 | 2024-12-23T15:07:22 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8211",
"html_url": "https://github.com/ollama/ollama/pull/8211",
"diff_url": "https://github.com/ollama/ollama/pull/8211.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8211.patch",
"merged_at": null
} | This patch upgrades the ROCm version to the latest available for consumption.
Fixes: #7941 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8211/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6403 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6403/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6403/comments | https://api.github.com/repos/ollama/ollama/issues/6403/events | https://github.com/ollama/ollama/pull/6403 | 2,471,870,634 | PR_kwDOJ0Z1Ps54qGVP | 6,403 | feature: simple webclient | {
"login": "TecDroiD",
"id": 122358,
"node_id": "MDQ6VXNlcjEyMjM1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/122358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TecDroiD",
"html_url": "https://github.com/TecDroiD",
"followers_url": "https://api.github.com/users/TecDroi... | [] | closed | false | null | [] | null | 2 | 2024-08-18T09:08:55 | 2024-11-21T10:17:57 | 2024-11-21T09:50:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6403",
"html_url": "https://github.com/ollama/ollama/pull/6403",
"diff_url": "https://github.com/ollama/ollama/pull/6403.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6403.patch",
"merged_at": null
} | This is a braindead-easy (200 lines) Web client example for ollama.
I wrote it last night because I'm too stupid to use the more complex ones I've found online, and I don't even need those.
Well, there's still a little work to do, but it's running, and maybe some people are interested in it. Haven't found much about... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6403/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8674 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8674/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8674/comments | https://api.github.com/repos/ollama/ollama/issues/8674/events | https://github.com/ollama/ollama/issues/8674 | 2,819,400,345 | I_kwDOJ0Z1Ps6oDKKZ | 8,674 | No compatible GPUs were discovered | {
"login": "mikedolx",
"id": 15738117,
"node_id": "MDQ6VXNlcjE1NzM4MTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15738117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikedolx",
"html_url": "https://github.com/mikedolx",
"followers_url": "https://api.github.com/users/mik... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-29T21:47:22 | 2025-01-29T22:06:33 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I'm currently trying to set up Ollama within Docker. I am using the following `docker-compose.yml`:
```yaml
services:
ollama:
container_name: ollama
restart: unless-stopped
image: ollama/ollama:latest
ports:
- 11434:11434
environment:
- OLLAMA_KEEP_ALIVE=2... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8674/timeline | null | null | false |
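For issue 8674 above: `nvidia-smi` succeeding inside the container means the device is visible, so the next things to check are whether the container was started with GPU access and what the server itself discovered at startup. A sketch, assuming the container is named `ollama`:

```bash
# Recreate with explicit GPU access (compose needs an equivalent
# "deploy.resources.reservations.devices" or "gpus: all" stanza).
docker run -d --gpus=all -p 11434:11434 --name ollama ollama/ollama

# The startup log lists what the server discovered.
docker logs ollama 2>&1 | grep -i "inference compute"
```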
https://api.github.com/repos/ollama/ollama/issues/3940 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3940/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3940/comments | https://api.github.com/repos/ollama/ollama/issues/3940/events | https://github.com/ollama/ollama/issues/3940 | 2,265,561,113 | I_kwDOJ0Z1Ps6HCbgZ | 3,940 | GPU offloading with little CPU RAM | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 17 | 2024-04-26T11:13:04 | 2025-01-10T08:55:00 | 2024-07-03T23:58:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Thanks for this amazing project, I really enjoy the simple, concise and easy-to-start interface! Keep up the fantastic work!
I have the following issue: I have a compute instance in the cloud with one NVIDIA A100 80GB and 16GB of CPU memory running Ubuntu.
When I try to run the llama3:70b... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3940/timeline | null | completed | false |
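Related to issue 3940 above, one user-level experiment is to pin the offload layer count yourself instead of letting the memory prediction decide. `num_gpu` is a real request option, but the layer count here is purely an assumption for a 70B model:

```bash
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3:70b",
  "prompt": "hello",
  "options": {"num_gpu": 81}
}'
```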
https://api.github.com/repos/ollama/ollama/issues/4917 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4917/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4917/comments | https://api.github.com/repos/ollama/ollama/issues/4917/events | https://github.com/ollama/ollama/pull/4917 | 2,341,218,867 | PR_kwDOJ0Z1Ps5x1uxG | 4,917 | convert bert model from safetensors | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2024-06-07T21:56:33 | 2024-08-21T18:48:31 | 2024-08-21T18:48:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4917",
"html_url": "https://github.com/ollama/ollama/pull/4917",
"diff_url": "https://github.com/ollama/ollama/pull/4917.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4917.patch",
"merged_at": "2024-08-21T18:48:29"
} | - add a `moreParser` interface which converters can implement to signal a need for more configuration parsing
- fix a bug in the tokenizer.json parsing where the vocab size might exceed the intended count if added_token.json contains tokens already defined
- fix a bug in cmd where create will flatten the directory structure po... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4917/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4917/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1921 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1921/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1921/comments | https://api.github.com/repos/ollama/ollama/issues/1921/events | https://github.com/ollama/ollama/pull/1921 | 2,076,058,971 | PR_kwDOJ0Z1Ps5jxm2m | 1,921 | fix gpu_test.go Error (same type) uint64->uint32 | {
"login": "fpreiss",
"id": 17441607,
"node_id": "MDQ6VXNlcjE3NDQxNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/17441607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fpreiss",
"html_url": "https://github.com/fpreiss",
"followers_url": "https://api.github.com/users/fpreis... | [] | closed | false | null | [] | null | 0 | 2024-01-11T08:41:45 | 2024-01-11T13:22:23 | 2024-01-11T13:22:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1921",
"html_url": "https://github.com/ollama/ollama/pull/1921",
"diff_url": "https://github.com/ollama/ollama/pull/1921.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1921.patch",
"merged_at": "2024-01-11T13:22:23"
} | When running the test suite on Linux with a CUDA build, I get the following error without this commit:
```log
--- FAIL: TestBasicGetGPUInfo (0.06s)
gpu_test.go:21:
Error Trace: /build/ollama-cuda/src/ollama/gpu/gpu_test.go:21
Error: Elements should be the same type... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1921/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1363 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1363/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1363/comments | https://api.github.com/repos/ollama/ollama/issues/1363/events | https://github.com/ollama/ollama/issues/1363 | 2,022,629,683 | I_kwDOJ0Z1Ps54juEz | 1,363 | Meditron stops after the first line of answer | {
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | 2 | 2023-12-03T17:14:55 | 2023-12-06T16:08:23 | 2023-12-06T16:08:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello all,
I tried Meditron with `ollama run meditron`, and after that I asked "what are the symptoms of Kawasaki disease?".
The answer started with a one-line definition of Kawasaki disease and stopped after that.
I've tried different questions, but the results were only one-liners.
What could be th... | {
"login": "orkutmuratyilmaz",
"id": 7395916,
"node_id": "MDQ6VXNlcjczOTU5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkutmuratyilmaz",
"html_url": "https://github.com/orkutmuratyilmaz",
"followers_url": "https://ap... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1363/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/834 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/834/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/834/comments | https://api.github.com/repos/ollama/ollama/issues/834/events | https://github.com/ollama/ollama/issues/834 | 1,949,136,320 | I_kwDOJ0Z1Ps50LXXA | 834 | Bring back the EMBED feature in the Modelfile | {
"login": "vividfog",
"id": 75913791,
"node_id": "MDQ6VXNlcjc1OTEzNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/75913791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vividfog",
"html_url": "https://github.com/vividfog",
"followers_url": "https://api.github.com/users/viv... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": ... | open | false | null | [] | null | 18 | 2023-10-18T08:10:36 | 2024-06-28T20:44:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I appreciate the effort to keep the codebase simple; Ollama is second to none in its elegance. But removing the feature within a week, without much debate about whether and how people use it, was quick work -- is it really not valuable, or is it a fantastic feature on second thought? I am going to miss this feature a lo... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/834/reactions",
"total_count": 29,
"+1": 19,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/834/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1349 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1349/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1349/comments | https://api.github.com/repos/ollama/ollama/issues/1349/events | https://github.com/ollama/ollama/pull/1349 | 2,021,752,091 | PR_kwDOJ0Z1Ps5g8gB8 | 1,349 | handle ctrl+z | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-02T00:04:30 | 2023-12-02T00:21:50 | 2023-12-02T00:21:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1349",
"html_url": "https://github.com/ollama/ollama/pull/1349",
"diff_url": "https://github.com/ollama/ollama/pull/1349.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1349.patch",
"merged_at": "2023-12-02T00:21:49"
} | resolves #1332 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1349/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3726 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3726/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3726/comments | https://api.github.com/repos/ollama/ollama/issues/3726/events | https://github.com/ollama/ollama/issues/3726 | 2,249,816,826 | I_kwDOJ0Z1Ps6GGXr6 | 3,726 | Error while trying to run/pull models | {
"login": "Radeeshp",
"id": 82216452,
"node_id": "MDQ6VXNlcjgyMjE2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/82216452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Radeeshp",
"html_url": "https://github.com/Radeeshp",
"followers_url": "https://api.github.com/users/Rad... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 0 | 2024-04-18T05:57:49 | 2024-07-11T03:47:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a good internet connection, but I am still unable to run or pull models in Ollama.

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3726/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8436 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8436/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8436/comments | https://api.github.com/repos/ollama/ollama/issues/8436/events | https://github.com/ollama/ollama/issues/8436 | 2,789,070,510 | I_kwDOJ0Z1Ps6mPdau | 8,436 | kindly make f32 tensor type available in ollama | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2025-01-15T07:49:35 | 2025-01-24T09:33:36 | 2025-01-24T09:33:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
There are many models available on Hugging Face with f32/b32 tensors, but in Ollama the highest available is f16. If f32 were supported, the higher-precision tensors could give better performance and results, since newer hardware can actually handle them well.
### OS
Linux
### GPU
AMD, Nvidia, Intel
### CPU
Intel, AM... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8436/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4262 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4262/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4262/comments | https://api.github.com/repos/ollama/ollama/issues/4262/events | https://github.com/ollama/ollama/issues/4262 | 2,286,051,799 | I_kwDOJ0Z1Ps6IQmHX | 4,262 | 403 using zrok | {
"login": "quantumalchemy",
"id": 22033041,
"node_id": "MDQ6VXNlcjIyMDMzMDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/22033041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quantumalchemy",
"html_url": "https://github.com/quantumalchemy",
"followers_url": "https://api.gi... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 2 | 2024-05-08T17:01:53 | 2024-06-30T21:36:32 | 2024-06-30T21:36:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Re: https://github.com/ollama/ollama/issues/3269
This was fixed for ngrok, but
ngrok is paid and has limits.
Is there any way to get it to work with zrok?
`--host-header` doesn't work with zrok.
zrok is open source.
_Originally posted by @quantumalchemy in https://github.com/ollama/ollama/issues/3269#issuecomment-2101017786_... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4262/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4826 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4826/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4826/comments | https://api.github.com/repos/ollama/ollama/issues/4826/events | https://github.com/ollama/ollama/issues/4826 | 2,334,985,468 | I_kwDOJ0Z1Ps6LLQz8 | 4,826 | Model request: GLM-4 9B | {
"login": "mywwq",
"id": 133221105,
"node_id": "U_kgDOB_DK8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133221105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mywwq",
"html_url": "https://github.com/mywwq",
"followers_url": "https://api.github.com/users/mywwq/followers",
... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 22 | 2024-06-05T06:08:18 | 2024-07-11T19:26:26 | 2024-07-09T16:34:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Add GLM-4 9B model
Model | Type | Seq Length | Download
-- | -- | -- | --
GLM-4-9B | Base | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b)
GLM-4-9B-Chat | Chat | 128K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat)
GLM-4-9B-Chat-1M | Chat | 1M | [🤗 Huggingface](https://huggin... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4826/reactions",
"total_count": 41,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 14
} | https://api.github.com/repos/ollama/ollama/issues/4826/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1747 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1747/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1747/comments | https://api.github.com/repos/ollama/ollama/issues/1747/events | https://github.com/ollama/ollama/pull/1747 | 2,060,824,803 | PR_kwDOJ0Z1Ps5i-O3L | 1,747 | Added Ollama-SwiftUI to integrations | {
"login": "kghandour",
"id": 6333447,
"node_id": "MDQ6VXNlcjYzMzM0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6333447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kghandour",
"html_url": "https://github.com/kghandour",
"followers_url": "https://api.github.com/users/kg... | [] | closed | false | null | [] | null | 1 | 2023-12-30T18:42:25 | 2024-01-02T14:47:50 | 2024-01-02T14:47:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1747",
"html_url": "https://github.com/ollama/ollama/pull/1747",
"diff_url": "https://github.com/ollama/ollama/pull/1747.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1747.patch",
"merged_at": "2024-01-02T14:47:50"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1747/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/546 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/546/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/546/comments | https://api.github.com/repos/ollama/ollama/issues/546/events | https://github.com/ollama/ollama/issues/546 | 1,899,617,808 | I_kwDOJ0Z1Ps5xOd4Q | 546 | Request: `docker compose` support for Ollama server | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": ... | closed | false | null | [] | null | 17 | 2023-09-17T01:14:06 | 2024-12-23T00:56:10 | 2024-12-23T00:56:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be really nice if Ollama supported `docker compose` for the Ollama server.
This would enable one to run:
- `docker compose up`: start the Ollama server
- `docker compose down`: stop the Ollama server
`docker compose`, in my opinion, has two benefits:
- A bit easier than having to deal with multiprocessing associa... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/546/timeline | null | completed | false |
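A minimal sketch of the setup requested above, assuming the official `ollama/ollama` Docker image and the default port 11434 (the service and volume names are illustrative, not from the issue):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama   # persist pulled models across restarts
    restart: unless-stopped

volumes:
  ollama:
```

With this saved as `docker-compose.yml`, `docker compose up -d` starts the Ollama server and `docker compose down` stops it, which is exactly the workflow the issue asks for.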
https://api.github.com/repos/ollama/ollama/issues/7484 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7484/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7484/comments | https://api.github.com/repos/ollama/ollama/issues/7484/events | https://github.com/ollama/ollama/issues/7484 | 2,631,546,225 | I_kwDOJ0Z1Ps6c2jVx | 7,484 | Invalid prompt generation when the request message exceeds the context size | {
"login": "b4rtaz",
"id": 12797776,
"node_id": "MDQ6VXNlcjEyNzk3Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12797776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b4rtaz",
"html_url": "https://github.com/b4rtaz",
"followers_url": "https://api.github.com/users/b4rtaz/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-11-03T23:12:36 | 2024-11-05T22:21:09 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello! You're doing a great job! Thank you so much!
I think I found a bug that occurs when the user message exceeds the `num_ctx` value in the API server.
I started the server in debug mode: `OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 ollama serve`
The below JS script works correctly with the `x/llama... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7484/timeline | null | null | false |
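For context on the report above, a minimal sketch of a client that pins `num_ctx` per request through the `/api/chat` endpoint; the model name and context value are placeholders, not taken from the issue:

```python
import json
import urllib.request

# Illustrative request: raise the context window for this call only.
payload = {
    "model": "llama3",  # placeholder model name
    "messages": [{"role": "user", "content": "a very long message..."}],
    "options": {"num_ctx": 8192},
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```

If the message still exceeds `num_ctx`, the server truncates the prompt, which is the behavior the report is about.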
https://api.github.com/repos/ollama/ollama/issues/7064 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7064/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7064/comments | https://api.github.com/repos/ollama/ollama/issues/7064/events | https://github.com/ollama/ollama/pull/7064 | 2,559,479,104 | PR_kwDOJ0Z1Ps59RMvH | 7,064 | Update README.md, Terminal app "bb7" | {
"login": "drunkwcodes",
"id": 36228443,
"node_id": "MDQ6VXNlcjM2MjI4NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/36228443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drunkwcodes",
"html_url": "https://github.com/drunkwcodes",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-10-01T14:51:20 | 2024-11-21T08:03:11 | 2024-11-21T08:03:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7064",
"html_url": "https://github.com/ollama/ollama/pull/7064",
"diff_url": "https://github.com/ollama/ollama/pull/7064.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7064.patch",
"merged_at": "2024-11-21T08:03:11"
} | Introducing "bb7", an advanced chat bot designed for versatile interactions. Equipped with TTS (Text-to-Speech) capabilities, bb7 enables seamless voice conversations with users. It also supports local Retrieval-Augmented Generation (RAG), allowing for efficient document-based queries and responses, even without cloud ... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7064/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7051 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7051/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7051/comments | https://api.github.com/repos/ollama/ollama/issues/7051/events | https://github.com/ollama/ollama/issues/7051 | 2,557,401,866 | I_kwDOJ0Z1Ps6YbtsK | 7,051 | Tool call support in Qwen 2.5 hallucinates with Maybe pattern | {
"login": "ChristianWeyer",
"id": 888718,
"node_id": "MDQ6VXNlcjg4ODcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/888718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristianWeyer",
"html_url": "https://github.com/ChristianWeyer",
"followers_url": "https://api.github... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-09-30T18:42:33 | 2024-10-12T01:18:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This relates to the Maybe pattern described at https://python.useinstructor.com/concepts/maybe/.
There is an issue with tool calling in a case like this:
```json
{
"messages": [
{
"role": "system",
"content": "Today's date is 2024-09-30. Please consider this when processing the availability information.\nIf you cannot extra... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7051/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1166/comments | https://api.github.com/repos/ollama/ollama/issues/1166/events | https://github.com/ollama/ollama/issues/1166 | 1,998,311,717 | I_kwDOJ0Z1Ps53G9El | 1,166 | Since Modelfiles doesn't work How do we set default PARAMETER settings? | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 6 | 2023-11-17T05:34:37 | 2023-12-04T21:33:57 | 2023-12-04T21:33:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How can I set global settings for the current model without making a Modelfile? For example, how do I set parameters such as the number of threads, number of GPUs, etc. for a user-chosen model?
Thanks. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1166/timeline | null | completed | false |
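One way to answer the question above without a Modelfile: pass parameters per request in the `options` field of the API (interactively, `/set parameter <name> <value>` inside `ollama run` does the same for the current session). A sketch with illustrative values:

```python
import json
import urllib.request

# Per-request parameters instead of a Modelfile; values are examples only.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "Hello",
    "options": {"num_thread": 8, "num_gpu": 1},
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```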
https://api.github.com/repos/ollama/ollama/issues/4506 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4506/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4506/comments | https://api.github.com/repos/ollama/ollama/issues/4506/events | https://github.com/ollama/ollama/issues/4506 | 2,303,653,321 | I_kwDOJ0Z1Ps6JTvXJ | 4,506 | Any way to increase performance? And switch to F32? | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 7 | 2024-05-17T22:35:26 | 2024-07-29T20:03:01 | 2024-05-18T22:47:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using a Pascal Tesla P4 8 GB GPU and I am looking for a way to increase performance.
Are there any tweaks or environment variables I can apply, or things I can install, such as a PyTorch version or something else that will boost Ollama performance?
I am getting very mixed results; any model bigger than a few GB has a massiv... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4506/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4510 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4510/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4510/comments | https://api.github.com/repos/ollama/ollama/issues/4510/events | https://github.com/ollama/ollama/issues/4510 | 2,303,836,269 | I_kwDOJ0Z1Ps6JUcBt | 4,510 | Would it be possible for Ollama to support re-rank models? | {
"login": "lyfuci",
"id": 12745441,
"node_id": "MDQ6VXNlcjEyNzQ1NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/12745441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lyfuci",
"html_url": "https://github.com/lyfuci",
"followers_url": "https://api.github.com/users/lyfuci/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 25 | 2024-05-18T04:05:07 | 2025-01-20T13:42:26 | 2024-09-02T20:57:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using Ollama for my projects and it has been great. However, when using an AI app platform like Dify to build a RAG app, reranking is necessary. Would it be possible for Ollama to support rerank models?
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4510/reactions",
"total_count": 34,
"+1": 32,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/4510/timeline | null | completed | false |
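While the request above is for native rerank models, a common stopgap is bi-encoder reranking over Ollama's `/api/embeddings` endpoint: embed the query and each candidate, then sort by cosine similarity. This is weaker than a true cross-encoder reranker; the embedding model name below is illustrative:

```python
import json
import math
import urllib.request

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    # Any embedding model pulled into Ollama works here.
    payload = {"model": model, "prompt": text}
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query_vec = embed("what is a rerank model?")
docs = ["Rerankers score query-document pairs.", "Bananas are yellow."]
ranked = sorted(docs, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
print(ranked)
```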
https://api.github.com/repos/ollama/ollama/issues/1983 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1983/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1983/comments | https://api.github.com/repos/ollama/ollama/issues/1983/events | https://github.com/ollama/ollama/pull/1983 | 2,080,508,666 | PR_kwDOJ0Z1Ps5kAzUB | 1,983 | use model defaults for `num_gqa`, `rope_frequency_base` and `rope_frequency_scale` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-01-13T23:18:45 | 2024-05-09T16:20:43 | 2024-05-09T16:06:14 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1983",
"html_url": "https://github.com/ollama/ollama/pull/1983",
"diff_url": "https://github.com/ollama/ollama/pull/1983.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1983.patch",
"merged_at": "2024-05-09T16:06:14"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1983/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1311/comments | https://api.github.com/repos/ollama/ollama/issues/1311/events | https://github.com/ollama/ollama/issues/1311 | 2,016,446,270 | I_kwDOJ0Z1Ps54MIc- | 1,311 | ollama causes "no space left on device" on common ubuntu installation. | {
"login": "Dougie777",
"id": 77511128,
"node_id": "MDQ6VXNlcjc3NTExMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/77511128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dougie777",
"html_url": "https://github.com/Dougie777",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-11-29T12:08:31 | 2024-01-20T00:04:10 | 2024-01-20T00:04:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Many Ubuntu installations expect data to be added under /home, which I think is very common on many Linux distros. However, Ollama writes the massive model files to /usr/share/ollama. This is fine for the binaries etc., but the model data should not go here.
Is there a way to specify the installation folder or da... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1311/timeline | null | completed | false |
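On the storage-location question above: the `OLLAMA_MODELS` environment variable controls where models are stored, so the large blobs can be kept under `/home`, e.g. `OLLAMA_MODELS=/home/user/ollama-models ollama serve`. For the Linux systemd service, the same variable can be set via an `Environment="OLLAMA_MODELS=..."` line in a service override; the paths here are examples, and the directory must be writable by the user the server runs as.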
https://api.github.com/repos/ollama/ollama/issues/5276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5276/comments | https://api.github.com/repos/ollama/ollama/issues/5276/events | https://github.com/ollama/ollama/issues/5276 | 2,373,019,421 | I_kwDOJ0Z1Ps6NcWcd | 5,276 | Support for Vision Language Models that can process Videos. | {
"login": "manishkumart",
"id": 37763863,
"node_id": "MDQ6VXNlcjM3NzYzODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37763863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manishkumart",
"html_url": "https://github.com/manishkumart",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-06-25T15:42:21 | 2024-07-30T19:50:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is it possible to support loading VLMs like VideoLLama and Chat-UniVi that can process videos? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5276/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/5276/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2024 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2024/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2024/comments | https://api.github.com/repos/ollama/ollama/issues/2024/events | https://github.com/ollama/ollama/issues/2024 | 2,084,955,906 | I_kwDOJ0Z1Ps58RecC | 2,024 | falcon model not working. | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-01-16T21:18:06 | 2024-05-17T21:34:16 | 2024-05-17T21:34:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've been working with https://github.com/jmorganca/ollama/issues/1691 and found that it consistently dies with falcon.
So I tried falcon on its own. It died.
So I tried removing falcon and reinstalling it.
Still died.
I can no longer get falcon to work.
I'm on Ollama version 0.1.20
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2024/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/1678 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1678/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1678/comments | https://api.github.com/repos/ollama/ollama/issues/1678/events | https://github.com/ollama/ollama/issues/1678 | 2,054,434,784 | I_kwDOJ0Z1Ps56dC_g | 1,678 | Error: timed out waiting for llama runner to start | {
"login": "LegendNava",
"id": 74506040,
"node_id": "MDQ6VXNlcjc0NTA2MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/74506040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LegendNava",
"html_url": "https://github.com/LegendNava",
"followers_url": "https://api.github.com/use... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 18 | 2023-12-22T19:29:59 | 2024-03-12T17:58:28 | 2024-03-12T17:58:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Was trying out dolphin-mixtral. Downloaded successfully but:

Does anything seem off? What should i do in this situation
I'm on Ubuntu 20.24, Intel i3 6th Gen. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1678/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2570 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2570/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2570/comments | https://api.github.com/repos/ollama/ollama/issues/2570/events | https://github.com/ollama/ollama/issues/2570 | 2,140,789,360 | I_kwDOJ0Z1Ps5_mdpw | 2,570 | Potential Regression with Model switching | {
"login": "libbaz",
"id": 10919499,
"node_id": "MDQ6VXNlcjEwOTE5NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/10919499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/libbaz",
"html_url": "https://github.com/libbaz",
"followers_url": "https://api.github.com/users/libbaz/fo... | [] | closed | false | null | [] | null | 1 | 2024-02-18T07:27:08 | 2024-02-18T07:28:57 | 2024-02-18T07:28:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Issue:**
I just pulled the latest Ollama Docker image (Ollama v0.1.25) and have noticed that API `/chat` requests no longer switch the model template between models created from the same base model. In the past this wasn't an issue.
**Steps to reproduce:**
create Foo-1 from model "Foo"
create Foo-2 from model "Foo"
cre... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2570/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4699 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4699/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4699/comments | https://api.github.com/repos/ollama/ollama/issues/4699/events | https://github.com/ollama/ollama/issues/4699 | 2,322,884,407 | I_kwDOJ0Z1Ps6KdGc3 | 4,699 | Computing Context Embeddings, Instead of averagning token embeddings | {
"login": "Demirrr",
"id": 13405667,
"node_id": "MDQ6VXNlcjEzNDA1NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13405667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Demirrr",
"html_url": "https://github.com/Demirrr",
"followers_url": "https://api.github.com/users/Demirr... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-05-29T09:51:44 | 2024-05-29T09:51:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was wondering whether we could return the context embedding used before the next-token prediction, instead of averaging the token embeddings as is currently done. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4699/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2402 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2402/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2402/comments | https://api.github.com/repos/ollama/ollama/issues/2402/events | https://github.com/ollama/ollama/issues/2402 | 2,124,162,929 | I_kwDOJ0Z1Ps5-nCdx | 2,402 | Error dial tcp: lookup no such host | {
"login": "casey-martin",
"id": 13857230,
"node_id": "MDQ6VXNlcjEzODU3MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13857230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casey-martin",
"html_url": "https://github.com/casey-martin",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 5 | 2024-02-08T00:49:01 | 2024-05-31T07:16:54 | 2024-02-20T21:40:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am encountering a `dial tcp lookup` error when executing any `ollama pull` or `ollama run` commands through docker on Ubuntu 22.04. I searched through the issues and found some similar errors, however they were related to the users' proxies which I am not using. I am also not running any firewalls. The commands I exe... | {
"login": "casey-martin",
"id": 13857230,
"node_id": "MDQ6VXNlcjEzODU3MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13857230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casey-martin",
"html_url": "https://github.com/casey-martin",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2402/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1427 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1427/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1427/comments | https://api.github.com/repos/ollama/ollama/issues/1427/events | https://github.com/ollama/ollama/pull/1427 | 2,031,749,716 | PR_kwDOJ0Z1Ps5hec27 | 1,427 | post-response templating | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-12-08T00:56:14 | 2023-12-22T22:07:06 | 2023-12-22T22:07:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1427",
"html_url": "https://github.com/ollama/ollama/pull/1427",
"diff_url": "https://github.com/ollama/ollama/pull/1427.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1427.patch",
"merged_at": "2023-12-22T22:07:05"
} | - add post-response templating to /generate
- add post-response templating to /chat
- add templating tests
A common format for LLM templating may include post-response templating. Our current template format kind of supported this by checking `{{ if not .First }}`, but it is confusing to read. This change allows po... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1427/timeline | null | null | true |
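To make the PR description above concrete: a sketch of a Modelfile `TEMPLATE` with a post-response portion, assuming a `.Response`-style placeholder marks where the model's reply is inserted (the base model name and the trailing end-of-turn token are illustrative, not taken from the PR):

```
FROM llama2
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT: {{ .Response }}</s>
"""
```

Everything after the response placeholder is the post-response templating, which reads more directly than the `{{ if not .First }}` workaround the PR mentions.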
https://api.github.com/repos/ollama/ollama/issues/7697 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7697/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7697/comments | https://api.github.com/repos/ollama/ollama/issues/7697/events | https://github.com/ollama/ollama/issues/7697 | 2,663,637,106 | I_kwDOJ0Z1Ps6ew-By | 7,697 | ollama is not working , Error: could not connect to ollama app, is it running? | {
"login": "gokulcoder7",
"id": 167660982,
"node_id": "U_kgDOCf5Ntg",
"avatar_url": "https://avatars.githubusercontent.com/u/167660982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gokulcoder7",
"html_url": "https://github.com/gokulcoder7",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 26 | 2024-11-16T02:22:22 | 2025-01-12T11:50:14 | 2024-12-02T15:29:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
C:\Windows\System32>ollama list
Error: could not connect to ollama app, is it running?
C:\Windows\System32>
C:\Windows\System32>ollama --version
Warning: could not connect to a running Ollama instance
Warning: client version is 0.4.2
### OS
Windows
### GPU
Nvidia
### CPU
Intel
##... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7697/timeline | null | completed | false |
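A quick way to narrow down the symptom above: a running server answers a plain `GET /` with the text "Ollama is running". A minimal probe, assuming the default host and port:

```python
import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434/", timeout=3) as resp:
        print(resp.read().decode())  # expected: "Ollama is running"
except urllib.error.URLError as exc:
    # Mirrors the CLI's "could not connect to ollama app" situation.
    print(f"server not reachable: {exc}")
```

If the probe fails, the background app or service is not running, and starting it (or running `ollama serve` manually) is the usual fix.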
https://api.github.com/repos/ollama/ollama/issues/2390 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2390/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2390/comments | https://api.github.com/repos/ollama/ollama/issues/2390/events | https://github.com/ollama/ollama/issues/2390 | 2,123,355,704 | I_kwDOJ0Z1Ps5-j9Y4 | 2,390 | List of domains ollama needs access to | {
"login": "arno4000",
"id": 50365065,
"node_id": "MDQ6VXNlcjUwMzY1MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50365065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arno4000",
"html_url": "https://github.com/arno4000",
"followers_url": "https://api.github.com/users/arn... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 2 | 2024-02-07T15:58:37 | 2025-01-23T12:29:03 | 2024-03-11T19:31:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there a list of domains, which need to be allowed in the forward proxy for ollama to function properly? ollama.ai is allowed, and I see in the logs of the proxy that ollama tries to connect to `https://registry.ollama.ai`. But then ollama tries to access `https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2390/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2390/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7761 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7761/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7761/comments | https://api.github.com/repos/ollama/ollama/issues/7761/events | https://github.com/ollama/ollama/issues/7761 | 2,676,131,157 | I_kwDOJ0Z1Ps6fgoVV | 7,761 | High Inference Time and Limited GPU Utilization with Ollama Docker | {
"login": "nicho2",
"id": 11471811,
"node_id": "MDQ6VXNlcjExNDcxODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11471811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicho2",
"html_url": "https://github.com/nicho2",
"followers_url": "https://api.github.com/users/nicho2/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-11-20T14:46:32 | 2024-11-21T07:01:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
## Description:
I am using Ollama in a Docker setup with GPU support, configured to use all available GPUs on my system. However, when using the NemoTron model with a simple prompt and utilizing the function calling feature, the inference time is around 50 seconds to get a response, which i... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7761/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6296 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6296/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6296/comments | https://api.github.com/repos/ollama/ollama/issues/6296/events | https://github.com/ollama/ollama/issues/6296 | 2,458,866,096 | I_kwDOJ0Z1Ps6Sj1Gw | 6,296 | Better to add athene70b f16 and q8 | {
"login": "Llamadouble999q",
"id": 176237961,
"node_id": "U_kgDOCoEtiQ",
"avatar_url": "https://avatars.githubusercontent.com/u/176237961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Llamadouble999q",
"html_url": "https://github.com/Llamadouble999q",
"followers_url": "https://api.githu... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-08-10T02:57:46 | 2024-09-02T23:10:45 | 2024-09-02T23:10:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Why did Ollama stop uploading Athene? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6296/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4372 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4372/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4372/comments | https://api.github.com/repos/ollama/ollama/issues/4372/events | https://github.com/ollama/ollama/issues/4372 | 2,291,259,798 | I_kwDOJ0Z1Ps6IkdmW | 4,372 | When can I make the api support functions parameters like openai, using langchain implementation will make the request slow, which is not what I want | {
"login": "zhangweiwei0326",
"id": 5975616,
"node_id": "MDQ6VXNlcjU5NzU2MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5975616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangweiwei0326",
"html_url": "https://github.com/zhangweiwei0326",
"followers_url": "https://api.g... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-12T09:54:05 | 2024-05-13T06:09:24 | 2024-05-13T06:09:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
from langchain_experimental.llms.ollama_functions import OllamaFunctions
model=OllamaFunctions(base_url="http://192.168.1.117:11434", model="qwen:4b", temperature=0.0, format="json")
# Bind this function
model_with_tools = model.bind_tools(
tools = [
{
"name": "getCurrentWe... | {
"login": "zhangweiwei0326",
"id": 5975616,
"node_id": "MDQ6VXNlcjU5NzU2MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5975616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangweiwei0326",
"html_url": "https://github.com/zhangweiwei0326",
"followers_url": "https://api.g... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4372/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4372/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/995 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/995/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/995/comments | https://api.github.com/repos/ollama/ollama/issues/995/events | https://github.com/ollama/ollama/pull/995 | 1,977,240,956 | PR_kwDOJ0Z1Ps5el2kT | 995 | Added ollama-rs to community integrations | {
"login": "pepperoni21",
"id": 29759371,
"node_id": "MDQ6VXNlcjI5NzU5Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29759371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepperoni21",
"html_url": "https://github.com/pepperoni21",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2023-11-04T08:51:06 | 2023-11-04T21:51:29 | 2023-11-04T21:51:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/995",
"html_url": "https://github.com/ollama/ollama/pull/995",
"diff_url": "https://github.com/ollama/ollama/pull/995.diff",
"patch_url": "https://github.com/ollama/ollama/pull/995.patch",
"merged_at": "2023-11-04T21:51:29"
} | Hey, I made Rust bindings for Ollama https://github.com/pepperoni21/ollama-rs | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/995/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3364 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3364/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3364/comments | https://api.github.com/repos/ollama/ollama/issues/3364/events | https://github.com/ollama/ollama/issues/3364 | 2,209,552,040 | I_kwDOJ0Z1Ps6Dsxao | 3,364 | add starling-lm beta | {
"login": "Lev1ty",
"id": 15148828,
"node_id": "MDQ6VXNlcjE1MTQ4ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15148828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lev1ty",
"html_url": "https://github.com/Lev1ty",
"followers_url": "https://api.github.com/users/Lev1ty/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 5 | 2024-03-26T23:58:29 | 2024-04-10T19:49:24 | 2024-04-10T19:49:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
https://huggingface.co/Nexusflow/Starling-LM-7B-beta
Strong performance on the LMSYS leaderboard.
 | {
"login": "Lev1ty",
"id": 15148828,
"node_id": "MDQ6VXNlcjE1MTQ4ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15148828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lev1ty",
"html_url": "https://github.com/Lev1ty",
"followers_url": "https://api.github.com/users/Lev1ty/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3364/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3364/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6854 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6854/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6854/comments | https://api.github.com/repos/ollama/ollama/issues/6854/events | https://github.com/ollama/ollama/pull/6854 | 2,533,079,340 | PR_kwDOJ0Z1Ps573ay7 | 6,854 | server: Add OLLAMA_NO_MMAP to disable mmap globally | {
"login": "yubingjiaocn",
"id": 9165347,
"node_id": "MDQ6VXNlcjkxNjUzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9165347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yubingjiaocn",
"html_url": "https://github.com/yubingjiaocn",
"followers_url": "https://api.github.com... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-09-18T08:33:28 | 2025-01-03T06:15:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6854",
"html_url": "https://github.com/ollama/ollama/pull/6854",
"diff_url": "https://github.com/ollama/ollama/pull/6854.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6854.patch",
"merged_at": null
} | Close #4895
This PR adds an environment variable `OLLAMA_NO_MMAP` to `ollama serve`. When this environment variable is set to `1`, the `--no-mmap` param is always passed to the llama runner.
This PR does not introduce any breaking change. If this environment variable is not set, mmap stays enabled except for pre-defined co... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6854/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6854/timeline | null | null | true |
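Usage of the proposed variable is a one-liner: `OLLAMA_NO_MMAP=1 ollama serve` would disable memory-mapping for every loaded model, while leaving it unset keeps the current behavior, per the PR description. Note this comes from an unmerged PR, not a released Ollama option.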
https://api.github.com/repos/ollama/ollama/issues/8688 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8688/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8688/comments | https://api.github.com/repos/ollama/ollama/issues/8688/events | https://github.com/ollama/ollama/pull/8688 | 2,820,160,395 | PR_kwDOJ0Z1Ps6JduvO | 8,688 | Add library in Zig. | {
"login": "dravenk",
"id": 14295318,
"node_id": "MDQ6VXNlcjE0Mjk1MzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/14295318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dravenk",
"html_url": "https://github.com/dravenk",
"followers_url": "https://api.github.com/users/draven... | [] | open | false | null | [] | null | 0 | 2025-01-30T08:05:43 | 2025-01-30T08:05:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8688",
"html_url": "https://github.com/ollama/ollama/pull/8688",
"diff_url": "https://github.com/ollama/ollama/pull/8688.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8688.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8688/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2066/comments | https://api.github.com/repos/ollama/ollama/issues/2066/events | https://github.com/ollama/ollama/issues/2066 | 2,089,566,574 | I_kwDOJ0Z1Ps58jEFu | 2,066 | Switching from CUDA to CPU runner causes segmentation fault | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-01-19T04:34:56 | 2024-01-19T20:22:06 | 2024-01-19T20:22:05 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is currently only an issue on `main`
```
2024/01/19 04:46:40 routes.go:76: INFO changing loaded model
2024/01/19 04:46:40 gpu.go:136: INFO CUDA Compute Capability detected: 8.9
2024/01/19 04:46:40 gpu.go:136: INFO CUDA Compute Capability detected: 8.9
2024/01/19 04:46:40 cpu_common.go:11: INFO CPU has AVX2
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2066/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6883 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6883/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6883/comments | https://api.github.com/repos/ollama/ollama/issues/6883/events | https://github.com/ollama/ollama/issues/6883 | 2,537,368,900 | I_kwDOJ0Z1Ps6XPS1E | 6,883 | Problem Executing 'ollama create' Multiple Times with Different GGUF Files | {
"login": "michaelc2005",
"id": 50670873,
"node_id": "MDQ6VXNlcjUwNjcwODcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50670873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelc2005",
"html_url": "https://github.com/michaelc2005",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-19T21:06:04 | 2024-12-02T23:00:54 | 2024-12-02T23:00:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
(I have done some searching and have not yet found any mention of this issue, but I may have missed it.)
When creating models from GGUF files downloaded from Hugging Face, I observed that two different models, when tested with an identical prompt (copied and pasted), produced nearly similar ... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6883/timeline | null | completed | false |
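To reproduce the `ollama create` flow in the report above (file and model names are placeholders): write a Modelfile containing just `FROM ./model-a.gguf`, run `ollama create model-a -f Modelfile`, then repeat with a second Modelfile pointing at `./model-b.gguf`. If both created models answer an identical prompt almost identically, comparing the layer digests printed during `ollama create` can help confirm whether the two GGUF files actually differ.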
https://api.github.com/repos/ollama/ollama/issues/2751 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2751/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2751/comments | https://api.github.com/repos/ollama/ollama/issues/2751/events | https://github.com/ollama/ollama/issues/2751 | 2,152,955,339 | I_kwDOJ0Z1Ps6AU33L | 2,751 | Error on run ollama serve on windows 10 | {
"login": "Alias4D",
"id": 27604791,
"node_id": "MDQ6VXNlcjI3NjA0Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/27604791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alias4D",
"html_url": "https://github.com/Alias4D",
"followers_url": "https://api.github.com/users/Alias4... | [] | closed | false | null | [] | null | 4 | 2024-02-25T21:46:51 | 2024-02-26T14:25:27 | 2024-02-26T14:25:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Error when running `ollama serve`**
time=2024-02-26T00:41:24.616+03:00 level=INFO source=images.go:706 msg="total blobs: 0"
time=2024-02-26T00:41:24.627+03:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
panic: bad origin: origins must contain '*' or include http://,https://,chrome-extension://,sa... | {
"login": "Alias4D",
"id": 27604791,
"node_id": "MDQ6VXNlcjI3NjA0Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/27604791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alias4D",
"html_url": "https://github.com/Alias4D",
"followers_url": "https://api.github.com/users/Alias4... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2751/timeline | null | completed | false |
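The panic above is the server rejecting a malformed `OLLAMA_ORIGINS` value: valid settings are either the wildcard, e.g. `OLLAMA_ORIGINS=*`, or a comma-separated list of full origins including the scheme, e.g. `OLLAMA_ORIGINS=http://localhost:3000,https://example.com`. A bare hostname without `http://` or `https://` triggers exactly this error.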
https://api.github.com/repos/ollama/ollama/issues/806 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/806/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/806/comments | https://api.github.com/repos/ollama/ollama/issues/806/events | https://github.com/ollama/ollama/issues/806 | 1,945,578,017 | I_kwDOJ0Z1Ps5z9yoh | 806 | Add System prompt in WizardLM template | {
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2023-10-16T15:53:36 | 2023-12-04T20:19:04 | 2023-12-04T20:19:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I think the following works quite well
```
{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
``` | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/806/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/429 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/429/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/429/comments | https://api.github.com/repos/ollama/ollama/issues/429/events | https://github.com/ollama/ollama/issues/429 | 1,868,409,157 | I_kwDOJ0Z1Ps5vXalF | 429 | Why does Ollama need sudo? | {
"login": "vegabook",
"id": 3780883,
"node_id": "MDQ6VXNlcjM3ODA4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegabook",
"html_url": "https://github.com/vegabook",
"followers_url": "https://api.github.com/users/vegab... | [] | closed | false | null | [] | null | 2 | 2023-08-27T08:27:29 | 2023-08-27T11:41:19 | 2023-08-27T11:39:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I run nix on my mac to isolate all software.
`nix-shell -p ollama` works great since ollama is [available on the unstable channel](https://search.nixos.org/packages?channel=unstable&from=0&size=50&sort=relevance&type=packages&query=ollama).
Works perfectly if I sudo both the server and the client:
<img width=... | {
"login": "vegabook",
"id": 3780883,
"node_id": "MDQ6VXNlcjM3ODA4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegabook",
"html_url": "https://github.com/vegabook",
"followers_url": "https://api.github.com/users/vegab... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/429/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/429/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8633 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8633/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8633/comments | https://api.github.com/repos/ollama/ollama/issues/8633/events | https://github.com/ollama/ollama/pull/8633 | 2,815,643,279 | PR_kwDOJ0Z1Ps6JOZes | 8,633 | my commit | {
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://... | [] | closed | false | null | [] | null | 0 | 2025-01-28T13:15:07 | 2025-01-28T13:17:19 | 2025-01-28T13:17:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8633",
"html_url": "https://github.com/ollama/ollama/pull/8633",
"diff_url": "https://github.com/ollama/ollama/pull/8633.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8633.patch",
"merged_at": null
} | null | {
"login": "aditya-agrawalSFDC",
"id": 122862436,
"node_id": "U_kgDOB1K7ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/122862436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-agrawalSFDC",
"html_url": "https://github.com/aditya-agrawalSFDC",
"followers_url": "https://... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8633/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1636 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1636/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1636/comments | https://api.github.com/repos/ollama/ollama/issues/1636/events | https://github.com/ollama/ollama/issues/1636 | 2,050,938,443 | I_kwDOJ0Z1Ps56PtZL | 1,636 | Error : llama runner process has terminated , on running mistral "ollama run mistral" | {
"login": "yashchittora",
"id": 112685991,
"node_id": "U_kgDOBrdzpw",
"avatar_url": "https://avatars.githubusercontent.com/u/112685991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashchittora",
"html_url": "https://github.com/yashchittora",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 9 | 2023-12-20T16:54:46 | 2024-07-23T19:37:12 | 2024-01-08T21:42:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have a MacBook Air M1. Earlier, the Mistral model ran flawlessly; after the latest update of both Ollama and the Mistral model, it refuses to run.
Any explanation or troubleshooting? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1636/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/745 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/745/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/745/comments | https://api.github.com/repos/ollama/ollama/issues/745/events | https://github.com/ollama/ollama/issues/745 | 1,934,099,225 | I_kwDOJ0Z1Ps5zSAMZ | 745 | why different answers from same model? | {
"login": "Enhitech",
"id": 36785833,
"node_id": "MDQ6VXNlcjM2Nzg1ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/36785833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Enhitech",
"html_url": "https://github.com/Enhitech",
"followers_url": "https://api.github.com/users/Enh... | [] | closed | false | null | [] | null | 1 | 2023-10-10T01:11:20 | 2023-10-11T00:21:04 | 2023-10-11T00:21:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, guys,
I run a llama2 model and access it in three ways: 1. the REST API; 2. the command "ollama run modelname 'prompt'"; 3. an interactive conversational terminal.
I got different answers: 1 and 2 are similar, but 3 is much better than both.
Why? How could I get the same answer as 3 via 1 or 2?
Than... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/745/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2834 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2834/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2834/comments | https://api.github.com/repos/ollama/ollama/issues/2834/events | https://github.com/ollama/ollama/issues/2834 | 2,161,505,243 | I_kwDOJ0Z1Ps6A1fPb | 2,834 | [feature request]Cmd: New Topic | {
"login": "lededev",
"id": 30518126,
"node_id": "MDQ6VXNlcjMwNTE4MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/30518126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lededev",
"html_url": "https://github.com/lededev",
"followers_url": "https://api.github.com/users/ledede... | [] | closed | false | null | [] | null | 1 | 2024-02-29T15:18:48 | 2024-03-01T01:16:02 | 2024-03-01T01:16:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On the command line, pressing `/?` shows no New Topic or New Session feature. Please add one, instead of requiring `/bye` and a rerun. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2834/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/632 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/632/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/632/comments | https://api.github.com/repos/ollama/ollama/issues/632/events | https://github.com/ollama/ollama/pull/632 | 1,917,549,741 | PR_kwDOJ0Z1Ps5bcjLo | 632 | Document response stream chunk delimiter. | {
"login": "JayNakrani",
"id": 6269279,
"node_id": "MDQ6VXNlcjYyNjkyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6269279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JayNakrani",
"html_url": "https://github.com/JayNakrani",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 5 | 2023-09-28T13:24:16 | 2023-09-30T04:46:03 | 2023-09-30T04:45:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/632",
"html_url": "https://github.com/ollama/ollama/pull/632",
"diff_url": "https://github.com/ollama/ollama/pull/632.diff",
"patch_url": "https://github.com/ollama/ollama/pull/632.patch",
"merged_at": "2023-09-30T04:45:52"
} | Discussion on discord at https://discord.com/channels/1128867683291627614/1128867684130508875/1156838261919076352 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/632/timeline | null | null | true |