url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/495 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/495/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/495/comments | https://api.github.com/repos/ollama/ollama/issues/495/events | https://github.com/ollama/ollama/issues/495 | 1,887,329,211 | I_kwDOJ0Z1Ps5wflu7 | 495 | Build Error: Unable to Apply Patch in 'examples/server/server.cpp' during Docker Build Process | {
"login": "avri-schneider",
"id": 6785181,
"node_id": "MDQ6VXNlcjY3ODUxODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6785181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avri-schneider",
"html_url": "https://github.com/avri-schneider",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 11 | 2023-09-08T09:47:09 | 2023-12-06T11:41:27 | 2023-10-28T19:34:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Issue Description:**
During the Docker build process, an error occurred while attempting to apply patches to the 'examples/server/server.cpp' file. The error message indicated that the patch did not apply successfully. Upon investigation, it was discovered that the patches being applied have already been applied t... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/495/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1687 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1687/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1687/comments | https://api.github.com/repos/ollama/ollama/issues/1687/events | https://github.com/ollama/ollama/issues/1687 | 2,054,752,935 | I_kwDOJ0Z1Ps56eQqn | 1,687 | Old Models disappear after Ollama Update (0.1.17) | {
"login": "sthufnagl",
"id": 1492014,
"node_id": "MDQ6VXNlcjE0OTIwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1492014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sthufnagl",
"html_url": "https://github.com/sthufnagl",
"followers_url": "https://api.github.com/users/st... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2023-12-23T10:59:30 | 2024-07-19T07:15:27 | 2023-12-26T12:10:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
**Environment:**
My environment is WSL on Win11.
**Update Command:**
curl https://ollama.ai/install.sh | sh
**Situation:**
After an update to Ollama 0.1.17, all my old models (202 GB) are no longer visible, and when I try to start an old one, the model is downloaded once again. Physically the Model File...
"login": "sthufnagl",
"id": 1492014,
"node_id": "MDQ6VXNlcjE0OTIwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1492014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sthufnagl",
"html_url": "https://github.com/sthufnagl",
"followers_url": "https://api.github.com/users/st... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1687/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1687/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7403 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7403/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7403/comments | https://api.github.com/repos/ollama/ollama/issues/7403/events | https://github.com/ollama/ollama/issues/7403 | 2,619,039,241 | I_kwDOJ0Z1Ps6cG14J | 7,403 | Memory leaks after each prompt on 6.11 kernel with nvidia GPU | {
"login": "regularRandom",
"id": 14252934,
"node_id": "MDQ6VXNlcjE0MjUyOTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/14252934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regularRandom",
"html_url": "https://github.com/regularRandom",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 20 | 2024-10-28T17:21:15 | 2024-11-18T20:05:00 | 2024-11-18T20:05:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It seems Ollama has a memory leak or doesn't free memory after the prompt (execution). I have the following in the logs:
> [Mon Oct 28 13:03:00 2024] ------------[ cut here ]------------
> [Mon Oct 28 13:03:00 2024] WARNING: CPU: 38 PID: 15739 at mm/page_alloc.c:4678 __alloc_pages_nop... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7403/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7403/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5010 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5010/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5010/comments | https://api.github.com/repos/ollama/ollama/issues/5010/events | https://github.com/ollama/ollama/issues/5010 | 2,349,811,726 | I_kwDOJ0Z1Ps6MD0gO | 5,010 | Suggestion for RFC7231 Compliant Endpoint for Model Deletion | {
"login": "JerrettDavis",
"id": 2610199,
"node_id": "MDQ6VXNlcjI2MTAxOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerrettDavis",
"html_url": "https://github.com/JerrettDavis",
"followers_url": "https://api.github.com... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 0 | 2024-06-12T23:00:52 | 2024-11-06T01:22:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Description:**
Currently, the Ollama library’s DELETE /api/delete endpoint requires the model name to be provided in the request body. However, it would be beneficial and more in line with RFC7231 standards to support a URL-based model name specification. This approach is more intuitive and aligns with common RESTfu... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5010/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3206 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3206/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3206/comments | https://api.github.com/repos/ollama/ollama/issues/3206/events | https://github.com/ollama/ollama/issues/3206 | 2,191,084,152 | I_kwDOJ0Z1Ps6CmUp4 | 3,206 | I successfully imported the MiniCPM-2B-dpo-bf16-gguf.gguf model into Ollama and got it running, but during inference the model talks nonsense and hallucinates severely. See screenshots for details. | {
"login": "zhao1012",
"id": 38517343,
"node_id": "MDQ6VXNlcjM4NTE3MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38517343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhao1012",
"html_url": "https://github.com/zhao1012",
"followers_url": "https://api.github.com/users/zha... | [] | closed | false | null | [] | null | 13 | 2024-03-18T02:16:50 | 2024-08-09T16:08:35 | 2024-06-09T17:12:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I successfully imported the MiniCPM-2B-dpo-bf16-gguf.gguf model into Ollama and got it running. However, during inference I found that the model talks nonsense and hallucinates severely. See the screenshots for details.

_Originally posted by @zhao1012 in https://github.com/ollama/ollama/issues/2383#issuecomment-2002754353... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3206/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7498 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7498/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7498/comments | https://api.github.com/repos/ollama/ollama/issues/7498/events | https://github.com/ollama/ollama/pull/7498 | 2,633,737,969 | PR_kwDOJ0Z1Ps6A2LnI | 7,498 | CI: Switch to v13 macos runner | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-11-04T20:07:11 | 2024-11-04T21:04:25 | 2024-11-04T21:02:07 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7498",
"html_url": "https://github.com/ollama/ollama/pull/7498",
"diff_url": "https://github.com/ollama/ollama/pull/7498.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7498.patch",
"merged_at": "2024-11-04T21:02:07"
} | GitHub has started doing brown-outs on the deprecated macos-12 runner which has blocked the 0.4.0 release CI.
I tried using Xcode 14.3.1; however, it generates warnings when trying to target macOS v11. I've verified that 14.1.0 generates valid v11 binaries without these warnings based on our current build_darwin.sh s...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7498/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7037 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7037/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7037/comments | https://api.github.com/repos/ollama/ollama/issues/7037/events | https://github.com/ollama/ollama/issues/7037 | 2,555,089,394 | I_kwDOJ0Z1Ps6YS5Hy | 7,037 | ollama app not running | {
"login": "horyekhunley",
"id": 23106322,
"node_id": "MDQ6VXNlcjIzMTA2MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/23106322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/horyekhunley",
"html_url": "https://github.com/horyekhunley",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-09-29T18:59:47 | 2024-10-23T00:17:22 | 2024-10-23T00:17:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Normally when I install Ollama, it just works as is; I don't usually have to run 'ollama serve'. But now I have to do that if I want to use Ollama, and if I don't I get the error:
`Error: could not connect to ollama app, is it running?`
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7037/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8097 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8097/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8097/comments | https://api.github.com/repos/ollama/ollama/issues/8097/events | https://github.com/ollama/ollama/issues/8097 | 2,739,860,598 | I_kwDOJ0Z1Ps6jTvR2 | 8,097 | Add the ability to "skip vision decoder" to make it easier to support future models | {
"login": "vYLQs6",
"id": 143073604,
"node_id": "U_kgDOCIchRA",
"avatar_url": "https://avatars.githubusercontent.com/u/143073604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vYLQs6",
"html_url": "https://github.com/vYLQs6",
"followers_url": "https://api.github.com/users/vYLQs6/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-14T13:00:24 | 2024-12-17T19:40:08 | 2024-12-17T19:40:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Since most AI companies have their own vision models, in 2025, there might be more models that release only a vision variant instead of both text and vision variants.
To make it easier to support these new models, it would be nice to be able to skip the vision decoder and infer only the text part.
Is this possib... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8097/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1320 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1320/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1320/comments | https://api.github.com/repos/ollama/ollama/issues/1320/events | https://github.com/ollama/ollama/pull/1320 | 2,017,217,758 | PR_kwDOJ0Z1Ps5gs_Wy | 1,320 | Do no overwrite systemd service file | {
"login": "ex3ndr",
"id": 400659,
"node_id": "MDQ6VXNlcjQwMDY1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/400659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ex3ndr",
"html_url": "https://github.com/ex3ndr",
"followers_url": "https://api.github.com/users/ex3ndr/follow... | [] | closed | false | null | [] | null | 4 | 2023-11-29T18:54:10 | 2024-02-20T03:27:22 | 2024-02-20T03:27:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1320",
"html_url": "https://github.com/ollama/ollama/pull/1320",
"diff_url": "https://github.com/ollama/ollama/pull/1320.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1320.patch",
"merged_at": null
} | Currently, the systemd service file is lost during an upgrade; this fix avoids overwriting the file. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1320/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2132/comments | https://api.github.com/repos/ollama/ollama/issues/2132/events | https://github.com/ollama/ollama/issues/2132 | 2,093,042,795 | I_kwDOJ0Z1Ps58wUxr | 2,132 | How to solve ConnectionError ([Errno 111] Connection refused) | {
"login": "yliu2702",
"id": 154867456,
"node_id": "U_kgDOCTsXAA",
"avatar_url": "https://avatars.githubusercontent.com/u/154867456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yliu2702",
"html_url": "https://github.com/yliu2702",
"followers_url": "https://api.github.com/users/yliu2702/... | [] | closed | false | null | [] | null | 27 | 2024-01-22T04:20:16 | 2024-10-02T15:02:28 | 2024-05-14T19:06:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I tried to access 'llama 2' and 'mistral' model to build a local open-source LLM chatbot. However, maybe I access your website too ofter during debugging, I met this error : 'ConnectionError: HTTPConnectionPool(host=‘0.0.0.0’, port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError(‘<... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2132/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4641 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4641/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4641/comments | https://api.github.com/repos/ollama/ollama/issues/4641/events | https://github.com/ollama/ollama/issues/4641 | 2,317,318,135 | I_kwDOJ0Z1Ps6KH3f3 | 4,641 | What's happening?When I enter the Serve command. | {
"login": "SuzuKaO",
"id": 32011143,
"node_id": "MDQ6VXNlcjMyMDExMTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/32011143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuzuKaO",
"html_url": "https://github.com/SuzuKaO",
"followers_url": "https://api.github.com/users/SuzuKa... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 2 | 2024-05-25T23:55:37 | 2024-08-09T23:48:41 | 2024-08-09T23:48:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4641/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8661 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8661/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8661/comments | https://api.github.com/repos/ollama/ollama/issues/8661/events | https://github.com/ollama/ollama/issues/8661 | 2,818,282,626 | I_kwDOJ0Z1Ps6n-5SC | 8,661 | Will Ollama run on the NPU(ANE) of Apple M silicon? | {
"login": "imJack6",
"id": 58357771,
"node_id": "MDQ6VXNlcjU4MzU3Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58357771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imJack6",
"html_url": "https://github.com/imJack6",
"followers_url": "https://api.github.com/users/imJack... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-01-29T13:50:08 | 2025-01-29T13:50:08 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | RT | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8661/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8661/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8237 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8237/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8237/comments | https://api.github.com/repos/ollama/ollama/issues/8237/events | https://github.com/ollama/ollama/pull/8237 | 2,758,607,598 | PR_kwDOJ0Z1Ps6GM7FK | 8,237 | Changes macOS installer to skip symlink step if ollama is already in path. | {
"login": "dey-indranil",
"id": 18570914,
"node_id": "MDQ6VXNlcjE4NTcwOTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/18570914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dey-indranil",
"html_url": "https://github.com/dey-indranil",
"followers_url": "https://api.github.c... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-12-25T08:08:00 | 2025-01-28T21:42:38 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8237",
"html_url": "https://github.com/ollama/ollama/pull/8237",
"diff_url": "https://github.com/ollama/ollama/pull/8237.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8237.patch",
"merged_at": null
} | Resolves [283](https://github.com/ollama/ollama/issues/283) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8237/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8530 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8530/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8530/comments | https://api.github.com/repos/ollama/ollama/issues/8530/events | https://github.com/ollama/ollama/issues/8530 | 2,803,368,900 | I_kwDOJ0Z1Ps6nGAPE | 8,530 | ollama pull hangs at ~90% completion | {
"login": "bdytx5",
"id": 32812705,
"node_id": "MDQ6VXNlcjMyODEyNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/32812705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bdytx5",
"html_url": "https://github.com/bdytx5",
"followers_url": "https://api.github.com/users/bdytx5/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-22T04:47:45 | 2025-01-22T04:47:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="1266" alt="Image" src="https://github.com/user-attachments/assets/821bda2a-f119-4c46-b72d-c00305072cc4" />
Seems to work fine after a couple retries... Very strange
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8530/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6630 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6630/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6630/comments | https://api.github.com/repos/ollama/ollama/issues/6630/events | https://github.com/ollama/ollama/pull/6630 | 2,504,571,966 | PR_kwDOJ0Z1Ps56Wqct | 6,630 | docs(integrations): add claude-dev | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [] | closed | false | null | [] | null | 0 | 2024-09-04T07:50:19 | 2024-09-04T20:01:55 | 2024-09-04T13:32:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6630",
"html_url": "https://github.com/ollama/ollama/pull/6630",
"diff_url": "https://github.com/ollama/ollama/pull/6630.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6630.patch",
"merged_at": "2024-09-04T13:32:26"
} | - Claude Dev [just added](https://github.com/saoudrizwan/claude-dev/releases/tag/v1.5.19) support for Ollama.
It's currently via the OpenAI-compatible API, but it specifically calls out Ollama as an option.
<img width="594" alt="image" src="https://github.com/user-attachments/assets/21167eb3-5020-4f21-b354-27d4e7e04... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6630/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1897/comments | https://api.github.com/repos/ollama/ollama/issues/1897/events | https://github.com/ollama/ollama/pull/1897 | 2,074,502,272 | PR_kwDOJ0Z1Ps5jsRjD | 1,897 | Make sure the WSL version of libnvidia-ml.so is loaded | {
"login": "taweili",
"id": 6722,
"node_id": "MDQ6VXNlcjY3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taweili",
"html_url": "https://github.com/taweili",
"followers_url": "https://api.github.com/users/taweili/followers"... | [] | closed | false | null | [] | null | 1 | 2024-01-10T14:30:52 | 2024-01-11T08:37:46 | 2024-01-11T08:37:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1897",
"html_url": "https://github.com/ollama/ollama/pull/1897",
"diff_url": "https://github.com/ollama/ollama/pull/1897.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1897.patch",
"merged_at": null
In a WSL environment, /usr/lib/wsl/lib/libnvidia-ml.so.1 should be used instead of the generic libnvidia-ml from nvidia-compute.
| {
"login": "taweili",
"id": 6722,
"node_id": "MDQ6VXNlcjY3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taweili",
"html_url": "https://github.com/taweili",
"followers_url": "https://api.github.com/users/taweili/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1897/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5648/comments | https://api.github.com/repos/ollama/ollama/issues/5648/events | https://github.com/ollama/ollama/issues/5648 | 2,404,993,015 | I_kwDOJ0Z1Ps6PWUf3 | 5,648 | image description model is too slow | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-07-12T08:00:57 | 2024-07-23T21:55:10 | 2024-07-23T21:54:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After updating to the latest version, I am using llava:13b on Ubuntu, and the API call now takes about 1 minute.
It originally took about 10 seconds, but it has become too slow.
The graphics card is an A30.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
i just use curl -fsS... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5648/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/931 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/931/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/931/comments | https://api.github.com/repos/ollama/ollama/issues/931/events | https://github.com/ollama/ollama/issues/931 | 1,964,792,441 | I_kwDOJ0Z1Ps51HFp5 | 931 | How do we stop a model to release GPU memory? (not ollama server). | {
"login": "riskk21",
"id": 22312065,
"node_id": "MDQ6VXNlcjIyMzEyMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22312065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riskk21",
"html_url": "https://github.com/riskk21",
"followers_url": "https://api.github.com/users/riskk2... | [] | closed | false | null | [] | null | 11 | 2023-10-27T05:26:44 | 2024-04-23T12:41:37 | 2024-02-20T00:57:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do we stop a model to release GPU memory? (not ollama server). | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/931/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/931/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4862 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4862/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4862/comments | https://api.github.com/repos/ollama/ollama/issues/4862/events | https://github.com/ollama/ollama/issues/4862 | 2,338,720,902 | I_kwDOJ0Z1Ps6LZgyG | 4,862 | Probably I am missing something... | {
"login": "Zibri",
"id": 855176,
"node_id": "MDQ6VXNlcjg1NTE3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/855176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zibri",
"html_url": "https://github.com/Zibri",
"followers_url": "https://api.github.com/users/Zibri/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-06-06T16:44:53 | 2024-06-06T21:26:23 | 2024-06-06T21:26:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I created a file containing: ``FROM I:\models\Mistral-7b-Instruct-v0.3.f16.q6_k.gguf``
Then I did: `ollama create mistral file`
The model loaded.
Then I did:
ollama run mistral
and if I say "Hello" it starts talking by itself, introducing itself every time with a different identity.
... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4862/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2877 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2877/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2877/comments | https://api.github.com/repos/ollama/ollama/issues/2877/events | https://github.com/ollama/ollama/issues/2877 | 2,164,785,912 | I_kwDOJ0Z1Ps6BCAL4 | 2,877 | Getting error with `nomic-embed-text` | {
"login": "isavita",
"id": 5805397,
"node_id": "MDQ6VXNlcjU4MDUzOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5805397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isavita",
"html_url": "https://github.com/isavita",
"followers_url": "https://api.github.com/users/isavita/... | [] | closed | false | null | [] | null | 2 | 2024-03-02T12:20:27 | 2024-08-07T02:41:32 | 2024-03-02T13:00:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Description
It might be me doing something wrong, but when I try to run `ollama run nomic-embed-text` I get the following error
```shell
> ollama run nomic-embed-text
Error: embedding models do not support chat
```
Here is info for my os and the ollama version
```shell
> ollama -v
ollama version is 0.1.27... | {
"login": "isavita",
"id": 5805397,
"node_id": "MDQ6VXNlcjU4MDUzOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5805397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isavita",
"html_url": "https://github.com/isavita",
"followers_url": "https://api.github.com/users/isavita/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2877/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6795 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6795/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6795/comments | https://api.github.com/repos/ollama/ollama/issues/6795/events | https://github.com/ollama/ollama/issues/6795 | 2,525,444,823 | I_kwDOJ0Z1Ps6WhzrX | 6,795 | there are various models which are provided by default by Meta Llama; when downloaded, I have tried but couldn't find them | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 6 | 2024-09-13T18:40:54 | 2024-09-14T18:09:59 | 2024-09-14T16:58:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [8b-instruct-fp16](https://ollama.com/library/llama3.1:8b-instruct-fp16)
4aacac419454 • 16GB • Updated 3 days ago
[8b-instruct-q2_K](https://ollama.com/library/llama3.1:8b-instruct-q2_K)
44a139eeb344 • 3.2GB • Updated 3 days ago
[8b-instruct-q3_K_S](https://ollama.com/library/llama3.1:8b-instruct-q3_K_S)
16268e519... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6795/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6400 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6400/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6400/comments | https://api.github.com/repos/ollama/ollama/issues/6400/events | https://github.com/ollama/ollama/pull/6400 | 2,471,622,819 | PR_kwDOJ0Z1Ps54pUoL | 6,400 | Add arm64 cuda jetpack variants | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 4 | 2024-08-17T18:25:44 | 2024-10-15T22:39:59 | 2024-10-15T22:39:27 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6400",
"html_url": "https://github.com/ollama/ollama/pull/6400",
"diff_url": "https://github.com/ollama/ollama/pull/6400.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6400.patch",
"merged_at": null
} | This adds 2 new variants for the arm64 build to support nvidia jetson systems based on jetpack 5 and 6. Jetpack 4 is too old to be built with our toolchain (the older cuda requires an old gcc which can't build llama.cpp) and will remain unsupported.
The sbsa discrete GPU cuda libraries we bundle in the existing arm... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6400/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1623 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1623/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1623/comments | https://api.github.com/repos/ollama/ollama/issues/1623/events | https://github.com/ollama/ollama/pull/1623 | 2,050,062,266 | PR_kwDOJ0Z1Ps5icesJ | 1,623 | adds ooo to Community Integrations in README | {
"login": "Npahlfer",
"id": 1068840,
"node_id": "MDQ6VXNlcjEwNjg4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1068840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Npahlfer",
"html_url": "https://github.com/Npahlfer",
"followers_url": "https://api.github.com/users/Npahl... | [] | closed | false | null | [] | null | 0 | 2023-12-20T08:14:18 | 2024-03-25T19:08:34 | 2024-03-25T19:08:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1623",
"html_url": "https://github.com/ollama/ollama/pull/1623",
"diff_url": "https://github.com/ollama/ollama/pull/1623.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1623.patch",
"merged_at": "2024-03-25T19:08:33"
} | Adds a link to a terminal command (https://github.com/npahlfer/ooo) that lets you pipe in outputs from other terminal commands "into" Ollama and parse them through your prompt.
This way you can parse command outputs in an easy way!
You can also just prompt Ollama like you normally would eg. `$ ooo how long is a rope`... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1623/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4614 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4614/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4614/comments | https://api.github.com/repos/ollama/ollama/issues/4614/events | https://github.com/ollama/ollama/issues/4614 | 2,315,522,822 | I_kwDOJ0Z1Ps6KBBMG | 4,614 | Cpu selected over GPU when running ollama service | {
"login": "Talleyrand-34",
"id": 119809076,
"node_id": "U_kgDOByQkNA",
"avatar_url": "https://avatars.githubusercontent.com/u/119809076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Talleyrand-34",
"html_url": "https://github.com/Talleyrand-34",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-05-24T14:16:23 | 2024-05-28T16:51:07 | 2024-05-28T16:51:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The problem is that I cannot specify to use the GPU over the CPU. The instructions for getting the GPU running are also not clear.
Is the GPU not supported? If yes, which options do I have?
```
ollama ps
NAME ID SIZE PROCESSOR UNTIL
codellama:latest... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4614/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3846 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3846/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3846/comments | https://api.github.com/repos/ollama/ollama/issues/3846/events | https://github.com/ollama/ollama/pull/3846 | 2,259,362,406 | PR_kwDOJ0Z1Ps5tgGN3 | 3,846 | Detect and recover if runner removed | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-23T17:06:59 | 2024-04-23T20:14:14 | 2024-04-23T20:14:12 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3846",
"html_url": "https://github.com/ollama/ollama/pull/3846",
"diff_url": "https://github.com/ollama/ollama/pull/3846.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3846.patch",
"merged_at": "2024-04-23T20:14:12"
} | Tmp cleaners can nuke the file out from underneath us. This detects the missing runner, and re-initializes the payloads.
Manually tested by hand-removing the server after loading it once, triggering an unload with `keep_alive: 0`, then sending another request; saw the log message, and it loaded correctly. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3846/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2561 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2561/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2561/comments | https://api.github.com/repos/ollama/ollama/issues/2561/events | https://github.com/ollama/ollama/issues/2561 | 2,140,009,166 | I_kwDOJ0Z1Ps5_jfLO | 2,561 | Dark mode request | {
"login": "nav9",
"id": 2093933,
"node_id": "MDQ6VXNlcjIwOTM5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2093933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nav9",
"html_url": "https://github.com/nav9",
"followers_url": "https://api.github.com/users/nav9/followers",
... | [
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2024-02-17T11:57:00 | 2024-03-05T19:19:05 | 2024-03-05T19:19:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Kindly provide a dark theme for https://ollama.com/. The intensity of the bright white color gets strenuous on the eyes. A toggle button at the top of the screen to activate dark mode or a default dark mode would help. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2561/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2561/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6694 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6694/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6694/comments | https://api.github.com/repos/ollama/ollama/issues/6694/events | https://github.com/ollama/ollama/issues/6694 | 2,512,149,605 | I_kwDOJ0Z1Ps6VvFxl | 6,694 | A mixture of experts model | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-09-08T01:37:57 | 2024-09-12T00:23:11 | 2024-09-12T00:23:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/allenai/OLMoE-1B-7B-0924 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6694/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6694/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4817 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4817/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4817/comments | https://api.github.com/repos/ollama/ollama/issues/4817/events | https://github.com/ollama/ollama/issues/4817 | 2,333,886,407 | I_kwDOJ0Z1Ps6LHEfH | 4,817 | Apple neural engine | {
"login": "EnderRobber101",
"id": 116851736,
"node_id": "U_kgDOBvcEGA",
"avatar_url": "https://avatars.githubusercontent.com/u/116851736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EnderRobber101",
"html_url": "https://github.com/EnderRobber101",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-06-04T16:01:07 | 2024-07-11T02:40:07 | 2024-07-11T02:40:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was wondering if ollama will support Apple neural engine for faster computations in the future? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4817/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7921 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7921/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7921/comments | https://api.github.com/repos/ollama/ollama/issues/7921/events | https://github.com/ollama/ollama/pull/7921 | 2,716,124,067 | PR_kwDOJ0Z1Ps6D88PC | 7,921 | server: feedback before failing push on uppercase | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2024-12-03T22:49:22 | 2024-12-09T22:31:27 | 2024-12-09T22:31:27 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7921",
"html_url": "https://github.com/ollama/ollama/pull/7921",
"diff_url": "https://github.com/ollama/ollama/pull/7921.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7921.patch",
"merged_at": null
} | When a username or model name is uppercase, the registry will reject the push. This is done for file-system compatibility. If we rely on the registry error on push, the message returned is 'file not found', which does not convey why the push actually failed.
Before:
```bash
> ollama push TEST_CAPS/x
retrieving mani... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7921/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1664 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1664/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1664/comments | https://api.github.com/repos/ollama/ollama/issues/1664/events | https://github.com/ollama/ollama/issues/1664 | 2,052,976,992 | I_kwDOJ0Z1Ps56XfFg | 1,664 | CLI display flickers in SSH session on pull | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 4 | 2023-12-21T19:47:38 | 2024-11-06T18:50:26 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When pulling, I occasionally see the loading bar flicker during and after the download. This can be seen more dramatically on a fast connection.
System details:
```
OS: Debian 11
Terminal: Warp
Ollama: v0.1.17
```
https://github.com/jmorganca/ollama/assets/5853428/ec9c2410-f5c0-4a41-a9ef-73bee50b99f2 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1664/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/3888 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3888/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3888/comments | https://api.github.com/repos/ollama/ollama/issues/3888/events | https://github.com/ollama/ollama/issues/3888 | 2,262,037,183 | I_kwDOJ0Z1Ps6G0_K_ | 3,888 | Restrict model pulling based on license | {
"login": "slyt",
"id": 5429371,
"node_id": "MDQ6VXNlcjU0MjkzNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5429371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyt",
"html_url": "https://github.com/slyt",
"followers_url": "https://api.github.com/users/slyt/followers",
... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-04-24T19:36:13 | 2024-04-24T21:04:54 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be great if there were a configurable option to restrict the ability of the Ollama server to pull certain models based on the model's license. This is necessary for organizations that would like to use Ollama as a model runtime, but cannot use some models due to limitations in their licenses.
A check could be p... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3888/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3888/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1589 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1589/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1589/comments | https://api.github.com/repos/ollama/ollama/issues/1589/events | https://github.com/ollama/ollama/issues/1589 | 2,047,604,820 | I_kwDOJ0Z1Ps56C_hU | 1,589 | Access internet | {
"login": "PeachesMLG",
"id": 26843204,
"node_id": "MDQ6VXNlcjI2ODQzMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26843204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeachesMLG",
"html_url": "https://github.com/PeachesMLG",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2023-12-18T23:01:44 | 2024-02-05T18:00:33 | 2023-12-19T05:08:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm customising my own model, using the steps in the README.
In this Modelfile I added a link to an FAQ with a bunch of information available, as well as a GitHub URL, in hopes it can search open/closed issues to answer queries.
However, it doesn't seem to be querying the URLs the way OpenAI GPT-4 does.
Is this current... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1589/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6693 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6693/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6693/comments | https://api.github.com/repos/ollama/ollama/issues/6693/events | https://github.com/ollama/ollama/pull/6693 | 2,512,092,218 | PR_kwDOJ0Z1Ps56wUK_ | 6,693 | Notify the user if systemd is not running during install | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2024-09-07T21:58:33 | 2024-11-19T09:40:11 | 2024-11-18T23:02:41 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6693",
"html_url": "https://github.com/ollama/ollama/pull/6693",
"diff_url": "https://github.com/ollama/ollama/pull/6693.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6693.patch",
"merged_at": "2024-11-18T23:02:41"
} | Fixes: https://github.com/ollama/ollama/issues/6636 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6693/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7640 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7640/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7640/comments | https://api.github.com/repos/ollama/ollama/issues/7640/events | https://github.com/ollama/ollama/issues/7640 | 2,653,937,136 | I_kwDOJ0Z1Ps6eL93w | 7,640 | Error: POST predict: Post "http://127.0.0.1:42623/completion": EOF | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 29 | 2024-11-13T02:35:29 | 2024-11-25T19:41:55 | 2024-11-25T19:41:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
(CodeLlama) developer@ai:~/PROJECTS/OllamaModelFiles$ ~/ollama/ollama run gemma-2-27b-it-Q8_0:latest
>>> Hello.
Error: POST predict: Post "http://127.0.0.1:42623/completion": EOF
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
Latest | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7640/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5484 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5484/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5484/comments | https://api.github.com/repos/ollama/ollama/issues/5484/events | https://github.com/ollama/ollama/issues/5484 | 2,390,764,489 | I_kwDOJ0Z1Ps6OgCvJ | 5,484 | Unnecessary use of GPUs when I run "ollama pull" | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-04T12:32:02 | 2024-07-04T12:43:23 | 2024-07-04T12:43:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run "ollama pull" just to download models, e.g.
> ollama pull deepseek-v2:16b
both of my GPUs are fully at 100% usage, which is unnecessary for a download-only task.
```
========================================== ROCm System Management Interface ======================================... | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5484/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5352 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5352/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5352/comments | https://api.github.com/repos/ollama/ollama/issues/5352/events | https://github.com/ollama/ollama/issues/5352 | 2,379,429,798 | I_kwDOJ0Z1Ps6N0zem | 5,352 | [BUG]: Gemma2 crashes on run. | {
"login": "jasper-clarke",
"id": 154771146,
"node_id": "U_kgDOCTmeyg",
"avatar_url": "https://avatars.githubusercontent.com/u/154771146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasper-clarke",
"html_url": "https://github.com/jasper-clarke",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-28T02:23:22 | 2024-06-28T02:36:40 | 2024-06-28T02:36:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running the following in sequence crashes with the below output.
1. `ollama pull gemma2`
2. `ollama run gemma2`
Output:
`Error: llama runner process has terminated: signal: aborted (core dumped)`
Coredumpctl:
```
PID: 3776 (ollama_llama_se)
UID: 61547 (ollama)
... | {
"login": "jasper-clarke",
"id": 154771146,
"node_id": "U_kgDOCTmeyg",
"avatar_url": "https://avatars.githubusercontent.com/u/154771146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasper-clarke",
"html_url": "https://github.com/jasper-clarke",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5352/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5908 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5908/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5908/comments | https://api.github.com/repos/ollama/ollama/issues/5908/events | https://github.com/ollama/ollama/issues/5908 | 2,427,384,900 | I_kwDOJ0Z1Ps6QrvRE | 5,908 | GPU ID initialization incorrect - CPUs not always first in list | {
"login": "7910f6ba7ee4",
"id": 89554543,
"node_id": "MDQ6VXNlcjg5NTU0NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/89554543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7910f6ba7ee4",
"html_url": "https://github.com/7910f6ba7ee4",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-07-24T11:50:36 | 2024-07-29T21:24:58 | 2024-07-29T21:24:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
In `amd_linux.go`, it is assumed that "CPUs are always first in the list" when calculating the gpu id:
```
// CPUs are always first in the list
gpuID := nodeID - cpuCount
```
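One plausible way the "CPUs are always first" assumption breaks once there are more than 10 topology nodes is lexical ordering of the node directory names (this failure mode is an assumption on my part — the report above is truncated). A minimal Go sketch of the symptom and a numeric-sort fix:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortNumeric orders topology-node directory names by numeric value
// rather than lexically, so node "2" comes before node "10". Without
// this, positional arithmetic like `gpuID := nodeID - cpuCount` can
// point at the wrong device once IDs pass 9.
func sortNumeric(nodes []string) []string {
	sort.Slice(nodes, func(i, j int) bool {
		a, _ := strconv.Atoi(nodes[i])
		b, _ := strconv.Atoi(nodes[j])
		return a < b
	})
	return nodes
}

func main() {
	// A hypothetical lexically sorted directory listing: "10" and "11"
	// land before "2", so CPUs are no longer guaranteed to come first.
	listing := []string{"0", "1", "10", "11", "2", "3"}
	fmt.Println(sortNumeric(listing)) // [0 1 2 3 10 11]
}
```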
However, this is not always the case when the number of topology nodes is greater than 10 (which I unfortunately h... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5908/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3320 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3320/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3320/comments | https://api.github.com/repos/ollama/ollama/issues/3320/events | https://github.com/ollama/ollama/pull/3320 | 2,204,165,579 | PR_kwDOJ0Z1Ps5qk1BJ | 3,320 | llm: prevent race appending to slice | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-03-24T03:48:22 | 2024-03-24T18:35:55 | 2024-03-24T18:35:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3320",
"html_url": "https://github.com/ollama/ollama/pull/3320",
"diff_url": "https://github.com/ollama/ollama/pull/3320.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3320.patch",
"merged_at": "2024-03-24T18:35:55"
} | llm: prevent race appending to slice
Previously, multiple goroutines were appending to the same unguarded
slice.
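A dependency-free sketch of the guarded pattern this PR describes — a mutex protecting concurrent appends to a shared slice — using `sync.WaitGroup` in place of the errgroup machinery (names are illustrative, not the actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// collect appends results from many goroutines into one slice. The
// mutex guards each append; without it, concurrent appends to the
// shared slice are a data race on the slice header.
func collect(n int) []int {
	var (
		mu      sync.Mutex
		results []int // idiomatic zero-value declaration
		wg      sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			mu.Lock()
			results = append(results, v)
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(len(collect(100))) // 100
}
```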
Also, convert slice declaration to idiomatic zero value form.
Also, convert errgroup.Group declaration to idiomatic zero value form. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3320/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5447 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5447/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5447/comments | https://api.github.com/repos/ollama/ollama/issues/5447/events | https://github.com/ollama/ollama/pull/5447 | 2,387,271,135 | PR_kwDOJ0Z1Ps50QRW1 | 5,447 | Only set default keep_alive on initial model load | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-02T22:35:29 | 2024-07-03T22:34:40 | 2024-07-03T22:34:38 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5447",
"html_url": "https://github.com/ollama/ollama/pull/5447",
"diff_url": "https://github.com/ollama/ollama/pull/5447.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5447.patch",
"merged_at": "2024-07-03T22:34:38"
} | This change fixes the handling of keep_alive so that if the client request omits the setting, we only set it on the initial load. Once the model is loaded, if new requests leave it unset, we'll keep whatever keep_alive was already there.
Fixes #5272
```
% ollama run llama3 --keepalive 1h hello
Hello! It's nice to meet yo... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5447/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5447/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7229 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7229/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7229/comments | https://api.github.com/repos/ollama/ollama/issues/7229/events | https://github.com/ollama/ollama/pull/7229 | 2,592,597,237 | PR_kwDOJ0Z1Ps5-3czw | 7,229 | Move Go code out of llm package | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-10-16T17:38:20 | 2025-01-19T19:28:44 | 2025-01-19T19:28:43 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7229",
"html_url": "https://github.com/ollama/ollama/pull/7229",
"diff_url": "https://github.com/ollama/ollama/pull/7229.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7229.patch",
"merged_at": null
This can be deferred until after the 0.4.0 release as a follow-up cleanup step.
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7229/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1105 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1105/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1105/comments | https://api.github.com/repos/ollama/ollama/issues/1105/events | https://github.com/ollama/ollama/issues/1105 | 1,989,631,987 | I_kwDOJ0Z1Ps52l1_z | 1,105 | Out of memory when using multiple GPUs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 5 | 2023-11-12T23:24:50 | 2024-01-10T13:46:31 | 2024-01-10T13:46:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When a system has multiple GPUs generation (ex: `ollama run ...`) may fail with an `out of memory` error.
```
Nov 05 22:41:50 example.com ollama[943528]: 2023/11/05 22:41:50 llama.go:259: 7197 MB VRAM available, loading up to 47 GPU layers
Nov 05 22:41:50 example.com ollama[943528]: 2023/11/05 22:41:50 llama.go:37... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1105/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3133 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3133/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3133/comments | https://api.github.com/repos/ollama/ollama/issues/3133/events | https://github.com/ollama/ollama/issues/3133 | 2,185,252,286 | I_kwDOJ0Z1Ps6CQE2- | 3,133 | v0.1.29 #bug | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | [] | closed | false | null | [] | null | 2 | 2024-03-14T02:25:29 | 2024-06-11T05:53:34 | 2024-03-14T02:39:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | v0.1.29 returns a 403 error! To add context: I access port 11434 through an frpc proxy, and versions prior to v0.1.29 all worked fine. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3133/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1645/comments | https://api.github.com/repos/ollama/ollama/issues/1645/events | https://github.com/ollama/ollama/issues/1645 | 2,051,405,991 | I_kwDOJ0Z1Ps56Rfin | 1,645 | Dark mode for ollama.com | {
"login": "MaherJendoubi",
"id": 1798510,
"node_id": "MDQ6VXNlcjE3OTg1MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1798510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaherJendoubi",
"html_url": "https://github.com/MaherJendoubi",
"followers_url": "https://api.github.... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.git... | null | 3 | 2023-12-20T22:43:24 | 2024-12-25T21:51:37 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just to protect your eyes, especially during the night. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1645/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1645/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4259 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4259/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4259/comments | https://api.github.com/repos/ollama/ollama/issues/4259/events | https://github.com/ollama/ollama/issues/4259 | 2,285,514,065 | I_kwDOJ0Z1Ps6IOi1R | 4,259 | stop loading model while i close my computer. | {
"login": "chaserstrong",
"id": 18061322,
"node_id": "MDQ6VXNlcjE4MDYxMzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/18061322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaserstrong",
"html_url": "https://github.com/chaserstrong",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-05-08T12:46:07 | 2024-05-08T12:46:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
During the download, the computer screen stopped. When I opened it again, I found that it was stuck at the previous download progress. Even if I download another model, it will still be like this.
<img width="566" alt="image" src="https://github.com/ollama/ollama/assets/18061322/ca8379f3-e898-... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4259/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/575 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/575/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/575/comments | https://api.github.com/repos/ollama/ollama/issues/575/events | https://github.com/ollama/ollama/pull/575 | 1,909,295,183 | PR_kwDOJ0Z1Ps5bAwZi | 575 | fix ipv6 parse ip | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-22T17:42:18 | 2023-09-22T18:47:12 | 2023-09-22T18:47:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/575",
"html_url": "https://github.com/ollama/ollama/pull/575",
"diff_url": "https://github.com/ollama/ollama/pull/575.diff",
"patch_url": "https://github.com/ollama/ollama/pull/575.patch",
"merged_at": "2023-09-22T18:47:11"
} | `net.ParseIP` for IPv6 doesn't expect `[]` so trim it | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/575/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1118 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1118/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1118/comments | https://api.github.com/repos/ollama/ollama/issues/1118/events | https://github.com/ollama/ollama/issues/1118 | 1,991,949,557 | I_kwDOJ0Z1Ps52urz1 | 1,118 | Verbose request logs for `ollama serve` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-11-14T04:05:32 | 2024-01-28T23:22:36 | 2024-01-28T23:22:36 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It can be hard to debug what kind of requests `ollama serve` is receiving when using SDKs or other tooling with it. A way to log full requests would be helpful for this. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1118/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1118/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2947 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2947/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2947/comments | https://api.github.com/repos/ollama/ollama/issues/2947/events | https://github.com/ollama/ollama/issues/2947 | 2,170,956,332 | I_kwDOJ0Z1Ps6BZios | 2,947 | I need to move the ollama into a no-internet-webserver,how should I backup all the files in my windows/linux ollama folder | {
"login": "sddzcuigc",
"id": 85976753,
"node_id": "MDQ6VXNlcjg1OTc2NzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/85976753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sddzcuigc",
"html_url": "https://github.com/sddzcuigc",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-03-06T08:48:58 | 2024-03-12T01:21:14 | 2024-03-12T01:21:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I realise the Windows Ollama uses the .ollama folder to store the models in. But this is still not a very good way to transport them to another computer. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2947/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3525 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3525/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3525/comments | https://api.github.com/repos/ollama/ollama/issues/3525/events | https://github.com/ollama/ollama/issues/3525 | 2,229,801,582 | I_kwDOJ0Z1Ps6E6BJu | 3,525 | error: listen tcp: lookup tcp/\ollama: unknown port | {
"login": "bkdigitalworld",
"id": 166310020,
"node_id": "U_kgDOCemwhA",
"avatar_url": "https://avatars.githubusercontent.com/u/166310020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkdigitalworld",
"html_url": "https://github.com/bkdigitalworld",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-07T14:42:57 | 2024-05-15T00:05:17 | 2024-05-15T00:05:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could not open the interface of ollama:
time=2024-04-07T22:40:43.318+08:00 level=WARN source=server.go:113 msg="server crash 13 - exit code 1 - respawning"
time=2024-04-07T22:40:43.820+08:00 level=ERROR source=server.go:116 msg="failed to restart server exec: already started"
time=2024-04-07T22:40:56.824+08:00 l... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3525/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2475 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2475/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2475/comments | https://api.github.com/repos/ollama/ollama/issues/2475/events | https://github.com/ollama/ollama/issues/2475 | 2,132,503,645 | I_kwDOJ0Z1Ps5_G2xd | 2,475 | Request to add leo-hessianai to ollama | {
"login": "arsenij-ust",
"id": 61419866,
"node_id": "MDQ6VXNlcjYxNDE5ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/61419866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arsenij-ust",
"html_url": "https://github.com/arsenij-ust",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396205,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2abQ",
"url": "https://api.github.com/repos/ollama/ollama/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
}
] | open | false | null | [] | null | 2 | 2024-02-13T14:50:03 | 2024-12-31T22:47:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi guys,
I tried to use the leo-hessianai-7B model on Ollama. I used the GGUF file (Q4_K_M.gguf from here https://huggingface.co/TheBloke/leo-hessianai-7B-GGUF/tree/main) and followed the instructions from Ollama (https://github.com/ollama/ollama/blob/main/docs/import.md). I already managed to generate answers with th... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2475/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2475/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6223 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6223/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6223/comments | https://api.github.com/repos/ollama/ollama/issues/6223/events | https://github.com/ollama/ollama/pull/6223 | 2,452,425,684 | PR_kwDOJ0Z1Ps53p9b9 | 6,223 | feat: add gin BasicAuth using OLLAMA_BASIC_AUTH_KEY setup in env | {
"login": "kemalelmizan",
"id": 15223219,
"node_id": "MDQ6VXNlcjE1MjIzMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/15223219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kemalelmizan",
"html_url": "https://github.com/kemalelmizan",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 2 | 2024-08-07T05:06:21 | 2024-11-25T00:03:03 | 2024-11-25T00:03:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6223",
"html_url": "https://github.com/ollama/ollama/pull/6223",
"diff_url": "https://github.com/ollama/ollama/pull/6223.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6223.patch",
"merged_at": null
} | This adds gin BasicAuth for username:password setup in env. I checked that the ollama server is using gin, and gin offers [basic auth middleware](https://gin-gonic.com/docs/examples/using-basicauth-middleware/). In this PR I attempted to use this middleware to validate requests using the env var `OLLAMA_BASIC_AUTH_KEY`. Inputs,... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6223/reactions",
"total_count": 17,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6223/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6790 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6790/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6790/comments | https://api.github.com/repos/ollama/ollama/issues/6790/events | https://github.com/ollama/ollama/issues/6790 | 2,524,252,141 | I_kwDOJ0Z1Ps6WdQft | 6,790 | openai tools streaming support coming soon? | {
"login": "LuckLittleBoy",
"id": 17702771,
"node_id": "MDQ6VXNlcjE3NzAyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/17702771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuckLittleBoy",
"html_url": "https://github.com/LuckLittleBoy",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 13 | 2024-09-13T08:45:09 | 2024-09-19T06:36:27 | 2024-09-14T01:57:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In which version is the openai tools streaming support feature planned to be supported?
When will it be supported?

| {
"login": "LuckLittleBoy",
"id": 17702771,
"node_id": "MDQ6VXNlcjE3NzAyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/17702771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuckLittleBoy",
"html_url": "https://github.com/LuckLittleBoy",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6790/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5744 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5744/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5744/comments | https://api.github.com/repos/ollama/ollama/issues/5744/events | https://github.com/ollama/ollama/issues/5744 | 2,413,433,875 | I_kwDOJ0Z1Ps6P2hQT | 5,744 | Model Cold Storage and user manual management possibility | {
"login": "nikhil-swamix",
"id": 54004431,
"node_id": "MDQ6VXNlcjU0MDA0NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/54004431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhil-swamix",
"html_url": "https://github.com/nikhil-swamix",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-07-17T12:00:57 | 2024-08-31T08:29:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
# model management
It's nearly impossible to manage models manually, and it generates hash values.
What I was trying to do was move some models to cold storage, i.e. HDD, and some to SSD, but couldn't find a way rat... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5744/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5536 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5536/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5536/comments | https://api.github.com/repos/ollama/ollama/issues/5536/events | https://github.com/ollama/ollama/issues/5536 | 2,394,280,218 | I_kwDOJ0Z1Ps6OtdEa | 5,536 | gemma2 27b is too slow | {
"login": "codeMonkey-shin",
"id": 80636401,
"node_id": "MDQ6VXNlcjgwNjM2NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80636401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeMonkey-shin",
"html_url": "https://github.com/codeMonkey-shin",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | null | [] | null | 4 | 2024-07-07T23:53:42 | 2024-10-16T16:18:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Compared to 9b, 27b is ridiculously slow. Is it because of the structure?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.49 Pre-release | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5536/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/474 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/474/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/474/comments | https://api.github.com/repos/ollama/ollama/issues/474/events | https://github.com/ollama/ollama/pull/474 | 1,882,978,743 | PR_kwDOJ0Z1Ps5ZoL22 | 474 | add show command | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-06T01:13:42 | 2023-09-06T18:04:18 | 2023-09-06T18:04:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/474",
"html_url": "https://github.com/ollama/ollama/pull/474",
"diff_url": "https://github.com/ollama/ollama/pull/474.diff",
"patch_url": "https://github.com/ollama/ollama/pull/474.patch",
"merged_at": "2023-09-06T18:04:17"
} | This change adds the ability to inspect various parts of a given model. It adds functionality both from the CLI (via the `ollama show` command) and from the REPL (via the various `/show ...` commands).
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/474/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4469 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4469/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4469/comments | https://api.github.com/repos/ollama/ollama/issues/4469/events | https://github.com/ollama/ollama/issues/4469 | 2,299,562,084 | I_kwDOJ0Z1Ps6JEIhk | 4,469 | Ollama memory consumption | {
"login": "hugefrog",
"id": 83398604,
"node_id": "MDQ6VXNlcjgzMzk4NjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/83398604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugefrog",
"html_url": "https://github.com/hugefrog",
"followers_url": "https://api.github.com/users/hug... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-16T07:22:16 | 2024-07-25T22:53:45 | 2024-07-25T22:53:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Why does Ollama consume so much memory? With a 3090 graphics card and 24GB of VRAM, after loading a yi-34b-4bit model of around 20GB in size, both system memory and VRAM consumption increase by approximately 20GB simultaneously.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4469/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1057 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1057/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1057/comments | https://api.github.com/repos/ollama/ollama/issues/1057/events | https://github.com/ollama/ollama/issues/1057 | 1,985,785,851 | I_kwDOJ0Z1Ps52XK_7 | 1,057 | Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": EOF | {
"login": "fabianslife",
"id": 49265757,
"node_id": "MDQ6VXNlcjQ5MjY1NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabianslife",
"html_url": "https://github.com/fabianslife",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2023-11-09T14:48:39 | 2023-12-24T21:52:43 | 2023-12-24T21:52:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am running Ubuntu 20.04 and wanted to try out ollama, but the one-liner does not seem to work:
When installing ollama with `curl https://ollama.ai/install.sh | sh` everything is ok, and the installation runs fine:
```
% Total % Received % Xferd Average Speed Time Time Time Current
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1057/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/528/comments | https://api.github.com/repos/ollama/ollama/issues/528/events | https://github.com/ollama/ollama/issues/528 | 1,896,514,625 | I_kwDOJ0Z1Ps5xCoRB | 528 | 416 response when pulling a model | {
"login": "codazoda",
"id": 527246,
"node_id": "MDQ6VXNlcjUyNzI0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/527246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codazoda",
"html_url": "https://github.com/codazoda",
"followers_url": "https://api.github.com/users/codazod... | [] | closed | false | null | [] | null | 4 | 2023-09-14T12:52:46 | 2023-09-30T05:07:52 | 2023-09-30T05:07:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm getting the following error when I try to pull the llama2-uncensored model.
```
$ollama pull llama2-uncensored
pulling manifest
Error: download failed: on download registry responded with code 416:
```
This might be a registry problem or a problem with the model I'm pulling. I'm not really sure the appro... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/528/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/584 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/584/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/584/comments | https://api.github.com/repos/ollama/ollama/issues/584/events | https://github.com/ollama/ollama/issues/584 | 1,910,303,715 | I_kwDOJ0Z1Ps5x3Ovj | 584 | Adhere to the MacOS File System Programming Guide | {
"login": "offsetcyan",
"id": 49906709,
"node_id": "MDQ6VXNlcjQ5OTA2NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49906709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/offsetcyan",
"html_url": "https://github.com/offsetcyan",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677279472,
"node_id": ... | open | false | null | [] | null | 4 | 2023-09-24T16:53:40 | 2024-03-11T19:30:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The user's home directory is not the place to dump program data, and this would be inappropriate for future cross-platform compatibility handling. Currently Ollama stores user data in `~/.ollama`; however, Apple has a specification for where to place files of various types ([link](https://developer.apple.com/library/ar... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/584/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/584/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2638 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2638/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2638/comments | https://api.github.com/repos/ollama/ollama/issues/2638/events | https://github.com/ollama/ollama/issues/2638 | 2,146,978,852 | I_kwDOJ0Z1Ps5_-Ewk | 2,638 | on windows skipping models | {
"login": "stream74",
"id": 7672121,
"node_id": "MDQ6VXNlcjc2NzIxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7672121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stream74",
"html_url": "https://github.com/stream74",
"followers_url": "https://api.github.com/users/strea... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 4 | 2024-02-21T15:03:54 | 2024-03-12T01:59:09 | 2024-03-12T01:59:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Sorry for my bad English.
I set an environment variable in Windows pointing to the models folder.
If I pull new models, they go to the folder I set,
but I already have a lot of models and ollama can't see them when I run "ollama list".
The server log indicates
[GIN] 2024/02/21 - 15:51:59 | 200 | 6.082ms | 127.0.0.1 |... | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2638/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7346 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7346/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7346/comments | https://api.github.com/repos/ollama/ollama/issues/7346/events | https://github.com/ollama/ollama/issues/7346 | 2,612,137,611 | I_kwDOJ0Z1Ps6bsg6L | 7,346 | Ollama does not run on GPU at 0.4.0-rc5-rocm version | {
"login": "chiehpower",
"id": 32332200,
"node_id": "MDQ6VXNlcjMyMzMyMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32332200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiehpower",
"html_url": "https://github.com/chiehpower",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-10-24T17:04:49 | 2024-10-26T16:02:44 | 2024-10-25T21:44:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi all,
I was testing a very new version (`0.4.0-rc5-rocm`) in which the server was deployed in a docker container.
```
docker run -itd --name=ollama --gpus=all --shm-size=100GB \
-v ollama:/root/.ollama -p 11434:11434 \
ollama/ollama:0.4.0-rc5-rocm
```
The client was using this pro... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7346/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5067 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5067/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5067/comments | https://api.github.com/repos/ollama/ollama/issues/5067/events | https://github.com/ollama/ollama/pull/5067 | 2,355,062,343 | PR_kwDOJ0Z1Ps5ykqTH | 5,067 | Add LoongArch64 ISA Support | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 2 | 2024-06-15T17:37:04 | 2024-08-04T23:39:01 | 2024-08-04T23:39:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5067",
"html_url": "https://github.com/ollama/ollama/pull/5067",
"diff_url": "https://github.com/ollama/ollama/pull/5067.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5067.patch",
"merged_at": null
} | 1. fixed go build . failed on LoongArch -> go.mod: replace github.com/chewxy/math32 v1.10.1 to github.com/chewxy/math32 v1.10.2-0.20240509203351, fixed https://github.com/chewxy/math32/issues/23
2. go.sum fixed;
3. llm.go add loong64 support;
4. gen_common.sh add 64bit LoongArch support;
5. gen_linux.sh add loongar... | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5067/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5067/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6733 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6733/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6733/comments | https://api.github.com/repos/ollama/ollama/issues/6733/events | https://github.com/ollama/ollama/issues/6733 | 2,517,227,844 | I_kwDOJ0Z1Ps6WCdlE | 6,733 | curl | {
"login": "ayttop",
"id": 178673810,
"node_id": "U_kgDOCqZYkg",
"avatar_url": "https://avatars.githubusercontent.com/u/178673810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayttop",
"html_url": "https://github.com/ayttop",
"followers_url": "https://api.github.com/users/ayttop/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-10T18:08:22 | 2024-09-10T18:13:43 | 2024-09-10T18:13:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
How can I run
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'
in cmd in the same way?
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
last | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6733/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3917 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3917/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3917/comments | https://api.github.com/repos/ollama/ollama/issues/3917/events | https://github.com/ollama/ollama/issues/3917 | 2,264,178,975 | I_kwDOJ0Z1Ps6G9KEf | 3,917 | I have noticed something extremely strange about what ollama does with Phi-3 models. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-04-25T17:56:41 | 2024-06-02T00:06:55 | 2024-06-02T00:06:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
(Pythogora) developer@ai:~/PROJECTS/gpt-pilot/pilot$ ~/ollama/ollama list
NAME ID SIZE MODIFIED
Meta-Llama-3-70B-Instruct-.Q5_K_M:latest 746bce3a52ed 49 GB 2 days ago
hermes-2-Pro-Mistral-7B.Q8_0:latest 86624... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3917/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2294 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2294/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2294/comments | https://api.github.com/repos/ollama/ollama/issues/2294/events | https://github.com/ollama/ollama/pull/2294 | 2,111,131,194 | PR_kwDOJ0Z1Ps5loFSM | 2,294 | update slog handler options | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-31T23:00:55 | 2024-01-31T23:29:12 | 2024-01-31T23:29:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2294",
"html_url": "https://github.com/ollama/ollama/pull/2294",
"diff_url": "https://github.com/ollama/ollama/pull/2294.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2294.patch",
"merged_at": "2024-01-31T23:29:11"
} | - consistent format by using text handler for debug and non-debug
- truncate source file to just the file name
sample outputs:
```
time=2024-01-31T15:01:02.632-08:00 level=INFO source=routes.go:983 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-01-31T15:01:02.632-08:00 level=INFO source=payload_c... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2294/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2284 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2284/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2284/comments | https://api.github.com/repos/ollama/ollama/issues/2284/events | https://github.com/ollama/ollama/pull/2284 | 2,109,086,249 | PR_kwDOJ0Z1Ps5lhHXn | 2,284 | remove unnecessary parse raw | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-31T01:02:05 | 2024-01-31T17:40:49 | 2024-01-31T17:40:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2284",
"html_url": "https://github.com/ollama/ollama/pull/2284",
"diff_url": "https://github.com/ollama/ollama/pull/2284.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2284.patch",
"merged_at": "2024-01-31T17:40:48"
} | There's no point parsing the raw private key when all it's doing is creating a ssh key | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2284/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6416 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6416/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6416/comments | https://api.github.com/repos/ollama/ollama/issues/6416/events | https://github.com/ollama/ollama/issues/6416 | 2,472,843,119 | I_kwDOJ0Z1Ps6TZJdv | 6,416 | Computer crashes after switching several Ollama models in a relatively short amount of time | {
"login": "elsatch",
"id": 653433,
"node_id": "MDQ6VXNlcjY1MzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsatch",
"html_url": "https://github.com/elsatch",
"followers_url": "https://api.github.com/users/elsatch/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-08-19T09:03:04 | 2024-11-05T23:22:35 | 2024-11-05T23:22:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I love to run tests to compare different model outputs. To do so, I've used tools like promptfoo or langfuse (over Haystack or Langchain). In these tools, you set a list of models and then the program calls Ollama to load the models one after the other. I am using a Linux computer with Ubuntu 22... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6416/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6274 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6274/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6274/comments | https://api.github.com/repos/ollama/ollama/issues/6274/events | https://github.com/ollama/ollama/issues/6274 | 2,457,095,928 | I_kwDOJ0Z1Ps6SdE74 | 6,274 | Binary files (*.png, *.ico, *.icns) listed as modified upon cloning the repository | {
"login": "PAN-Chuwen",
"id": 70949152,
"node_id": "MDQ6VXNlcjcwOTQ5MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70949152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PAN-Chuwen",
"html_url": "https://github.com/PAN-Chuwen",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-08-09T04:49:44 | 2024-08-10T03:54:07 | 2024-08-10T03:54:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Description
#### Steps to Reproduce
1. Clone the repository:
```sh
git clone https://github.com/ollama/ollama.git
cd ollama
```
2. Check the status of the repository:
```sh
git status
```
#### Expected Behavior
No files should be listed as mo... | {
"login": "PAN-Chuwen",
"id": 70949152,
"node_id": "MDQ6VXNlcjcwOTQ5MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70949152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PAN-Chuwen",
"html_url": "https://github.com/PAN-Chuwen",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6274/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7759 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7759/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7759/comments | https://api.github.com/repos/ollama/ollama/issues/7759/events | https://github.com/ollama/ollama/issues/7759 | 2,675,604,783 | I_kwDOJ0Z1Ps6fen0v | 7,759 | The Way to the light | {
"login": "SnappCred",
"id": 179581325,
"node_id": "U_kgDOCrQxjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/179581325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SnappCred",
"html_url": "https://github.com/SnappCred",
"followers_url": "https://api.github.com/users/SnappC... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-11-20T11:40:40 | 2024-11-20T13:46:08 | 2024-11-20T13:46:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am who I am
| {
"login": "SnappCred",
"id": 179581325,
"node_id": "U_kgDOCrQxjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/179581325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SnappCred",
"html_url": "https://github.com/SnappCred",
"followers_url": "https://api.github.com/users/SnappC... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7759/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4102 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4102/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4102/comments | https://api.github.com/repos/ollama/ollama/issues/4102/events | https://github.com/ollama/ollama/issues/4102 | 2,276,105,700 | I_kwDOJ0Z1Ps6Hqp3k | 4,102 | Ollama running in docker with concurrent requests doesn't work | {
"login": "BBjie",
"id": 55565844,
"node_id": "MDQ6VXNlcjU1NTY1ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/55565844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BBjie",
"html_url": "https://github.com/BBjie",
"followers_url": "https://api.github.com/users/BBjie/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 10 | 2024-05-02T17:31:50 | 2024-06-21T23:23:34 | 2024-06-21T23:23:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried running Ollama in Docker and tested the concurrent-request handling feature. I added `OLLAMA_NUM_PARALLEL` and `OLLAMA_MAX_LOADED_MODELS` as environment variables. The values were passed through successfully, but it didn't work.
Can anyone kindly help me out?
```
services:
ollama:
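    # Hedged continuation sketch (not the reporter's exact file, which is
    # truncated here): the concurrency variables typically go under the
    # service's environment key, e.g.
    #   environment:
    #     - OLLAMA_NUM_PARALLEL=4
    #     - OLLAMA_MAX_LOADED_MODELS=2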
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4102/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4102/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/831 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/831/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/831/comments | https://api.github.com/repos/ollama/ollama/issues/831/events | https://github.com/ollama/ollama/issues/831 | 1,948,691,226 | I_kwDOJ0Z1Ps50Jqsa | 831 | Context modification | {
"login": "VladimirKras",
"id": 47093374,
"node_id": "MDQ6VXNlcjQ3MDkzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47093374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VladimirKras",
"html_url": "https://github.com/VladimirKras",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6100196012,
"node_id": "LA_kwDOJ0Z1Ps8AAAABa5... | closed | false | null | [] | null | 6 | 2023-10-18T03:02:53 | 2024-01-16T22:27:32 | 2024-01-16T22:27:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Sometimes I would like to steer a dialogue in a certain direction by adding a fake message on behalf of the LLM. How to achieve that with Ollama seems quite opaque:
1. The context that is sent is just an array of token ids, which is hard to manipulate.
2. The tokenizer and de-tokenizer aren't exposed. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/831/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5957 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5957/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5957/comments | https://api.github.com/repos/ollama/ollama/issues/5957/events | https://github.com/ollama/ollama/issues/5957 | 2,430,554,233 | I_kwDOJ0Z1Ps6Q31B5 | 5,957 | Llama 3.1 base models for text completion | {
"login": "kaetemi",
"id": 1581053,
"node_id": "MDQ6VXNlcjE1ODEwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1581053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaetemi",
"html_url": "https://github.com/kaetemi",
"followers_url": "https://api.github.com/users/kaetemi/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 10 | 2024-07-25T16:49:30 | 2024-08-11T16:44:33 | 2024-08-11T16:44:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently only the instruct models appear to be in the library, the text completion models would be appreciated too. Thanks! :) | {
"login": "kaetemi",
"id": 1581053,
"node_id": "MDQ6VXNlcjE1ODEwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1581053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaetemi",
"html_url": "https://github.com/kaetemi",
"followers_url": "https://api.github.com/users/kaetemi/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5957/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5957/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6169 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6169/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6169/comments | https://api.github.com/repos/ollama/ollama/issues/6169/events | https://github.com/ollama/ollama/issues/6169 | 2,447,725,798 | I_kwDOJ0Z1Ps6R5VTm | 6,169 | How to fix the default settings of the model? | {
"login": "wszgrcy",
"id": 9607121,
"node_id": "MDQ6VXNlcjk2MDcxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wszgrcy",
"html_url": "https://github.com/wszgrcy",
"followers_url": "https://api.github.com/users/wszgrcy/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2024-08-05T06:34:10 | 2024-08-15T00:03:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I found that the template for 'yi: 9b-v1.5-q8-0' is missing and different from the 'yi: 9b' template
Where should the fix be carried out?

 do. Running Command-R from the terminal
```
$ ollama run command-r
>>> Hey, how are you?
3O>FCMID7BBBM<=>PJT@@FNURWKL=8@N;GWHP6:GJ>F76N86EL5DKLFJFADJ;ESQAV7OBDJTK8HT@Q>Q8@BCJ:I9NJEW=?C>BHIJ3U@87L^C
`... | {
"login": "phischde",
"id": 5195734,
"node_id": "MDQ6VXNlcjUxOTU3MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5195734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phischde",
"html_url": "https://github.com/phischde",
"followers_url": "https://api.github.com/users/phisc... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3698/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2177 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2177/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2177/comments | https://api.github.com/repos/ollama/ollama/issues/2177/events | https://github.com/ollama/ollama/pull/2177 | 2,099,031,721 | PR_kwDOJ0Z1Ps5k_lKZ | 2,177 | added example tests to document client and improve coverage | {
"login": "TimothyStiles",
"id": 7042260,
"node_id": "MDQ6VXNlcjcwNDIyNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7042260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimothyStiles",
"html_url": "https://github.com/TimothyStiles",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | 2 | 2024-01-24T20:27:09 | 2024-11-22T16:45:55 | 2024-11-21T09:15:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2177",
"html_url": "https://github.com/ollama/ollama/pull/2177",
"diff_url": "https://github.com/ollama/ollama/pull/2177.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2177.patch",
"merged_at": null
} | Hey y'all,
Pleasure meeting @jmorganca and some of you at last night's event!
This PR fixes #2159 by adding example tests to the client `api` package that will also render in the go docs. These examples show how to check server heartbeat, get server version, list out available models, sort those models by size, a... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2177/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3606 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3606/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3606/comments | https://api.github.com/repos/ollama/ollama/issues/3606/events | https://github.com/ollama/ollama/issues/3606 | 2,238,768,364 | I_kwDOJ0Z1Ps6FcOTs | 3,606 | multilingual-e5-large and multilingual-e5-base Embedding Model Support | {
"login": "awilhelm-projects",
"id": 126177372,
"node_id": "U_kgDOB4VQXA",
"avatar_url": "https://avatars.githubusercontent.com/u/126177372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awilhelm-projects",
"html_url": "https://github.com/awilhelm-projects",
"followers_url": "https://api... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 22 | 2024-04-11T23:27:37 | 2024-11-15T16:57:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I want to use multilingual-e5-large or multilingual-e5-base as the embedding model, because none of the other embedding models work for languages other than English.
### How should we solve this?
Convert multilingual-e5-large and multilingual-e5-base (https://huggingface.co/intfloat/multilingual-e5-ba... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3606/reactions",
"total_count": 42,
"+1": 42,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3606/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7372 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7372/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7372/comments | https://api.github.com/repos/ollama/ollama/issues/7372/events | https://github.com/ollama/ollama/issues/7372 | 2,615,651,076 | I_kwDOJ0Z1Ps6b56sE | 7,372 | crash after OLLAMA_MULTIUSER_CACHE=1 | {
"login": "y-tor",
"id": 38348782,
"node_id": "MDQ6VXNlcjM4MzQ4Nzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/38348782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y-tor",
"html_url": "https://github.com/y-tor",
"followers_url": "https://api.github.com/users/y-tor/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [
{
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://... | null | 1 | 2024-10-26T08:12:44 | 2024-10-28T23:26:07 | 2024-10-28T23:26:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I start loading a model, such as granite3-dense, I get this error:
error: unknown argument: --multiuser-cache
usage: /usr/lib/ollama/runners/cuda_v12/ollama_llama_server [options]
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14 | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7372/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/3400 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3400/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3400/comments | https://api.github.com/repos/ollama/ollama/issues/3400/events | https://github.com/ollama/ollama/pull/3400 | 2,214,362,190 | PR_kwDOJ0Z1Ps5rHXnU | 3,400 | Community Integration: ChatOllama | {
"login": "sugarforever",
"id": 404421,
"node_id": "MDQ6VXNlcjQwNDQyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/404421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sugarforever",
"html_url": "https://github.com/sugarforever",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 0 | 2024-03-29T00:02:53 | 2024-03-31T02:46:51 | 2024-03-31T02:46:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3400",
"html_url": "https://github.com/ollama/ollama/pull/3400",
"diff_url": "https://github.com/ollama/ollama/pull/3400.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3400.patch",
"merged_at": "2024-03-31T02:46:50"
} | # Community Integration - ChatOllama
[ChatOllama](https://github.com/sugarforever/chat-ollama) is an open source chatbot based on LLMs. It supports a wide range of language models including:
- Ollama served models
- OpenAI
- Azure OpenAI
- Anthropic
ChatOllama supports multiple types of chat:
- Free chat... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3400/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5080 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5080/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5080/comments | https://api.github.com/repos/ollama/ollama/issues/5080/events | https://github.com/ollama/ollama/pull/5080 | 2,355,812,772 | PR_kwDOJ0Z1Ps5ynM7- | 5,080 | Add some more debugging logs for intel discovery | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-16T14:43:54 | 2024-06-16T21:42:44 | 2024-06-16T21:42:42 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5080",
"html_url": "https://github.com/ollama/ollama/pull/5080",
"diff_url": "https://github.com/ollama/ollama/pull/5080.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5080.patch",
"merged_at": "2024-06-16T21:42:42"
} | Also removes an unused overall count variable
Until we can find a repro to fully root cause the crash, this may help narrow the search space.
Related to #5073 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5080/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2985 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2985/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2985/comments | https://api.github.com/repos/ollama/ollama/issues/2985/events | https://github.com/ollama/ollama/pull/2985 | 2,174,521,850 | PR_kwDOJ0Z1Ps5pAGDj | 2,985 | remove empty examples | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-07T18:40:51 | 2024-03-07T18:49:41 | 2024-03-07T18:49:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2985",
"html_url": "https://github.com/ollama/ollama/pull/2985",
"diff_url": "https://github.com/ollama/ollama/pull/2985.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2985.patch",
"merged_at": "2024-03-07T18:49:40"
} | resolves #2984 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2985/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8409 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8409/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8409/comments | https://api.github.com/repos/ollama/ollama/issues/8409/events | https://github.com/ollama/ollama/issues/8409 | 2,785,953,412 | I_kwDOJ0Z1Ps6mDkaE | 8,409 | Support model alias | {
"login": "1zilc",
"id": 44715458,
"node_id": "MDQ6VXNlcjQ0NzE1NDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/44715458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1zilc",
"html_url": "https://github.com/1zilc",
"followers_url": "https://api.github.com/users/1zilc/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2025-01-14T01:45:38 | 2025-01-14T02:22:22 | 2025-01-14T02:21:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thanks very much for Ollama's outstanding work, which allows us AI novices to quickly experience the most advanced AI.
Is it possible to provide the following directives to create aliases for models and manage them?
```bash
# create
ollama alias create coder qwen2.5-coder:32b
# remove
ollama alias remove coder
# ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8409/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2994 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2994/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2994/comments | https://api.github.com/repos/ollama/ollama/issues/2994/events | https://github.com/ollama/ollama/pull/2994 | 2,175,004,503 | PR_kwDOJ0Z1Ps5pBwEi | 2,994 | tune concurrency manager | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-07T23:27:51 | 2024-08-20T20:15:20 | 2024-08-20T20:15:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2994",
"html_url": "https://github.com/ollama/ollama/pull/2994",
"diff_url": "https://github.com/ollama/ollama/pull/2994.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2994.patch",
"merged_at": null
} | - higher initial concurrency
- lower cooldown after ramping up
- lower threshold for ramp up | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2994/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/971 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/971/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/971/comments | https://api.github.com/repos/ollama/ollama/issues/971/events | https://github.com/ollama/ollama/issues/971 | 1,973,901,785 | I_kwDOJ0Z1Ps51p1nZ | 971 | docker build fails with `not a git repository` | {
"login": "j2l",
"id": 65325,
"node_id": "MDQ6VXNlcjY1MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/65325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j2l",
"html_url": "https://github.com/j2l",
"followers_url": "https://api.github.com/users/j2l/followers",
"following... | [] | closed | false | null | [] | null | 2 | 2023-11-02T10:01:24 | 2023-11-02T16:58:20 | 2023-11-02T16:58:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Following the issue https://github.com/jmorganca/ollama/issues/797 I tried to build a local gpu version:
```
docker build -t ollama/ollama:gpu .
[+] Building 29.0s (17/18)
=> [internal] load build definition from Dockerfile 0.0s
=> => ... | {
"login": "j2l",
"id": 65325,
"node_id": "MDQ6VXNlcjY1MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/65325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j2l",
"html_url": "https://github.com/j2l",
"followers_url": "https://api.github.com/users/j2l/followers",
"following... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/971/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1336 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1336/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1336/comments | https://api.github.com/repos/ollama/ollama/issues/1336/events | https://github.com/ollama/ollama/pull/1336 | 2,019,782,990 | PR_kwDOJ0Z1Ps5g1zzD | 1,336 | docker: set PATH, LD_LIBRARY_PATH, and capabilities | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-01T00:32:32 | 2023-12-01T05:16:57 | 2023-12-01T05:16:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1336",
"html_url": "https://github.com/ollama/ollama/pull/1336",
"diff_url": "https://github.com/ollama/ollama/pull/1336.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1336.patch",
"merged_at": "2023-12-01T05:16:56"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1336/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7398 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7398/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7398/comments | https://api.github.com/repos/ollama/ollama/issues/7398/events | https://github.com/ollama/ollama/pull/7398 | 2,618,440,856 | PR_kwDOJ0Z1Ps6AGiZY | 7,398 | Janpf version | {
"login": "to-sora",
"id": 60461394,
"node_id": "MDQ6VXNlcjYwNDYxMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/60461394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/to-sora",
"html_url": "https://github.com/to-sora",
"followers_url": "https://api.github.com/users/to-sor... | [] | closed | false | null | [] | null | 0 | 2024-10-28T13:32:22 | 2024-10-28T13:34:22 | 2024-10-28T13:32:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7398",
"html_url": "https://github.com/ollama/ollama/pull/7398",
"diff_url": "https://github.com/ollama/ollama/pull/7398.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7398.patch",
"merged_at": null
} | null | {
"login": "to-sora",
"id": 60461394,
"node_id": "MDQ6VXNlcjYwNDYxMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/60461394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/to-sora",
"html_url": "https://github.com/to-sora",
"followers_url": "https://api.github.com/users/to-sor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7398/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1191 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1191/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1191/comments | https://api.github.com/repos/ollama/ollama/issues/1191/events | https://github.com/ollama/ollama/issues/1191 | 2,000,459,309 | I_kwDOJ0Z1Ps53PJYt | 1,191 | JSON mode when used from LangChain RAG | {
"login": "abaranovskis-redsamurai",
"id": 19287736,
"node_id": "MDQ6VXNlcjE5Mjg3NzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/19287736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abaranovskis-redsamurai",
"html_url": "https://github.com/abaranovskis-redsamurai",
"foll... | [] | closed | false | null | [] | null | 3 | 2023-11-18T15:09:50 | 2023-11-20T18:32:34 | 2023-11-20T18:32:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I would like to ask if there are any plans to support JSON mode responses when Ollama is called from LangChain RAG?
Thanks. | {
"login": "abaranovskis-redsamurai",
"id": 19287736,
"node_id": "MDQ6VXNlcjE5Mjg3NzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/19287736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abaranovskis-redsamurai",
"html_url": "https://github.com/abaranovskis-redsamurai",
"foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1191/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8196 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8196/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8196/comments | https://api.github.com/repos/ollama/ollama/issues/8196/events | https://github.com/ollama/ollama/pull/8196 | 2,753,835,404 | PR_kwDOJ0Z1Ps6F-Opf | 8,196 | chore: upgrade to gods v2 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 4 | 2024-12-21T08:06:14 | 2025-01-10T21:50:14 | 2025-01-10T21:50:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8196",
"html_url": "https://github.com/ollama/ollama/pull/8196",
"diff_url": "https://github.com/ollama/ollama/pull/8196.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8196.patch",
"merged_at": "2025-01-10T21:50:11"
} | gods v2 uses go generics rather than interfaces which simplifies the code considerably | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8196/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1809 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1809/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1809/comments | https://api.github.com/repos/ollama/ollama/issues/1809/events | https://github.com/ollama/ollama/issues/1809 | 2,067,540,347 | I_kwDOJ0Z1Ps57PCl7 | 1,809 | [ENHANCEMENT] Add more tests to avoid regressions | {
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers"... | [] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 3 | 2024-01-05T15:26:03 | 2024-01-06T11:49:22 | 2024-01-05T22:07:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For example on this file https://github.com/jmorganca/ollama/blob/main/parser/parser.go
_Warning: I did not validate my code; I wrote it blind._
```go
package main
import (
"strings"
"testing"
)
func TestParser(t *testing.T) {
input :=
`
FROM model1
ADAPTER adapter1
... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1809/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6479 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6479/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6479/comments | https://api.github.com/repos/ollama/ollama/issues/6479/events | https://github.com/ollama/ollama/issues/6479 | 2,483,490,821 | I_kwDOJ0Z1Ps6UBxAF | 6,479 | v0.3.7-rc5 no longer uses multiple GPUs for a single model | {
"login": "Maltz42",
"id": 20978744,
"node_id": "MDQ6VXNlcjIwOTc4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maltz42",
"html_url": "https://github.com/Maltz42",
"followers_url": "https://api.github.com/users/Maltz4... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-08-23T16:36:25 | 2024-08-23T22:11:57 | 2024-08-23T22:11:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Moving from 0.3.6 to 0.3.7-rc5, Ollama no longer uses both GPUs for a single model when the model will not fit on one card. If I load two models, though, it will use the second card to load the second model. Output of "ollama ps" and "nvidia-smi" below.
```
ollama ps:
NAME ID ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6479/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8247 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8247/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8247/comments | https://api.github.com/repos/ollama/ollama/issues/8247/events | https://github.com/ollama/ollama/issues/8247 | 2,759,806,638 | I_kwDOJ0Z1Ps6kf06u | 8,247 | Enhanced System Observability for Multi-Server Environments (Unified Endpoints?) | {
"login": "dezoito",
"id": 6494010,
"node_id": "MDQ6VXNlcjY0OTQwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6494010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dezoito",
"html_url": "https://github.com/dezoito",
"followers_url": "https://api.github.com/users/dezoito/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-12-26T13:57:26 | 2025-01-13T01:47:21 | 2025-01-13T01:47:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As Ollama adoption grows, the lack of comprehensive system metrics makes it challenging to meet standard operational requirements - monitoring, alerting, and planning across development, staging, and production environments.
This can also prevent wider adoption in commercial and production applications.
While t... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8247/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8247/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2023 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2023/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2023/comments | https://api.github.com/repos/ollama/ollama/issues/2023/events | https://github.com/ollama/ollama/issues/2023 | 2,084,931,185 | I_kwDOJ0Z1Ps58RYZx | 2,023 | Enable Prompt Caching by Default | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 3 | 2024-01-16T21:06:43 | 2024-07-09T15:27:54 | 2024-05-06T23:48:07 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I had to disable prompt caching due to requests getting stuck: #1994
We should bring this back when we have a mitigation for the inference issue:
https://github.com/ggerganov/llama.cpp/issues/4989 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2023/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/697 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/697/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/697/comments | https://api.github.com/repos/ollama/ollama/issues/697/events | https://github.com/ollama/ollama/issues/697 | 1,926,053,967 | I_kwDOJ0Z1Ps5yzUBP | 697 | Can not download the model of codellama:13b | {
"login": "danny-su",
"id": 12178855,
"node_id": "MDQ6VXNlcjEyMTc4ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/12178855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danny-su",
"html_url": "https://github.com/danny-su",
"followers_url": "https://api.github.com/users/dan... | [] | closed | false | null | [] | null | 9 | 2023-10-04T11:58:47 | 2023-10-06T07:15:41 | 2023-10-06T07:15:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1004" alt="image" src="https://github.com/jmorganca/ollama/assets/12178855/0de2b8b6-6b26-4f67-b70e-b73de8020852">
| {
"login": "danny-su",
"id": 12178855,
"node_id": "MDQ6VXNlcjEyMTc4ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/12178855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danny-su",
"html_url": "https://github.com/danny-su",
"followers_url": "https://api.github.com/users/dan... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/697/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/574 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/574/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/574/comments | https://api.github.com/repos/ollama/ollama/issues/574/events | https://github.com/ollama/ollama/pull/574 | 1,909,265,837 | PR_kwDOJ0Z1Ps5bAqKz | 574 | Added a new community project | {
"login": "TwanLuttik",
"id": 19343894,
"node_id": "MDQ6VXNlcjE5MzQzODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19343894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TwanLuttik",
"html_url": "https://github.com/TwanLuttik",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2023-09-22T17:17:40 | 2023-09-25T14:42:01 | 2023-09-25T14:40:59 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/574",
"html_url": "https://github.com/ollama/ollama/pull/574",
"diff_url": "https://github.com/ollama/ollama/pull/574.diff",
"patch_url": "https://github.com/ollama/ollama/pull/574.patch",
"merged_at": "2023-09-25T14:40:59"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/574/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2696 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2696/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2696/comments | https://api.github.com/repos/ollama/ollama/issues/2696/events | https://github.com/ollama/ollama/issues/2696 | 2,150,014,256 | I_kwDOJ0Z1Ps6AJp0w | 2,696 | `ollama` process on macOS using up a lot of RAM while being idle | {
"login": "siikdUde",
"id": 10148714,
"node_id": "MDQ6VXNlcjEwMTQ4NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/10148714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siikdUde",
"html_url": "https://github.com/siikdUde",
"followers_url": "https://api.github.com/users/sii... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-02-22T22:01:32 | 2024-05-05T18:43:38 | 2024-05-05T18:43:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1081" alt="SCR-20240222-ozbm" src="https://github.com/ollama/ollama/assets/10148714/575001a0-9b9a-4e08-ba8c-f0321ec3e6df">
As you can see, ollama is the second most resource intensive application. I am not actively running any models, just the app is open. Any idea why this is? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2696/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2696/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3800 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3800/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3800/comments | https://api.github.com/repos/ollama/ollama/issues/3800/events | https://github.com/ollama/ollama/issues/3800 | 2,255,172,519 | I_kwDOJ0Z1Ps6GazOn | 3,800 | Auto-Save Functionality | {
"login": "M3cubo",
"id": 1382596,
"node_id": "MDQ6VXNlcjEzODI1OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1382596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M3cubo",
"html_url": "https://github.com/M3cubo",
"followers_url": "https://api.github.com/users/M3cubo/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-04-21T16:56:34 | 2024-05-15T10:10:04 | 2024-05-14T22:50:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I am currently using Ollama for interactive terminal sessions and I find it to be an extremely useful tool. One of the features I am interested in is the ability to automatically save each addition to the conversation during an `ollama run <model>` session.
### Feature Request
I would like to inquire if ther... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3800/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3800/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2395 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2395/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2395/comments | https://api.github.com/repos/ollama/ollama/issues/2395/events | https://github.com/ollama/ollama/issues/2395 | 2,123,723,886 | I_kwDOJ0Z1Ps5-lXRu | 2,395 | Multi-GPU setup of Tesla P100s is slow | {
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.gi... | [
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-02-07T19:27:34 | 2024-03-21T13:58:19 | 2024-03-21T13:58:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | A multi-GPU setup of Tesla P100s is very slow compared to a single RTX 4090. I am using the 0.1.22 version of ollama. Is there something wrong with the Teslas? Are they just bad GPUs? I was told to try running ollama on just one of them to see what happens; if that might indeed make ollama run faster, I am not sure how t... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2395/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8374 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8374/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8374/comments | https://api.github.com/repos/ollama/ollama/issues/8374/events | https://github.com/ollama/ollama/issues/8374 | 2,780,362,293 | I_kwDOJ0Z1Ps6luPY1 | 8,374 | Difference between Modelfile PARAMETER and API | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-10T14:48:53 | 2025-01-10T15:19:18 | 2025-01-10T15:19:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I want to know the difference between PARAMETER in a Modelfile and in the API,
such as num_ctx 2048.
If my use case is to call Ollama via the API, then for the sake of convenient calling, can I use a Modelfile to create a new model with num_ctx defined to meet my requirements (provided that ... | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8374/timeline | null | completed | false |
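The num_ctx question in the row above comes down to two equivalent mechanisms, sketched below without contacting a server; the model name `llama2` is a placeholder. A Modelfile `PARAMETER` bakes the value into a derived model, while the API accepts a per-request override in the `options` field.

```python
# (1) Bake num_ctx into a new model: save this as a Modelfile and run
#     `ollama create mymodel -f Modelfile`.
modelfile = """FROM llama2
PARAMETER num_ctx 2048
"""

# (2) Override num_ctx per request on /api/generate or /api/chat:
request_body = {
    "model": "llama2",
    "prompt": "hello",
    "options": {"num_ctx": 2048},
}
```

With (1), every caller of the derived model gets the larger context without changing client code, which matches the convenience the question asks about.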
https://api.github.com/repos/ollama/ollama/issues/330 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/330/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/330/comments | https://api.github.com/repos/ollama/ollama/issues/330/events | https://github.com/ollama/ollama/issues/330 | 1,846,771,571 | I_kwDOJ0Z1Ps5uE39z | 330 | ollama pull llama2:70b stuck | {
"login": "sarvagnan",
"id": 860916,
"node_id": "MDQ6VXNlcjg2MDkxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/860916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarvagnan",
"html_url": "https://github.com/sarvagnan",
"followers_url": "https://api.github.com/users/sarv... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 13 | 2023-08-11T12:50:39 | 2024-08-28T22:04:58 | 2023-08-23T18:49:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have tried to pull llama2:70b but ollama appears to be stuck in the "pulling manifest" stage. This repeats after cancelling as well. I tried pulling orca and that downloaded without any issues. I have appended the server log from the logs folder. These logs are repeated with almost identical times each run.
Thank... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/330/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/503 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/503/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/503/comments | https://api.github.com/repos/ollama/ollama/issues/503/events | https://github.com/ollama/ollama/issues/503 | 1,889,076,152 | I_kwDOJ0Z1Ps5wmQO4 | 503 | ollama pull llama2 error | {
"login": "EasonZhaoZ",
"id": 6023767,
"node_id": "MDQ6VXNlcjYwMjM3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6023767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EasonZhaoZ",
"html_url": "https://github.com/EasonZhaoZ",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-09-10T09:41:27 | 2023-09-11T03:20:18 | 2023-09-10T13:50:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 404 Client Error: Not Found for url: https://ollama.ai/api/models | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/503/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4475 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4475/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4475/comments | https://api.github.com/repos/ollama/ollama/issues/4475/events | https://github.com/ollama/ollama/issues/4475 | 2,300,311,089 | I_kwDOJ0Z1Ps6JG_Yx | 4,475 | Is it possible to enable the OpenAI API in the Docker image | {
"login": "Tomichi",
"id": 2265229,
"node_id": "MDQ6VXNlcjIyNjUyMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2265229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tomichi",
"html_url": "https://github.com/Tomichi",
"followers_url": "https://api.github.com/users/Tomichi/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-05-16T12:45:10 | 2024-05-16T21:44:17 | 2024-05-16T18:56:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I want to ask whether it is possible to enable OpenAI API compatibility in the official Ollama Docker image. The feature works well in the desktop app, but it appears to be missing in the Docker image. https://ollama.com/blog/openai-compatibility
Thank you to anybody who helps. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4475/timeline | null | completed | false |
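For the Docker question in the row above: the OpenAI-compatible endpoint ships in the server binary itself, so the Docker image exposes it the same way the desktop app does (see https://ollama.com/blog/openai-compatibility). The sketch below only builds the request; the host, port, and model name are assumptions, and nothing is actually sent.

```python
import json

# The OpenAI-compatible route served on Ollama's default port.
url = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "llama2",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
```

An OpenAI SDK client pointed at base_url `http://localhost:11434/v1` would produce the same request shape.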
https://api.github.com/repos/ollama/ollama/issues/8617 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8617/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8617/comments | https://api.github.com/repos/ollama/ollama/issues/8617/events | https://github.com/ollama/ollama/issues/8617 | 2,814,009,755 | I_kwDOJ0Z1Ps6numGb | 8,617 | Support Request for jonatasgrosman/wav2vec2-large-xlsr-53-italian | {
"login": "raphael10-collab",
"id": 70313067,
"node_id": "MDQ6VXNlcjcwMzEzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/70313067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raphael10-collab",
"html_url": "https://github.com/raphael10-collab",
"followers_url": "https://... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2025-01-27T20:37:55 | 2025-01-27T20:44:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | (.venv) raphy@raohy:~/llama.cpp$ git clone https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian
Cloning into 'wav2vec2-large-xlsr-53-italian'...
remote: Enumerating objects: 99, done.
remote: Total 99 (delta 0), reused 0 (delta 0), pack-reused 99 (from 1)
Unpacking objects: 100% (99/... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8617/timeline | null | null | false |