url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | sub_issues_summary (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/499 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/499/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/499/comments | https://api.github.com/repos/ollama/ollama/issues/499/events | https://github.com/ollama/ollama/issues/499 | 1,888,438,245 | I_kwDOJ0Z1Ps5wj0fl | 499 | Dedicated hardware for 16b/70b models | {
"login": "zdeneksvarc",
"id": 79550344,
"node_id": "MDQ6VXNlcjc5NTUwMzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/79550344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zdeneksvarc",
"html_url": "https://github.com/zdeneksvarc",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-09-08T22:38:23 | 2023-09-09T07:26:36 | 2023-09-08T22:52:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey guys, let's say I want to get a dedicated home server that would run `ollama serve` 13b/70b in Docker. Is there any chance to get such hardware (CPU) to achieve speed at least 5 tok/s? Since Ollama doesn't use GPU acceleration. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/499/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2734 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2734/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2734/comments | https://api.github.com/repos/ollama/ollama/issues/2734/events | https://github.com/ollama/ollama/issues/2734 | 2,152,405,266 | I_kwDOJ0Z1Ps6ASxkS | 2,734 | Windows portable mode? | {
"login": "DartPower",
"id": 2005369,
"node_id": "MDQ6VXNlcjIwMDUzNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2005369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DartPower",
"html_url": "https://github.com/DartPower",
"followers_url": "https://api.github.com/users/Da... | [] | closed | false | null | [] | null | 9 | 2024-02-24T17:03:52 | 2024-12-12T07:24:28 | 2024-02-25T05:09:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can you do the portable mode?
e.g. a zipped variant of a portable Ollama distribution, because I have very little free space on the system disk but have an external SSD for AI
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2734/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2734/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7237 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7237/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7237/comments | https://api.github.com/repos/ollama/ollama/issues/7237/events | https://github.com/ollama/ollama/issues/7237 | 2,593,961,857 | I_kwDOJ0Z1Ps6anLeB | 7,237 | Suggest adding shibing624/text2vec model | {
"login": "smileyboy2019",
"id": 59221294,
"node_id": "MDQ6VXNlcjU5MjIxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59221294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smileyboy2019",
"html_url": "https://github.com/smileyboy2019",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-10-17T08:05:41 | 2024-10-17T08:05:41 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Text2vec is used as a vectorized model, but it is currently not found in the library. I don't know how to add the model | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7237/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1672 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1672/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1672/comments | https://api.github.com/repos/ollama/ollama/issues/1672/events | https://github.com/ollama/ollama/issues/1672 | 2,053,923,083 | I_kwDOJ0Z1Ps56bGEL | 1,672 | "api/chat loads the model only when a request is received. Is it possible to add a flag to keep a specific model in memory permanently, to improve response time?" | {
"login": "goldenquant",
"id": 108568777,
"node_id": "U_kgDOBnigyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108568777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldenquant",
"html_url": "https://github.com/goldenquant",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 4 | 2023-12-22T13:26:32 | 2023-12-27T08:12:48 | 2023-12-26T11:00:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | "api/chat loads the model only when a request is received. Is it possible to add a flag to keep a specific model in memory permanently, to improve response time?" | {
"login": "goldenquant",
"id": 108568777,
"node_id": "U_kgDOBnigyQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108568777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goldenquant",
"html_url": "https://github.com/goldenquant",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1672/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4843 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4843/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4843/comments | https://api.github.com/repos/ollama/ollama/issues/4843/events | https://github.com/ollama/ollama/issues/4843 | 2,336,623,813 | I_kwDOJ0Z1Ps6LRgzF | 4,843 | Ollama running locally with very high latency | {
"login": "vsatyakiran",
"id": 103512987,
"node_id": "U_kgDOBit7mw",
"avatar_url": "https://avatars.githubusercontent.com/u/103512987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vsatyakiran",
"html_url": "https://github.com/vsatyakiran",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-06-05T18:59:02 | 2024-06-18T21:14:18 | 2024-06-18T21:14:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have installed Ollama and tried to run llama2 and llama3:8b, but it generates just 5 to 8 tokens per second. My system config: Windows OS, 16 GB RAM.
I also tried it on an EC2 instance in AWS with the g5.xlarge instance type but face the same latency. Why is this happening?
### OS
Windows
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4843/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4341 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4341/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4341/comments | https://api.github.com/repos/ollama/ollama/issues/4341/events | https://github.com/ollama/ollama/issues/4341 | 2,290,643,191 | I_kwDOJ0Z1Ps6IiHD3 | 4,341 | how to import Meta-Llama-3-120B-Instruct.imatrix | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-05-11T03:55:53 | 2024-08-31T08:35:44 | 2024-08-31T08:35:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
I want to import this model. May I know how to import Meta-Llama-3-120B-Instruct.imatrix?
| {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4341/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2805 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2805/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2805/comments | https://api.github.com/repos/ollama/ollama/issues/2805/events | https://github.com/ollama/ollama/issues/2805 | 2,158,639,357 | I_kwDOJ0Z1Ps6Aqjj9 | 2,805 | ollama gets stuck in an infinite loop sometimes and has to be restarted | {
"login": "boxabirds",
"id": 147305,
"node_id": "MDQ6VXNlcjE0NzMwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/147305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boxabirds",
"html_url": "https://github.com/boxabirds",
"followers_url": "https://api.github.com/users/boxa... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 17 | 2024-02-28T10:36:07 | 2024-12-04T17:41:46 | 2024-05-10T01:21:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Problem: some prompts trigger an infinite loop where ollama a) doesn't return and b) locks up the API so no other calls can be made.
## Environment
Ollama version: 0.1.26
OS: Ubuntu 22.04
Hardware: RTX 4090/24 with 64GB system RAM
LLM: mistral:7b
```
time=2024-02-28T10:30:51.224Z level=INFO source=images.g... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2805/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2805/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4012 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4012/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4012/comments | https://api.github.com/repos/ollama/ollama/issues/4012/events | https://github.com/ollama/ollama/pull/4012 | 2,267,940,158 | PR_kwDOJ0Z1Ps5t9GDJ | 4,012 | Update README.md to include ollama-r library | {
"login": "hauselin",
"id": 7620977,
"node_id": "MDQ6VXNlcjc2MjA5Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7620977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hauselin",
"html_url": "https://github.com/hauselin",
"followers_url": "https://api.github.com/users/hause... | [] | closed | false | null | [] | null | 2 | 2024-04-29T01:16:58 | 2024-05-07T16:52:30 | 2024-05-07T16:52:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4012",
"html_url": "https://github.com/ollama/ollama/pull/4012",
"diff_url": "https://github.com/ollama/ollama/pull/4012.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4012.patch",
"merged_at": "2024-05-07T16:52:30"
} | Add ollama-r library | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4012/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7292 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7292/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7292/comments | https://api.github.com/repos/ollama/ollama/issues/7292/events | https://github.com/ollama/ollama/issues/7292 | 2,602,065,384 | I_kwDOJ0Z1Ps6bGF3o | 7,292 | 0.3.14 git compile error on arm64 andro termux | {
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-10-21T10:19:20 | 2024-10-27T07:35:26 | 2024-10-22T08:22:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
gpu_info_cudart.c:61:13: warning: comparison of different enumeration types ('cudartReturn_t' (aka 'enum cudartReturn_enum') and 'enum cudaError_enum') [-Wenum-compare]
# github.com/ollama/ollama/llama
ggml-quants.c:4023:88: error: always_inline function 'vmmlaq_s32' requires target feature 'i... | {
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7292/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7294 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7294/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7294/comments | https://api.github.com/repos/ollama/ollama/issues/7294/events | https://github.com/ollama/ollama/issues/7294 | 2,602,394,837 | I_kwDOJ0Z1Ps6bHWTV | 7,294 | Ollama cannot find libggml_cuda_v12.so on v0.4.0-rc3 | {
"login": "Blumlaut",
"id": 13604413,
"node_id": "MDQ6VXNlcjEzNjA0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13604413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blumlaut",
"html_url": "https://github.com/Blumlaut",
"followers_url": "https://api.github.com/users/Blu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-10-21T12:26:22 | 2024-10-21T22:26:08 | 2024-10-21T22:26:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running Ollama on Debian Bookworm with the nvidia-cuda drivers installed (1x 3060, 1x 3060 Ti). Upon upgrading to v0.4.0-rc3 (0.3.14 works fine!) I can no longer load any models due to the following error:
```
/tmp/ollama1623163346/runners/cuda_v12/ollama_llama_server: error while loading share... | {
"login": "Blumlaut",
"id": 13604413,
"node_id": "MDQ6VXNlcjEzNjA0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13604413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Blumlaut",
"html_url": "https://github.com/Blumlaut",
"followers_url": "https://api.github.com/users/Blu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7294/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3582/comments | https://api.github.com/repos/ollama/ollama/issues/3582/events | https://github.com/ollama/ollama/issues/3582 | 2,236,315,021 | I_kwDOJ0Z1Ps6FS3WN | 3,582 | Add Tokenize and Detokenize Endpoints to Ollama Server | {
"login": "ParisNeo",
"id": 827993,
"node_id": "MDQ6VXNlcjgyNzk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/827993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParisNeo",
"html_url": "https://github.com/ParisNeo",
"followers_url": "https://api.github.com/users/ParisNe... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 1 | 2024-04-10T19:48:04 | 2024-12-08T07:04:52 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I would like to propose the addition of tokenize and detokenize endpoints to the Ollama server. This feature is crucial for the Ollama client interfaces (such as lollms) to effectively prepare prompts and accurately estimate the number of tokens for the LLMs. Currently, the client uses... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3582/reactions",
"total_count": 71,
"+1": 71,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3582/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/919 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/919/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/919/comments | https://api.github.com/repos/ollama/ollama/issues/919/events | https://github.com/ollama/ollama/issues/919 | 1,964,173,534 | I_kwDOJ0Z1Ps51Euje | 919 | Congrats on being top open source (per InfoWorld) | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2023-10-26T18:51:31 | 2023-10-26T18:51:37 | 2023-10-26T18:51:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://www.infoworld.com/article/3709196/the-best-open-source-software-of-2023.html
Congrats on being mentioned here! That's pretty cool. | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/919/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/919/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4858 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4858/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4858/comments | https://api.github.com/repos/ollama/ollama/issues/4858/events | https://github.com/ollama/ollama/issues/4858 | 2,338,461,429 | I_kwDOJ0Z1Ps6LYhb1 | 4,858 | Can GLM-4-9B-Chat be supported? | {
"login": "Forevery1",
"id": 19872771,
"node_id": "MDQ6VXNlcjE5ODcyNzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19872771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Forevery1",
"html_url": "https://github.com/Forevery1",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-06-06T14:52:18 | 2024-06-06T17:34:02 | 2024-06-06T17:34:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hoping for GLM-4-9B-Chat support | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4858/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4858/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/202 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/202/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/202/comments | https://api.github.com/repos/ollama/ollama/issues/202/events | https://github.com/ollama/ollama/pull/202 | 1,819,197,150 | PR_kwDOJ0Z1Ps5WReT1 | 202 | better error message when model not found on pull | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-07-24T21:49:03 | 2023-08-16T17:46:46 | 2023-07-25T14:30:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/202",
"html_url": "https://github.com/ollama/ollama/pull/202",
"diff_url": "https://github.com/ollama/ollama/pull/202.diff",
"patch_url": "https://github.com/ollama/ollama/pull/202.patch",
"merged_at": "2023-07-25T14:30:48"
} | ```
ollama run orca-dne
pulling manifest
Error: pull model manifest: model not found
```
resolves #180 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/202/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1324 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1324/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1324/comments | https://api.github.com/repos/ollama/ollama/issues/1324/events | https://github.com/ollama/ollama/issues/1324 | 2,017,797,182 | I_kwDOJ0Z1Ps54RSQ- | 1,324 | Pulling a model causes a 99+ download-time estimate towards the end of completing the download | {
"login": "ahmetkca",
"id": 74574469,
"node_id": "MDQ6VXNlcjc0NTc0NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/74574469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmetkca",
"html_url": "https://github.com/ahmetkca",
"followers_url": "https://api.github.com/users/ahm... | [] | closed | false | null | [] | null | 2 | 2023-11-30T03:18:48 | 2024-01-08T02:59:35 | 2024-01-08T02:59:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Here is the output from journalctl
```
Nov 29 22:11:04 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:11:04 llama.go:262: less than 2 GB VRAM available
Nov 29 22:11:04 ahmetkca-Ubuntu-23.10 ollama[23387]: 2023/11/29 22:11:04 routes.go:797: not enough VRAM available, falling back to CPU only
Nov 29 22:12:27 ahme... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1324/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3635 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3635/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3635/comments | https://api.github.com/repos/ollama/ollama/issues/3635/events | https://github.com/ollama/ollama/issues/3635 | 2,241,835,021 | I_kwDOJ0Z1Ps6Fn7AN | 3,635 | jetmoe-8b | {
"login": "Axenide",
"id": 66109459,
"node_id": "MDQ6VXNlcjY2MTA5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/66109459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Axenide",
"html_url": "https://github.com/Axenide",
"followers_url": "https://api.github.com/users/Axenid... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-04-14T00:24:26 | 2024-04-16T07:28:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
JetMoE is a Mixture of Experts model that reaches Llama2 performance while having only 2.2B parameters active. I think this has a lot of potential for low end devices and will be good to have it in the Ollama library.
https://huggingface.co/jetmoe/jetmoe-8b
https://huggingface.co/jet... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3635/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3635/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4966 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4966/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4966/comments | https://api.github.com/repos/ollama/ollama/issues/4966/events | https://github.com/ollama/ollama/issues/4966 | 2,344,556,162 | I_kwDOJ0Z1Ps6LvxaC | 4,966 | Llama 3 70b 16bit precision | {
"login": "Aekansh-Ak",
"id": 64459173,
"node_id": "MDQ6VXNlcjY0NDU5MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/64459173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aekansh-Ak",
"html_url": "https://github.com/Aekansh-Ak",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-06-10T18:19:56 | 2024-06-12T07:01:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As per Ollama documentation, it supports Llama 3 70b 4bit precision.
I was wondering if and how can I use 16bit or 32 bit precision model. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4966/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5358/comments | https://api.github.com/repos/ollama/ollama/issues/5358/events | https://github.com/ollama/ollama/issues/5358 | 2,380,246,632 | I_kwDOJ0Z1Ps6N365o | 5,358 | LLM Compiler Models | {
"login": "pmatos",
"id": 7911,
"node_id": "MDQ6VXNlcjc5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmatos",
"html_url": "https://github.com/pmatos",
"followers_url": "https://api.github.com/users/pmatos/followers",
... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-06-28T11:47:47 | 2024-11-06T12:23:30 | 2024-11-06T12:23:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How can I use the newly release models from meta with Ollama?
https://huggingface.co/collections/facebook/llm-compiler-667c5b05557fe99a9edd25cb
Thanks. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5358/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/618 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/618/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/618/comments | https://api.github.com/repos/ollama/ollama/issues/618/events | https://github.com/ollama/ollama/issues/618 | 1,914,743,146 | I_kwDOJ0Z1Ps5yIKlq | 618 | Trying to load too many layers, vram oom, reverts to cpu only. | {
"login": "aaroncoffey",
"id": 3649791,
"node_id": "MDQ6VXNlcjM2NDk3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3649791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaroncoffey",
"html_url": "https://github.com/aaroncoffey",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 7 | 2023-09-27T05:47:25 | 2023-12-16T21:47:56 | 2023-12-04T19:54:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there,
Based on the logs, it appears that ollama is trying to load too many layers and crashing OOM, this is causing it to revert to CPU only mode, which is not desirable.
Logs:
```
2023/09/26 21:40:42 llama.go:310: starting llama runner
2023/09/26 21:40:42 llama.go:346: waiting for llama runner to start r... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/618/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2944 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2944/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2944/comments | https://api.github.com/repos/ollama/ollama/issues/2944/events | https://github.com/ollama/ollama/issues/2944 | 2,170,443,201 | I_kwDOJ0Z1Ps6BXlXB | 2,944 | Add ENVIRONMENT section to CLI usage | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 1 | 2024-03-06T01:19:30 | 2024-03-08T05:35:39 | 2024-03-07T21:57:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This issue is to track the addition of a help section for configuring the Ollama CLI with environment variables. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2944/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2816 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2816/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2816/comments | https://api.github.com/repos/ollama/ollama/issues/2816/events | https://github.com/ollama/ollama/issues/2816 | 2,159,685,731 | I_kwDOJ0Z1Ps6AujBj | 2,816 | Ubuntu install not ending | {
"login": "Fastidious",
"id": 8352292,
"node_id": "MDQ6VXNlcjgzNTIyOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8352292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fastidious",
"html_url": "https://github.com/Fastidious",
"followers_url": "https://api.github.com/users... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-02-28T19:24:35 | 2024-07-15T23:34:43 | 2024-03-20T16:22:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Trying to install ollama on Ubuntu 23.04, it gets stuck like this:
```bash
>>> Downloading ollama...
######################################################################## 100.0%##O#-#
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to render group...
>>> Adding current user to ollama group.... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2816/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2952 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2952/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2952/comments | https://api.github.com/repos/ollama/ollama/issues/2952/events | https://github.com/ollama/ollama/issues/2952 | 2,171,462,793 | I_kwDOJ0Z1Ps6BbeSJ | 2,952 | Windows CUDA OOM running llama2 on dual RTX 2070 | {
"login": "iamtechysandy",
"id": 65868620,
"node_id": "MDQ6VXNlcjY1ODY4NjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/65868620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamtechysandy",
"html_url": "https://github.com/iamtechysandy",
"followers_url": "https://api.githu... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-03-06T13:03:13 | 2024-03-12T07:25:01 | 2024-03-12T07:25:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | C:\Users\admin>ollama run llama2
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:52764->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
Getting these error While running | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2952/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1615 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1615/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1615/comments | https://api.github.com/repos/ollama/ollama/issues/1615/events | https://github.com/ollama/ollama/issues/1615 | 2,049,390,297 | I_kwDOJ0Z1Ps56JzbZ | 1,615 | 0.1.17: inconsistent vendoring in /build/source | {
"login": "quag",
"id": 35086,
"node_id": "MDQ6VXNlcjM1MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/35086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quag",
"html_url": "https://github.com/quag",
"followers_url": "https://api.github.com/users/quag/followers",
"follo... | [] | closed | false | null | [] | null | 0 | 2023-12-19T20:31:15 | 2023-12-21T20:04:35 | 2023-12-21T20:04:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm building ollama on NixOS. 0.1.16 worked fine, but bumping to use 0.1.17 fails with this error:
```
go: inconsistent vendoring in /build/source:
github.com/stretchr/testify@v1.8.3: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
github.com/davecgh/go-spew@v1.1.1... | {
"login": "quag",
"id": 35086,
"node_id": "MDQ6VXNlcjM1MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/35086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quag",
"html_url": "https://github.com/quag",
"followers_url": "https://api.github.com/users/quag/followers",
"follo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1615/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2544 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2544/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2544/comments | https://api.github.com/repos/ollama/ollama/issues/2544/events | https://github.com/ollama/ollama/issues/2544 | 2,139,029,534 | I_kwDOJ0Z1Ps5_fwAe | 2,544 | API enhancement - create endpoint to fetch hosted models | {
"login": "aroffe99",
"id": 22308552,
"node_id": "MDQ6VXNlcjIyMzA4NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/22308552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aroffe99",
"html_url": "https://github.com/aroffe99",
"followers_url": "https://api.github.com/users/aro... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-02-16T17:11:27 | 2024-05-10T21:37:08 | 2024-05-10T21:37:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'd like to create a UI where users have the option of clicking any model listed in https://ollama.com/library and that model would be pulled in the background. Right now I provide free text for users to insert the model and tag but `model:tag` can get pretty long and error prone for users to type it.
Alternatively ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2544/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2544/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1786 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1786/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1786/comments | https://api.github.com/repos/ollama/ollama/issues/1786/events | https://github.com/ollama/ollama/pull/1786 | 2,066,084,789 | PR_kwDOJ0Z1Ps5jPxbK | 1,786 | add faq about quant and context | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 1 | 2024-01-04T17:46:13 | 2024-02-20T03:17:13 | 2024-02-20T03:17:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1786",
"html_url": "https://github.com/ollama/ollama/pull/1786",
"diff_url": "https://github.com/ollama/ollama/pull/1786.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1786.patch",
"merged_at": null
} | This adds a short faq to describe quantization and context. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1786/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2194 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2194/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2194/comments | https://api.github.com/repos/ollama/ollama/issues/2194/events | https://github.com/ollama/ollama/issues/2194 | 2,101,230,295 | I_kwDOJ0Z1Ps59PjrX | 2,194 | Change the default 11434 port? | {
"login": "CHesketh76",
"id": 38713764,
"node_id": "MDQ6VXNlcjM4NzEzNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/38713764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CHesketh76",
"html_url": "https://github.com/CHesketh76",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 25 | 2024-01-25T21:54:45 | 2025-01-25T12:52:28 | 2024-01-25T23:17:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am getting this error message ```Error: listen tcp 127.0.0.1:11434: bind: address already in use``` every time I run ```ollama serve```. Would it be possible to have the option to change the port?
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2194/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7963 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7963/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7963/comments | https://api.github.com/repos/ollama/ollama/issues/7963/events | https://github.com/ollama/ollama/pull/7963 | 2,722,061,023 | PR_kwDOJ0Z1Ps6ERd7P | 7,963 | openai: finish streaming tool calls as tool_calls | {
"login": "anuraaga",
"id": 198344,
"node_id": "MDQ6VXNlcjE5ODM0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/198344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anuraaga",
"html_url": "https://github.com/anuraaga",
"followers_url": "https://api.github.com/users/anuraag... | [] | open | false | null | [] | null | 8 | 2024-12-06T04:52:25 | 2025-01-26T09:00:15 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7963",
"html_url": "https://github.com/ollama/ollama/pull/7963",
"diff_url": "https://github.com/ollama/ollama/pull/7963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7963.patch",
"merged_at": null
} | When a response contains tool_calls it finishes the chat, and we see this already happening in Ollama in non-chunk mode. This ensures that the chunk with tool calls contains the finish reason, not a following one, while any following ones are not sent - their choice with empty content will conflict with the tool call r... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7963/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7963/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2459 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2459/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2459/comments | https://api.github.com/repos/ollama/ollama/issues/2459/events | https://github.com/ollama/ollama/pull/2459 | 2,129,360,739 | PR_kwDOJ0Z1Ps5mmB7W | 2,459 | Always add token to cache_tokens | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-12T03:29:38 | 2024-02-12T16:10:17 | 2024-02-12T16:10:16 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2459",
"html_url": "https://github.com/ollama/ollama/pull/2459",
"diff_url": "https://github.com/ollama/ollama/pull/2459.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2459.patch",
"merged_at": "2024-02-12T16:10:16"
} | The diff is a bit hard to read, but this is the actual fix for our `01` patch that fixes due to the kv cache being full
I believe this fixes #2339 and #1458 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2459/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2459/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1383 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1383/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1383/comments | https://api.github.com/repos/ollama/ollama/issues/1383/events | https://github.com/ollama/ollama/pull/1383 | 2,025,051,507 | PR_kwDOJ0Z1Ps5hHfZX | 1,383 | revert cli to use /api/generate | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-05T00:27:13 | 2023-12-05T00:35:31 | 2023-12-05T00:35:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1383",
"html_url": "https://github.com/ollama/ollama/pull/1383",
"diff_url": "https://github.com/ollama/ollama/pull/1383.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1383.patch",
"merged_at": "2023-12-05T00:35:30"
} | This change reverts the CLI to use `/api/generate` instead of `/api/chat`. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1383/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8528/comments | https://api.github.com/repos/ollama/ollama/issues/8528/events | https://github.com/ollama/ollama/issues/8528 | 2,803,290,989 | I_kwDOJ0Z1Ps6nFtNt | 8,528 | don't show the thinking process | {
"login": "sunburst-yz",
"id": 37734140,
"node_id": "MDQ6VXNlcjM3NzM0MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/37734140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunburst-yz",
"html_url": "https://github.com/sunburst-yz",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2025-01-22T03:40:56 | 2025-01-26T18:33:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use DeepSeek-R1, the thinking process shown does not make sense to me, I only want to see the final result.
 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8528/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8528/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2804 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2804/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2804/comments | https://api.github.com/repos/ollama/ollama/issues/2804/events | https://github.com/ollama/ollama/issues/2804 | 2,158,388,581 | I_kwDOJ0Z1Ps6ApmVl | 2,804 | Feature: Mistral Next | {
"login": "Dimfred",
"id": 29997904,
"node_id": "MDQ6VXNlcjI5OTk3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/29997904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dimfred",
"html_url": "https://github.com/Dimfred",
"followers_url": "https://api.github.com/users/Dimfre... | [] | closed | false | null | [] | null | 2 | 2024-02-28T08:28:54 | 2024-02-29T14:14:03 | 2024-02-29T14:14:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Mistral has recently released their new `mistral-next` model.
I am not sure whether this is the place to ask for model requests, but it would be great to get that integrated.
Thank you for all the work you have done so far!
https://www.reddit.com/r/LocalLLaMA/comments/1as15p1/new_mistralnext_model_at_httpschatlmsy... | {
"login": "Dimfred",
"id": 29997904,
"node_id": "MDQ6VXNlcjI5OTk3OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/29997904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dimfred",
"html_url": "https://github.com/Dimfred",
"followers_url": "https://api.github.com/users/Dimfre... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2804/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2804/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8625 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8625/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8625/comments | https://api.github.com/repos/ollama/ollama/issues/8625/events | https://github.com/ollama/ollama/issues/8625 | 2,814,698,214 | I_kwDOJ0Z1Ps6nxOLm | 8,625 | Individual quantized model download count | {
"login": "Abubakkar13",
"id": 45032674,
"node_id": "MDQ6VXNlcjQ1MDMyNjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/45032674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abubakkar13",
"html_url": "https://github.com/Abubakkar13",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 0 | 2025-01-28T05:52:07 | 2025-01-28T17:13:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey,
I was been exploring the models on site, It would be great to have a total download count for each quantized version (e.g., q8_0, q4_K_M) to show how many times they’ve been downloaded. This would help users gauge the popularity and reliability of different models. Having clear download statistics for each versi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8625/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3894 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3894/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3894/comments | https://api.github.com/repos/ollama/ollama/issues/3894/events | https://github.com/ollama/ollama/issues/3894 | 2,262,363,816 | I_kwDOJ0Z1Ps6G2O6o | 3,894 | I have tested 4-5 phi-3-128K-Instruct models from different providers with different quants, all GGUF files, none are runnable with ollama | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2024-04-24T23:47:13 | 2024-05-01T22:03:54 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama can import them, but not run them.
(Pythogora) developer@ai:~/PROJECTS/autogen$ ~/ollama/ollama run phi-3-mini-128k-instruct.Q6_K
Error: llama runner process no longer running: 1 error:failed to create context with model '/home/developer/.ollama/models/blobs/sha256-78f928e77e2470c7c09... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3894/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2917 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2917/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2917/comments | https://api.github.com/repos/ollama/ollama/issues/2917/events | https://github.com/ollama/ollama/pull/2917 | 2,167,272,871 | PR_kwDOJ0Z1Ps5onPPL | 2,917 | Add SemanticFinder to README.md | {
"login": "do-me",
"id": 47481567,
"node_id": "MDQ6VXNlcjQ3NDgxNTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/47481567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/do-me",
"html_url": "https://github.com/do-me",
"followers_url": "https://api.github.com/users/do-me/follow... | [] | closed | false | null | [] | null | 1 | 2024-03-04T16:38:43 | 2024-11-21T08:45:25 | 2024-11-21T08:45:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2917",
"html_url": "https://github.com/ollama/ollama/pull/2917",
"diff_url": "https://github.com/ollama/ollama/pull/2917.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2917.patch",
"merged_at": null
} | [SemanticFinder](https://github.com/do-me/SemanticFinder) is an in-browser tool for semantic search and now offers an Ollama integration to help understand the search results.
Announcement on Ollama [r/ollama/](https://www.reddit.com/r/ollama/comments/1b79c23/inbrowser_rag_feeding_ollama/) | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2917/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2611 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2611/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2611/comments | https://api.github.com/repos/ollama/ollama/issues/2611/events | https://github.com/ollama/ollama/issues/2611 | 2,144,048,287 | I_kwDOJ0Z1Ps5_y5Sf | 2,611 | Support for moondream? | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2024-02-20T10:28:32 | 2024-04-06T12:07:18 | 2024-02-20T18:55:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there support for [moondream](https://github.com/vikhyat/moondream)?
It's like a small llava.

| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2611/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3327 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3327/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3327/comments | https://api.github.com/repos/ollama/ollama/issues/3327/events | https://github.com/ollama/ollama/issues/3327 | 2,204,451,995 | I_kwDOJ0Z1Ps6DZUSb | 3,327 | Module name is out of date and prevents import from other projects | {
"login": "smxlong",
"id": 9043733,
"node_id": "MDQ6VXNlcjkwNDM3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9043733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smxlong",
"html_url": "https://github.com/smxlong",
"followers_url": "https://api.github.com/users/smxlong/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 1 | 2024-03-24T16:53:08 | 2024-03-26T20:04:18 | 2024-03-26T20:04:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Attempting to import parts of the ollama tree (such as the `api` directory) into an external project fails, because the module name declared in `go.mod` is still `github.com/jmorganca/ollama`. This apparently doesn't impact building the project itself, but causes errors when attempting to import... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3327/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5955 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5955/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5955/comments | https://api.github.com/repos/ollama/ollama/issues/5955/events | https://github.com/ollama/ollama/issues/5955 | 2,430,533,623 | I_kwDOJ0Z1Ps6Q3v_3 | 5,955 | Model request for Llama Guard 3 | {
"login": "prane-eth",
"id": 48318416,
"node_id": "MDQ6VXNlcjQ4MzE4NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/48318416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prane-eth",
"html_url": "https://github.com/prane-eth",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-07-25T16:40:21 | 2024-09-04T18:25:15 | 2024-09-04T18:25:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Llama Guard 3 is a model for AI Safety. It is released along with Llama 3.1.
https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3/
https://huggingface.co/meta-llama/Llama-Guard-3-8B
https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/MODEL_CARD.md | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5955/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5955/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1475 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1475/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1475/comments | https://api.github.com/repos/ollama/ollama/issues/1475/events | https://github.com/ollama/ollama/pull/1475 | 2,036,653,462 | PR_kwDOJ0Z1Ps5hvCGm | 1,475 | Add support for mixture of experts (MoE) and Mixtral | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 14 | 2023-12-11T22:38:33 | 2023-12-13T22:15:11 | 2023-12-13T22:15:10 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1475",
"html_url": "https://github.com/ollama/ollama/pull/1475",
"diff_url": "https://github.com/ollama/ollama/pull/1475.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1475.patch",
"merged_at": "2023-12-13T22:15:10"
} | To build this branch:
```
go generate ./...
go build .
```
```
./ollama serve
# in another terminal
./ollama run jmorgan/mixtral
```
resolves #1470
resolves #1457
resolves #1502 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1475/reactions",
"total_count": 26,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 22,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1475/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4851 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4851/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4851/comments | https://api.github.com/repos/ollama/ollama/issues/4851/events | https://github.com/ollama/ollama/issues/4851 | 2,337,811,989 | I_kwDOJ0Z1Ps6LWC4V | 4,851 | Add `strings` module from Go for template processing | {
"login": "qbit-",
"id": 4794088,
"node_id": "MDQ6VXNlcjQ3OTQwODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4794088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qbit-",
"html_url": "https://github.com/qbit-",
"followers_url": "https://api.github.com/users/qbit-/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-06-06T09:38:07 | 2024-06-06T09:38:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, the `TEMPLATE` parameter in the `Modelfile` is a simple Go template. For example, I can do something like this to print the first 25 characters of the model's response:
```go
{{ printf "%.*s" 25 .Response }}
```
However, this basic processing is not usable. What I'm trying to do is to remove the BOS tok... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4851/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4494 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4494/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4494/comments | https://api.github.com/repos/ollama/ollama/issues/4494/events | https://github.com/ollama/ollama/issues/4494 | 2,302,141,566 | I_kwDOJ0Z1Ps6JN-R- | 4,494 | How to load a model from local disk path? | {
"login": "quzhixue-Kimi",
"id": 8235746,
"node_id": "MDQ6VXNlcjgyMzU3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8235746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quzhixue-Kimi",
"html_url": "https://github.com/quzhixue-Kimi",
"followers_url": "https://api.github.... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-05-17T08:37:59 | 2024-10-24T11:50:53 | 2024-05-20T07:44:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | hi there,
I have two Ubuntu 20.04 servers (one a local machine, the other a production server), both with the latest Ollama binary installed following the document at https://github.com/ollama/ollama/blob/main/docs/linux.md
My local Ubuntu 20.04 machine has internet access to run the command and download the llama3 and llama...
"login": "quzhixue-Kimi",
"id": 8235746,
"node_id": "MDQ6VXNlcjgyMzU3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8235746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quzhixue-Kimi",
"html_url": "https://github.com/quzhixue-Kimi",
"followers_url": "https://api.github.... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4494/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4494/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3604 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3604/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3604/comments | https://api.github.com/repos/ollama/ollama/issues/3604/events | https://github.com/ollama/ollama/pull/3604 | 2,238,449,709 | PR_kwDOJ0Z1Ps5sZksd | 3,604 | Fix rocm deps with new subprocess paths | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-11T19:52:29 | 2024-04-11T20:08:35 | 2024-04-11T20:08:29 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3604",
"html_url": "https://github.com/ollama/ollama/pull/3604",
"diff_url": "https://github.com/ollama/ollama/pull/3604.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3604.patch",
"merged_at": "2024-04-11T20:08:29"
} | This fixes a regression on main and in 0.1.32-rc1 where the rocm dependency file was missing the libraries. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3604/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5103 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5103/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5103/comments | https://api.github.com/repos/ollama/ollama/issues/5103/events | https://github.com/ollama/ollama/pull/5103 | 2,358,233,283 | PR_kwDOJ0Z1Ps5yvdEx | 5,103 | Revert powershell jobs, but keep nvcc and cmake parallelism | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-17T20:50:05 | 2024-06-17T21:23:21 | 2024-06-17T21:23:18 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5103",
"html_url": "https://github.com/ollama/ollama/pull/5103",
"diff_url": "https://github.com/ollama/ollama/pull/5103.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5103.patch",
"merged_at": "2024-06-17T21:23:18"
} | It doesn't look like the added complexity of trying to parallelize in powershell is worth it, so remove that, but retain the other parallelism flags for cmake and nvcc. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5103/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6262 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6262/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6262/comments | https://api.github.com/repos/ollama/ollama/issues/6262/events | https://github.com/ollama/ollama/issues/6262 | 2,456,598,555 | I_kwDOJ0Z1Ps6SbLgb | 6,262 | Batch embeddings get progressively worse with larger batches | {
"login": "jorgetrejo36",
"id": 65737813,
"node_id": "MDQ6VXNlcjY1NzM3ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/65737813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgetrejo36",
"html_url": "https://github.com/jorgetrejo36",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.git... | null | 17 | 2024-08-08T20:47:39 | 2024-11-05T13:31:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
All of the results I am reporting use the ollama Python library.
As I create embeddings using ollama.embed(), the embeddings get progressively worse as batch sizes get larger. This is compared against creating embeddings one at a time. There seems to be a jump that happens at batch sizes...
"url": "https://api.github.com/repos/ollama/ollama/issues/6262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6262/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3044 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3044/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3044/comments | https://api.github.com/repos/ollama/ollama/issues/3044/events | https://github.com/ollama/ollama/pull/3044 | 2,177,824,915 | PR_kwDOJ0Z1Ps5pLJjA | 3,044 | convert: fix shape | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-10T17:44:26 | 2024-03-11T16:56:58 | 2024-03-11T16:56:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3044",
"html_url": "https://github.com/ollama/ollama/pull/3044",
"diff_url": "https://github.com/ollama/ollama/pull/3044.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3044.patch",
"merged_at": "2024-03-11T16:56:57"
} | This commit reverts 18979ad4a1d40d04e3b981a477fa6323a40304b6 which was merged in #3014
#3014 broke convert by setting dimensions to an array filled with 1s, which is incorrect. While this is how the reader works, the writer only writes the array items if the value is greater than zero[^1]. Filling with 1s incorrectly potenti...
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3044/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1127/comments | https://api.github.com/repos/ollama/ollama/issues/1127/events | https://github.com/ollama/ollama/pull/1127 | 1,993,341,766 | PR_kwDOJ0Z1Ps5fcVEG | 1,127 | Move /generate format to optional parameters | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-11-14T18:44:30 | 2023-11-14T21:12:31 | 2023-11-14T21:12:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1127",
"html_url": "https://github.com/ollama/ollama/pull/1127",
"diff_url": "https://github.com/ollama/ollama/pull/1127.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1127.patch",
"merged_at": "2023-11-14T21:12:30"
} | This field is optional and should be under the `Advanced parameters` header | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1127/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3978 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3978/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3978/comments | https://api.github.com/repos/ollama/ollama/issues/3978/events | https://github.com/ollama/ollama/issues/3978 | 2,267,001,000 | I_kwDOJ0Z1Ps6HH7Co | 3,978 | Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: | {
"login": "jannoname",
"id": 168279140,
"node_id": "U_kgDOCge8ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/168279140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannoname",
"html_url": "https://github.com/jannoname",
"followers_url": "https://api.github.com/users/jannon... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-04-27T12:46:00 | 2024-10-17T12:26:51 | 2024-05-21T18:18:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The last Windows version of Ollama didn't work on my laptop. It can't connect to 11434 anymore - even if the port is free or Ollama blocks the port itself.
```
C:\Users\XXX>ollama list
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: Es konnte keine Verbindung hergestellt werd... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3978/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3978/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1153 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1153/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1153/comments | https://api.github.com/repos/ollama/ollama/issues/1153/events | https://github.com/ollama/ollama/issues/1153 | 1,996,751,591 | I_kwDOJ0Z1Ps53BALn | 1,153 | CodeGPT extension cannot connect to locally served ollama Error: connect ECONNREFUSED ::1:11434 | {
"login": "wahreChrist",
"id": 61061924,
"node_id": "MDQ6VXNlcjYxMDYxOTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/61061924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahreChrist",
"html_url": "https://github.com/wahreChrist",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 5 | 2023-11-16T12:28:31 | 2023-11-17T00:36:36 | 2023-11-17T00:36:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm trying to make the CodeGPT extension work to interact with Ollama in VS Code, but it gives me this error in the devtools console:
```
[Extension Host] No active text editor found.
log.ts:441 ERR [Extension Host] Error: Error: connect ECONNREFUSED ::1:11434
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1153/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6244 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6244/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6244/comments | https://api.github.com/repos/ollama/ollama/issues/6244/events | https://github.com/ollama/ollama/issues/6244 | 2,454,467,251 | I_kwDOJ0Z1Ps6STDKz | 6,244 | 1001st Issue | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-08-07T22:38:15 | 2024-08-07T22:40:16 | 2024-08-07T22:40:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | yay 😁 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6244/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5939 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5939/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5939/comments | https://api.github.com/repos/ollama/ollama/issues/5939/events | https://github.com/ollama/ollama/issues/5939 | 2,428,885,261 | I_kwDOJ0Z1Ps6QxdkN | 5,939 | Error: invalid file magic when trying to import gte-Qwen2-7B-instruct gguf model to ollama instance | {
"login": "CHNVigny",
"id": 9402746,
"node_id": "MDQ6VXNlcjk0MDI3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9402746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CHNVigny",
"html_url": "https://github.com/CHNVigny",
"followers_url": "https://api.github.com/users/CHNVi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2024-07-25T03:13:37 | 2024-11-11T16:52:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**I got this error:**
```
root@bccf6f1eb00f:/data/models# ollama create gte_qwen2:7b -f Modelfile
transferring model data
Error: invalid file magic
```
**This is my Modelfile:**
FROM gte_qwen2.gguf
TEMPLATE "{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5939/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/281 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/281/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/281/comments | https://api.github.com/repos/ollama/ollama/issues/281/events | https://github.com/ollama/ollama/issues/281 | 1,836,820,475 | I_kwDOJ0Z1Ps5te6f7 | 281 | Consider a non streaming api for `/api/generate` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 4 | 2023-08-04T14:10:53 | 2023-10-11T16:54:28 | 2023-10-11T16:54:28 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If `Content-Type: application/json` is set, we should consider returning a single large json object vs an event stream. This would be an elegant design as there are no new flags | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/281/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/281/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7858 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7858/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7858/comments | https://api.github.com/repos/ollama/ollama/issues/7858/events | https://github.com/ollama/ollama/issues/7858 | 2,698,142,313 | I_kwDOJ0Z1Ps6g0mJp | 7,858 | Can you make the normalize optional for embeddings? | {
"login": "BeNhNp",
"id": 33339730,
"node_id": "MDQ6VXNlcjMzMzM5NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/33339730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeNhNp",
"html_url": "https://github.com/BeNhNp",
"followers_url": "https://api.github.com/users/BeNhNp/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-11-27T11:02:28 | 2024-11-27T15:29:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
https://ollama.com/library/nomic-embed-text:v1.5
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
```
access "http://127.0.0.1:%d/embedding" is ok, [ollama_llama_server](https://g... | {
"login": "BeNhNp",
"id": 33339730,
"node_id": "MDQ6VXNlcjMzMzM5NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/33339730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeNhNp",
"html_url": "https://github.com/BeNhNp",
"followers_url": "https://api.github.com/users/BeNhNp/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7858/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/2352 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2352/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2352/comments | https://api.github.com/repos/ollama/ollama/issues/2352/events | https://github.com/ollama/ollama/issues/2352 | 2,117,262,900 | I_kwDOJ0Z1Ps5-Mt40 | 2,352 | API streaming and non streaming mode produces garbage output after the first query | {
"login": "nextdimension",
"id": 3390177,
"node_id": "MDQ6VXNlcjMzOTAxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3390177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nextdimension",
"html_url": "https://github.com/nextdimension",
"followers_url": "https://api.github.... | [] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 3 | 2024-02-04T19:10:22 | 2024-03-12T16:55:11 | 2024-03-12T16:55:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use the ollama API, the first response works fine; then, without changing anything, subsequent requests respond as if the system prompt were ignored and spit out garbage. Restarting the ollama service works until the second query.
Latest version and one version previous had the same issue.
My setup... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2352/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2477 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2477/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2477/comments | https://api.github.com/repos/ollama/ollama/issues/2477/events | https://github.com/ollama/ollama/pull/2477 | 2,133,008,998 | PR_kwDOJ0Z1Ps5myepF | 2,477 | Update README.md to include link to Ollama-ex Elixir library | {
"login": "lebrunel",
"id": 124721263,
"node_id": "U_kgDOB28Ybw",
"avatar_url": "https://avatars.githubusercontent.com/u/124721263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lebrunel",
"html_url": "https://github.com/lebrunel",
"followers_url": "https://api.github.com/users/lebrunel/... | [] | closed | false | null | [] | null | 1 | 2024-02-13T19:37:20 | 2024-02-13T19:40:51 | 2024-02-13T19:40:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2477",
"html_url": "https://github.com/ollama/ollama/pull/2477",
"diff_url": "https://github.com/ollama/ollama/pull/2477.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2477.patch",
"merged_at": "2024-02-13T19:40:44"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2477/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5869 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5869/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5869/comments | https://api.github.com/repos/ollama/ollama/issues/5869/events | https://github.com/ollama/ollama/issues/5869 | 2,424,561,280 | I_kwDOJ0Z1Ps6Qg96A | 5,869 | `Error: file does not exist` but it exists | {
"login": "DevLLM",
"id": 131604629,
"node_id": "U_kgDOB9gglQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131604629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevLLM",
"html_url": "https://github.com/DevLLM",
"followers_url": "https://api.github.com/users/DevLLM/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 5 | 2024-07-23T08:14:44 | 2024-11-18T01:17:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, I want to push my model to ollama, but I get the error
`retrieving manifest`
`Error: file does not exist`
but the problem is that I have the file, specifically "C:\Users\User\.ollama\models\manifests\registry.ollama.ai\_\mymodel\latest"
and my username is _ (link: [https://olla... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5869/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1095 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1095/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1095/comments | https://api.github.com/repos/ollama/ollama/issues/1095/events | https://github.com/ollama/ollama/pull/1095 | 1,989,220,342 | PR_kwDOJ0Z1Ps5fOaW4 | 1,095 | Add JSON mode to `ollama run` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2023-11-12T03:29:21 | 2023-11-14T02:54:03 | 2023-11-14T02:54:02 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1095",
"html_url": "https://github.com/ollama/ollama/pull/1095",
"diff_url": "https://github.com/ollama/ollama/pull/1095.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1095.patch",
"merged_at": "2023-11-14T02:54:02"
} | Allow using JSON mode from the `ollama run` command line
* `--format json`: a new command line flag
* `/set format json`: in the interactive `ollama run` terminal | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1095/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/412 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/412/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/412/comments | https://api.github.com/repos/ollama/ollama/issues/412/events | https://github.com/ollama/ollama/pull/412 | 1,867,533,219 | PR_kwDOJ0Z1Ps5Y0jCH | 412 | update README.md | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-25T18:45:02 | 2023-08-27T04:26:35 | 2023-08-27T04:26:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/412",
"html_url": "https://github.com/ollama/ollama/pull/412",
"diff_url": "https://github.com/ollama/ollama/pull/412.diff",
"patch_url": "https://github.com/ollama/ollama/pull/412.patch",
"merged_at": "2023-08-27T04:26:34"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/412/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3598 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3598/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3598/comments | https://api.github.com/repos/ollama/ollama/issues/3598/events | https://github.com/ollama/ollama/issues/3598 | 2,237,947,026 | I_kwDOJ0Z1Ps6FZFyS | 3,598 | Allow users the ability to manage website access without using terminal commands | {
"login": "dahjson",
"id": 8768601,
"node_id": "MDQ6VXNlcjg3Njg2MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8768601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dahjson",
"html_url": "https://github.com/dahjson",
"followers_url": "https://api.github.com/users/dahjson/... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 3 | 2024-04-11T14:59:32 | 2024-11-15T17:15:27 | 2024-11-15T17:15:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Improve the user experience for allowing website access to Ollama models. Currently, users are required to run terminal commands to get this working. If the user restarts their computer they have to run these commands again, which is not ideal.
### How should we solve this?
Add configu... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3598/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2823 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2823/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2823/comments | https://api.github.com/repos/ollama/ollama/issues/2823/events | https://github.com/ollama/ollama/issues/2823 | 2,160,213,223 | I_kwDOJ0Z1Ps6Awjzn | 2,823 | rocm crashes on `Illegal seek for GPU arch : gfx1032` | {
"login": "turlapati",
"id": 4550654,
"node_id": "MDQ6VXNlcjQ1NTA2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4550654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turlapati",
"html_url": "https://github.com/turlapati",
"followers_url": "https://api.github.com/users/tu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-02-29T02:09:53 | 2024-03-02T01:30:17 | 2024-03-02T01:30:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | user@HTML:~$ ollama run gemma
Error: Post "http://127.0.0.1:11434/api/chat": EOF
...
[crash_0.1.27_gemma_rcom.txt](https://github.com/ollama/ollama/files/14441945/crash_0.1.27_gemma_rcom.txt)
loading library /tmp/ollama3347055972/rocm_v6/libext_server.so
time=2024-02-28T20:59:58.907-05:00 level=INFO source=dyn... | {
"login": "turlapati",
"id": 4550654,
"node_id": "MDQ6VXNlcjQ1NTA2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4550654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turlapati",
"html_url": "https://github.com/turlapati",
"followers_url": "https://api.github.com/users/tu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2823/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4128 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4128/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4128/comments | https://api.github.com/repos/ollama/ollama/issues/4128/events | https://github.com/ollama/ollama/issues/4128 | 2,277,793,272 | I_kwDOJ0Z1Ps6HxF34 | 4,128 | Normalization of output from embedding model | {
"login": "hagemon",
"id": 15187235,
"node_id": "MDQ6VXNlcjE1MTg3MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15187235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hagemon",
"html_url": "https://github.com/hagemon",
"followers_url": "https://api.github.com/users/hagemo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-05-03T14:24:45 | 2024-07-02T14:42:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use Ollama Embedding together with Langchain Retriever's `get_relevant_documents`, I always get a score that around 200. However, when I use HuggingFaceEmbedding, this value is between 0 and 1.
So I continued to explore the reason and, according to the official documentation, used OllamaEmbedding to vectorize... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4128/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4128/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1946/comments | https://api.github.com/repos/ollama/ollama/issues/1946/events | https://github.com/ollama/ollama/issues/1946 | 2,078,214,003 | I_kwDOJ0Z1Ps573wdz | 1,946 | `SIGSEGV: segmentation violation` when shutting down server with ctrl+c | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-01-12T07:14:00 | 2024-03-12T18:14:33 | 2024-03-12T18:14:33 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
[GIN] 2024/01/12 - 12:38:39 | 200 | 5.985573917s | 127.0.0.1 | POST "/api/chat"
2024/01/12 12:38:52 ext_server_common.go:158: loaded 0 images
^Cggml_metal_free: deallocating
SIGSEGV: segmentation violation
``` | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1946/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3275 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3275/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3275/comments | https://api.github.com/repos/ollama/ollama/issues/3275/events | https://github.com/ollama/ollama/issues/3275 | 2,198,497,833 | I_kwDOJ0Z1Ps6DCmop | 3,275 | Resumable `ollama push` | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677370291,
"node_id": ... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 1 | 2024-03-20T20:27:36 | 2024-09-04T04:43:05 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When pushing models to ollama.com's registry, if the push fails part way through, resuming it starts from scratch again.
This is quite painful when you've spent hours trying to upload from ~~Australian~~ a slow internet link as it can mean many hours of uploading and hopi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3275/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5359 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5359/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5359/comments | https://api.github.com/repos/ollama/ollama/issues/5359/events | https://github.com/ollama/ollama/issues/5359 | 2,380,263,043 | I_kwDOJ0Z1Ps6N3-6D | 5,359 | Both Gemma2 models fail with cudaMalloc error despite available GPU memory, while other models run successfully. | {
"login": "chiragbharambe",
"id": 30945307,
"node_id": "MDQ6VXNlcjMwOTQ1MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/30945307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragbharambe",
"html_url": "https://github.com/chiragbharambe",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-06-28T11:55:21 | 2024-06-28T14:15:14 | 2024-06-28T14:15:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Archlinux 6.6.35-2-lts
Ollama version 0.1.47
Latest ollama-cuda installed via pacman. The Ollama system service is active. All other models I have work as expected. Both gemma2 9b and 27b give me the same error. RAM is not the issue; I can run mixtral8x7b.
Hardware
- CPU: 5800HS
- GPU: RTX 30... | {
"login": "chiragbharambe",
"id": 30945307,
"node_id": "MDQ6VXNlcjMwOTQ1MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/30945307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragbharambe",
"html_url": "https://github.com/chiragbharambe",
"followers_url": "https://api.gi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5359/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3409 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3409/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3409/comments | https://api.github.com/repos/ollama/ollama/issues/3409/events | https://github.com/ollama/ollama/issues/3409 | 2,215,805,241 | I_kwDOJ0Z1Ps6EEoE5 | 3,409 | API to terminate the running job before the completion | {
"login": "ansis-m",
"id": 78793148,
"node_id": "MDQ6VXNlcjc4NzkzMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/78793148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ansis-m",
"html_url": "https://github.com/ansis-m",
"followers_url": "https://api.github.com/users/ansis-... | [] | closed | false | null | [] | null | 2 | 2024-03-29T18:22:36 | 2024-04-15T19:32:53 | 2024-04-15T19:32:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I am using Ollama via the REST API. Sometimes, when the model streams a long response (which can be quite slow on my computer), I would like to terminate the process before completion. I checked the API documentation and did not find an option for this. Even if I unsubscribe from the str...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3409/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8065 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8065/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8065/comments | https://api.github.com/repos/ollama/ollama/issues/8065/events | https://github.com/ollama/ollama/issues/8065 | 2,735,086,935 | I_kwDOJ0Z1Ps6jBh1X | 8,065 | dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: server misbehaving | {
"login": "szzhh",
"id": 78521539,
"node_id": "MDQ6VXNlcjc4NTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szzhh",
"html_url": "https://github.com/szzhh",
"followers_url": "https://api.github.com/users/szzhh/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-12-12T07:30:04 | 2024-12-12T08:02:25 | 2024-12-12T08:02:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I ran "ollama pull mxbai-embed-large", I got:
'''
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/mxbai-embed-large/manifests/latest": dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: server misbehaving
'''
### OS
Linux
### GPU
Nvidia
... | {
"login": "szzhh",
"id": 78521539,
"node_id": "MDQ6VXNlcjc4NTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szzhh",
"html_url": "https://github.com/szzhh",
"followers_url": "https://api.github.com/users/szzhh/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8065/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4993 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4993/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4993/comments | https://api.github.com/repos/ollama/ollama/issues/4993/events | https://github.com/ollama/ollama/issues/4993 | 2,347,847,973 | I_kwDOJ0Z1Ps6L8VEl | 4,993 | AI Models stop working after few user only messages. | {
"login": "TheUntitledGoose",
"id": 75637597,
"node_id": "MDQ6VXNlcjc1NjM3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/75637597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheUntitledGoose",
"html_url": "https://github.com/TheUntitledGoose",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-06-12T05:34:51 | 2024-06-13T21:24:56 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've tested this with `dolphin-llama3:latest` and `llama-dolphin:latest`. I believe this also might be an issue with other models.
This is less of an Ollama issue, but after supplying the messages field with a few user messages such as:
```js
const history = [
{ role: 'user', content: 're... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4993/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2524 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2524/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2524/comments | https://api.github.com/repos/ollama/ollama/issues/2524/events | https://github.com/ollama/ollama/issues/2524 | 2,137,467,945 | I_kwDOJ0Z1Ps5_Zywp | 2,524 | "CPU does not have AVX or AVX2, disabling GPU support" | {
"login": "khromov",
"id": 1207507,
"node_id": "MDQ6VXNlcjEyMDc1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1207507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khromov",
"html_url": "https://github.com/khromov",
"followers_url": "https://api.github.com/users/khromov/... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-15T21:19:27 | 2024-02-16T16:25:56 | 2024-02-16T16:25:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 👋 Just downloaded the latest Windows preview. Ollama does work, but the GPU is not being used at all, as the title message indicates. Using Windows 11, an RTX 2070, and the latest Nvidia Game Ready drivers.
Command:
```
ollama run llama2
>>> Hello!
...
```
Log:
```
time=2024-02-15T22:13:55.132+01:00 level=INFO source=im... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2524/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3511 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3511/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3511/comments | https://api.github.com/repos/ollama/ollama/issues/3511/events | https://github.com/ollama/ollama/issues/3511 | 2,229,135,074 | I_kwDOJ0Z1Ps6E3ebi | 3,511 | On Windows, launching ollama from the shortcut or executable by clicking causes very slow tokens generation, but launching from commandline is fast | {
"login": "lrq3000",
"id": 1118942,
"node_id": "MDQ6VXNlcjExMTg5NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1118942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lrq3000",
"html_url": "https://github.com/lrq3000",
"followers_url": "https://api.github.com/users/lrq3000/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 41 | 2024-04-06T08:08:35 | 2024-10-17T16:58:52 | 2024-09-21T23:54:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Since I installed ollama (v0.1.30) on Windows 11 Pro, I have run into a peculiar issue. When I launch ollama from the installed shortcut, which launches "ollama app.exe", or when I boot up my OS (which also starts up the same shortcut, as configured by the ollama installer), ollama is extremely slow... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3511/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3511/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5828 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5828/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5828/comments | https://api.github.com/repos/ollama/ollama/issues/5828/events | https://github.com/ollama/ollama/issues/5828 | 2,421,327,515 | I_kwDOJ0Z1Ps6QUoab | 5,828 | Will paged attention be added when OLLAMA_NUM_PARALLEL is set higher than 1? | {
"login": "b-Snaas",
"id": 117536828,
"node_id": "U_kgDOBwF4PA",
"avatar_url": "https://avatars.githubusercontent.com/u/117536828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b-Snaas",
"html_url": "https://github.com/b-Snaas",
"followers_url": "https://api.github.com/users/b-Snaas/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-07-21T10:04:29 | 2024-10-09T13:15:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I experimented with OLLAMA_NUM_PARALLEL on GPUs with a large amount of VRAM, but I could not get a real benefit in terms of total aggregated tokens per second when posting 10 requests at the same time. I assume this is because Ollama does not have paged attention. Are there plans to optimize inference for large amount of co... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5828/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5828/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6650 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6650/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6650/comments | https://api.github.com/repos/ollama/ollama/issues/6650/events | https://github.com/ollama/ollama/issues/6650 | 2,506,719,889 | I_kwDOJ0Z1Ps6VaYKR | 6,650 | ollama serve does not finished after long waiting | {
"login": "lifelongeeek",
"id": 127937907,
"node_id": "U_kgDOB6Atcw",
"avatar_url": "https://avatars.githubusercontent.com/u/127937907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifelongeeek",
"html_url": "https://github.com/lifelongeeek",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-05T04:05:30 | 2024-09-05T04:25:52 | 2024-09-05T04:25:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried `ollama serve` in a container, but it did not complete even after waiting a very long time. Could anyone suggest a solution to this?
Here is the log.
```
root@d39fcb3d6754: # ollama serve
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6650/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7240/comments | https://api.github.com/repos/ollama/ollama/issues/7240/events | https://github.com/ollama/ollama/issues/7240 | 2,594,920,904 | I_kwDOJ0Z1Ps6aq1nI | 7,240 | Pull Private Huggingface Model | {
"login": "DaddyCodesAlot",
"id": 176133641,
"node_id": "U_kgDOCn-WCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/176133641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaddyCodesAlot",
"html_url": "https://github.com/DaddyCodesAlot",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-10-17T14:31:26 | 2024-11-22T14:14:54 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, so I believe it's now possible to pull Hugging Face models directly by prepending hf.co to the pull statement. I would just like some clarity on how this works with private models. I have my Hugging Face token set as an environment variable, but I can't seem to pull a private model. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7240/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7240/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5423 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5423/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5423/comments | https://api.github.com/repos/ollama/ollama/issues/5423/events | https://github.com/ollama/ollama/issues/5423 | 2,384,900,396 | I_kwDOJ0Z1Ps6OJrEs | 5,423 | `ollama create` progress | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.... | null | 0 | 2024-07-01T23:23:05 | 2024-07-16T23:49:29 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | `ollama create` doesn't report any progress back, the way `ollama pull` and `ollama push` do.
- [x] Copying files (transferring model data)
- [x] Quantization
- [ ] Converting
Note: a full progress bar isn't required; it can be as simple as adding a percentage:
```
% ollama create -f Modelfile test
transferrin... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5423/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5423/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/486 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/486/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/486/comments | https://api.github.com/repos/ollama/ollama/issues/486/events | https://github.com/ollama/ollama/pull/486 | 1,886,428,605 | PR_kwDOJ0Z1Ps5Zz6Fe | 486 | fix: retry push on expired token | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-07T19:04:36 | 2023-09-07T20:58:35 | 2023-09-07T20:58:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/486",
"html_url": "https://github.com/ollama/ollama/pull/486",
"diff_url": "https://github.com/ollama/ollama/pull/486.diff",
"patch_url": "https://github.com/ollama/ollama/pull/486.patch",
"merged_at": "2023-09-07T20:58:34"
} | There are two bugs that need to be fixed:
1. `makeRequest` to the `redirectURL` should not supply `regOpts`, since it's not the registry. This erroneously overrides the `Authorization` header, making the request invalid.
2. The upload chunk was not resetting the section correctly. It also should interrupt the gorouti... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/486/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2997 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2997/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2997/comments | https://api.github.com/repos/ollama/ollama/issues/2997/events | https://github.com/ollama/ollama/issues/2997 | 2,175,155,254 | I_kwDOJ0Z1Ps6Bpjw2 | 2,997 | Can I force ollama to produce shorter responses? | {
"login": "Anirudh257",
"id": 16001446,
"node_id": "MDQ6VXNlcjE2MDAxNDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16001446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anirudh257",
"html_url": "https://github.com/Anirudh257",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2024-03-08T02:25:55 | 2024-03-13T17:25:20 | 2024-03-11T22:21:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I want to use the llama2 model available in Ollama to produce shorter outputs. I want to use ``max_new_tokens``, ``max_length``
parameters in https://huggingface.co/docs/transformers/en/main_classes/text_generation. Can I prompt the LLM to generate shorter sequences while keeping the meaning the same?
There a... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2997/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7732 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7732/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7732/comments | https://api.github.com/repos/ollama/ollama/issues/7732/events | https://github.com/ollama/ollama/issues/7732 | 2,670,576,134 | I_kwDOJ0Z1Ps6fLcIG | 7,732 | Why is the generated content missing when reader 1.5b processes html | {
"login": "gubinjie",
"id": 37869445,
"node_id": "MDQ6VXNlcjM3ODY5NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/37869445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gubinjie",
"html_url": "https://github.com/gubinjie",
"followers_url": "https://api.github.com/users/gub... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-11-19T03:05:29 | 2024-11-20T02:46:34 | 2024-11-20T02:46:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
info = '''<body class="body-color">
<div class="p14-product-2-list">
<dl>
<dt>
<a href="https://www." target="_blank" title="651524 200mah " src="https://www.16b1_140.jpg" title="651524"> </a>
</dt>
<dd>
<h4><a href="https://w524.html" target="_blank" title="651524 20</a... | {
"login": "gubinjie",
"id": 37869445,
"node_id": "MDQ6VXNlcjM3ODY5NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/37869445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gubinjie",
"html_url": "https://github.com/gubinjie",
"followers_url": "https://api.github.com/users/gub... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7732/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4928 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4928/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4928/comments | https://api.github.com/repos/ollama/ollama/issues/4928/events | https://github.com/ollama/ollama/issues/4928 | 2,341,543,572 | I_kwDOJ0Z1Ps6LkR6U | 4,928 | Support for Qwen2-7B-Instruct | {
"login": "Leroy-X",
"id": 13515498,
"node_id": "MDQ6VXNlcjEzNTE1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13515498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leroy-X",
"html_url": "https://github.com/Leroy-X",
"followers_url": "https://api.github.com/users/Leroy-... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 0 | 2024-06-08T08:11:22 | 2024-06-08T10:44:15 | 2024-06-08T10:44:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [https://huggingface.co/Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
Thanks. | {
"login": "Leroy-X",
"id": 13515498,
"node_id": "MDQ6VXNlcjEzNTE1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13515498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leroy-X",
"html_url": "https://github.com/Leroy-X",
"followers_url": "https://api.github.com/users/Leroy-... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4928/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/1425 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1425/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1425/comments | https://api.github.com/repos/ollama/ollama/issues/1425/events | https://github.com/ollama/ollama/pull/1425 | 2,031,727,648 | PR_kwDOJ0Z1Ps5heYcg | 1,425 | fix: restore modelfile system in prompt template | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-12-08T00:28:05 | 2023-12-08T19:20:20 | 2023-12-08T19:20:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1425",
"html_url": "https://github.com/ollama/ollama/pull/1425",
"diff_url": "https://github.com/ollama/ollama/pull/1425.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1425.patch",
"merged_at": "2023-12-08T19:20:19"
} | In #1244 this line which sets the modelfile system variable in the template got removed. It must still be there to apply the system template from the modelfile. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1425/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5834 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5834/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5834/comments | https://api.github.com/repos/ollama/ollama/issues/5834/events | https://github.com/ollama/ollama/issues/5834 | 2,421,599,819 | I_kwDOJ0Z1Ps6QVq5L | 5,834 | Windows Client: Provide a way to allow connections to Ollama from web browser origins other than localhost and 0.0.0.0 | {
"login": "Dinkh",
"id": 658372,
"node_id": "MDQ6VXNlcjY1ODM3Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/658372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dinkh",
"html_url": "https://github.com/Dinkh",
"followers_url": "https://api.github.com/users/Dinkh/followers"... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 4 | 2024-07-21T21:01:46 | 2024-11-06T01:03:08 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Running my WebApp on my machine works.
```
import ollama from "ollama/browser"
ollama.list().then(...)
// => http:127.0.0.1:11434/api/tags
```
Running it from my web host does not work
```
ollama.list().then(...)
// options => 204
// get => GET http://127.0.0.1:11434/api/tags net::ERR_FAILED
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5834/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7663 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7663/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7663/comments | https://api.github.com/repos/ollama/ollama/issues/7663/events | https://github.com/ollama/ollama/issues/7663 | 2,657,672,454 | I_kwDOJ0Z1Ps6eaN0G | 7,663 | Ollama API Multiple thread error reporting, abnormal thread time consumption | {
"login": "jamine2024",
"id": 168888350,
"node_id": "U_kgDOChEIHg",
"avatar_url": "https://avatars.githubusercontent.com/u/168888350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamine2024",
"html_url": "https://github.com/jamine2024",
"followers_url": "https://api.github.com/users/jam... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-11-14T05:42:36 | 2024-11-14T18:20:23 | 2024-11-14T18:20:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Error:
msg="failed to decode batch" error="could not find a KV slot for the batch - try reducing the size of the batch or increase the context. code: 1"
time=2024-11-13T20:41:50.757+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
USE PYTHO... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7663/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4158 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4158/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4158/comments | https://api.github.com/repos/ollama/ollama/issues/4158/events | https://github.com/ollama/ollama/issues/4158 | 2,279,264,039 | I_kwDOJ0Z1Ps6H2s8n | 4,158 | On Windows , with version 0.1.33 assembling two models creates a path error. Version 0.1.32 works correctly. | {
"login": "amonpaike",
"id": 884282,
"node_id": "MDQ6VXNlcjg4NDI4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/884282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amonpaike",
"html_url": "https://github.com/amonpaike",
"followers_url": "https://api.github.com/users/amon... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2024-05-05T00:55:59 | 2024-05-06T20:01:40 | 2024-05-06T20:01:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On Windows with version 0.1.33 assembling two models creates a path error.
Version 0.1.32 works correctly.
You can reproduce the bug by assembling this model (there is also a modelfile for ollama): [llava-llama-3-8b-v1_1-gguf](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/tree/mai... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4158/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8656 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8656/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8656/comments | https://api.github.com/repos/ollama/ollama/issues/8656/events | https://github.com/ollama/ollama/pull/8656 | 2,818,099,446 | PR_kwDOJ0Z1Ps6JWxCt | 8,656 | Add DeepSeek R1 in README | {
"login": "zakk616",
"id": 26119949,
"node_id": "MDQ6VXNlcjI2MTE5OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/26119949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zakk616",
"html_url": "https://github.com/zakk616",
"followers_url": "https://api.github.com/users/zakk61... | [] | open | false | null | [] | null | 2 | 2025-01-29T12:38:21 | 2025-01-30T05:37:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8656",
"html_url": "https://github.com/ollama/ollama/pull/8656",
"diff_url": "https://github.com/ollama/ollama/pull/8656.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8656.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8656/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8100 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8100/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8100/comments | https://api.github.com/repos/ollama/ollama/issues/8100/events | https://github.com/ollama/ollama/issues/8100 | 2,740,106,490 | I_kwDOJ0Z1Ps6jUrT6 | 8,100 | Ollama in docker container returns empty content on api/chat stream request made with http POST | {
"login": "MMaicki",
"id": 46030081,
"node_id": "MDQ6VXNlcjQ2MDMwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/46030081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaicki",
"html_url": "https://github.com/MMaicki",
"followers_url": "https://api.github.com/users/MMaick... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-14T19:16:12 | 2024-12-14T22:37:01 | 2024-12-14T22:37:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When an HTTP POST is made with the header "Content-Type: application/json" and a JSON body, the message content is empty:
```
STREAM {:cached nil, :request-time 39709, :repeatable? false, :protocol-version {:name HTTP, :major 1, :minor 1}, :streaming? true, :http-client #object[org.apache.http.imp... | {
"login": "MMaicki",
"id": 46030081,
"node_id": "MDQ6VXNlcjQ2MDMwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/46030081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaicki",
"html_url": "https://github.com/MMaicki",
"followers_url": "https://api.github.com/users/MMaick... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8100/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4603 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4603/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4603/comments | https://api.github.com/repos/ollama/ollama/issues/4603/events | https://github.com/ollama/ollama/issues/4603 | 2,314,239,286 | I_kwDOJ0Z1Ps6J8H02 | 4,603 | Import module faild: pip install -r llm/llama.cpp/requirements.txt | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-05-24T03:04:15 | 2024-05-26T12:35:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Archlinux, python3.12
```
(ollama) ╭─hougelangley at Arch-Legion in ~/ollama on main✘✘✘ 24-05-24 - 11:00:23
╰─(ollama) ⠠⠵ pip install -r llm/llama.cpp/requirements.txt
Collecting numpy~=1.24.4 (from -r llm/llama.cpp/./requirements/requirements-convert.txt (line 1))
Downloading numpy-1.2... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4603/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8621 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8621/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8621/comments | https://api.github.com/repos/ollama/ollama/issues/8621/events | https://github.com/ollama/ollama/pull/8621 | 2,814,253,630 | PR_kwDOJ0Z1Ps6JJqW1 | 8,621 | Small typo in api.md | {
"login": "KeerthiNingegowda",
"id": 31515752,
"node_id": "MDQ6VXNlcjMxNTE1NzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31515752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeerthiNingegowda",
"html_url": "https://github.com/KeerthiNingegowda",
"followers_url": "https... | [] | closed | false | null | [] | null | 1 | 2025-01-27T23:13:57 | 2025-01-28T16:14:31 | 2025-01-28T06:17:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8621",
"html_url": "https://github.com/ollama/ollama/pull/8621",
"diff_url": "https://github.com/ollama/ollama/pull/8621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8621.patch",
"merged_at": null
} | The /api/chat endpoint has 'messages' as a parameter - the subsequent description of 'messages' object is misspelled as 'message'. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8621/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5812 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5812/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5812/comments | https://api.github.com/repos/ollama/ollama/issues/5812/events | https://github.com/ollama/ollama/issues/5812 | 2,420,943,738 | I_kwDOJ0Z1Ps6QTKt6 | 5,812 | Mistral-Nemo support/bug | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-20T15:26:13 | 2024-07-23T18:03:45 | 2024-07-23T18:03:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
(Pythogora) developer@ai:~/PROJECTS$ ~/ollama/ollama run mistral-Nemo-Instruct-2407-f16:latest
Error: llama runner process has terminated: signal: aborted (core dumped) error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected 5120, 5120, got 5120, 4096,... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5812/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6066/comments | https://api.github.com/repos/ollama/ollama/issues/6066/events | https://github.com/ollama/ollama/pull/6066 | 2,436,617,604 | PR_kwDOJ0Z1Ps52z8GT | 6,066 | Patch for Tool Stream Compatibility | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 4 | 2024-07-30T00:00:15 | 2024-09-27T09:29:12 | 2024-08-12T17:31:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6066",
"html_url": "https://github.com/ollama/ollama/pull/6066",
"diff_url": "https://github.com/ollama/ollama/pull/6066.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6066.patch",
"merged_at": null
} | Note: this is not real streaming; it pretty much dumps the stream of objects all at once at completion time.
```
{"model":"mistral","created_at":"2024-07-30T00:11:33.125585Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"Paris, France"}}}]},"done":... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6066/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/6066/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2129 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2129/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2129/comments | https://api.github.com/repos/ollama/ollama/issues/2129/events | https://github.com/ollama/ollama/issues/2129 | 2,092,814,524 | I_kwDOJ0Z1Ps58vdC8 | 2,129 | High CPU and GPU usage, even when noone is interacting with ollama | {
"login": "ThatCoffeeGuy",
"id": 24213618,
"node_id": "MDQ6VXNlcjI0MjEzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/24213618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThatCoffeeGuy",
"html_url": "https://github.com/ThatCoffeeGuy",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-01-21T22:39:34 | 2024-06-21T12:12:43 | 2024-03-11T17:38:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey.
I used ollama a few hours ago... only to notice now that the CPU usage is quite high and the GPU usage is around 30% while the model and web are doing absolutely nothing.
lsof shows 1.8k open files, and the processes keep renewing their PIDs; it's impossible to strace them. What's going on?
![... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2129/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/759 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/759/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/759/comments | https://api.github.com/repos/ollama/ollama/issues/759/events | https://github.com/ollama/ollama/pull/759 | 1,938,616,682 | PR_kwDOJ0Z1Ps5cj4k_ | 759 | deprecate modelfile embed command | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 4 | 2023-10-11T19:53:25 | 2023-10-18T08:02:57 | 2023-10-16T15:07:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/759",
"html_url": "https://github.com/ollama/ollama/pull/759",
"diff_url": "https://github.com/ollama/ollama/pull/759.diff",
"patch_url": "https://github.com/ollama/ollama/pull/759.patch",
"merged_at": "2023-10-16T15:07:37"
} | Embeddings in Modelfiles are a convenient idea, allowing the model to be packaged with embeddings created specifically for it, but the user experience of this implementation isn't up to par.
This change leaves the `/embed` endpoint, but deprecates `EMBED` in the modelfile.
- Ollama doesn't have any models designed ... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/759/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/759/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1938 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1938/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1938/comments | https://api.github.com/repos/ollama/ollama/issues/1938/events | https://github.com/ollama/ollama/issues/1938 | 2,077,794,956 | I_kwDOJ0Z1Ps572KKM | 1,938 | ollama --version 0.1.20 not working | {
"login": "PhilipAmadasun",
"id": 55031054,
"node_id": "MDQ6VXNlcjU1MDMxMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55031054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipAmadasun",
"html_url": "https://github.com/PhilipAmadasun",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 14 | 2024-01-11T23:58:02 | 2024-02-16T16:55:14 | 2024-02-16T16:55:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Our ollama no longer works after upgrading to version `0.1.20`. All the commands, for instance:
```
curl http://localhost:11434/api/chat -d '{
  "model": "llama2",
  "messages": [
    {
      "role": "user",
      "content": "why is the sky blue?"
    }
  ]
}'
```
Just gets stuck and doe... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1938/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8034/comments | https://api.github.com/repos/ollama/ollama/issues/8034/events | https://github.com/ollama/ollama/pull/8034 | 2,731,497,287 | PR_kwDOJ0Z1Ps6Ex7WN | 8,034 | cmd: Add --base2 option to ps to show model sizes in KiB/MiB/GiB | {
"login": "theasp",
"id": 7775024,
"node_id": "MDQ6VXNlcjc3NzUwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7775024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theasp",
"html_url": "https://github.com/theasp",
"followers_url": "https://api.github.com/users/theasp/foll... | [] | open | false | null | [] | null | 0 | 2024-12-11T00:04:55 | 2024-12-11T00:04:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8034",
"html_url": "https://github.com/ollama/ollama/pull/8034",
"diff_url": "https://github.com/ollama/ollama/pull/8034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8034.patch",
"merged_at": null
} | Add `--base2` option to ps to show model sizes in KiB/MiB/GiB. It also shows a decimal place, but I consider this a feature.
```
industrial:~/projects/ollama-src$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
DEFAULT/mistral-small-2409-22b:latest 671ad04c2... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8034/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5137 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5137/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5137/comments | https://api.github.com/repos/ollama/ollama/issues/5137/events | https://github.com/ollama/ollama/issues/5137 | 2,361,755,199 | I_kwDOJ0Z1Ps6MxYY_ | 5,137 | A problem with "ollama create" | {
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-19T08:44:51 | 2024-10-17T20:42:01 | 2024-06-29T23:20:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
If I want to create a model from a GGUF that is split into two parts, how should I write my Modelfile?

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.44 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5137/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7891 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7891/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7891/comments | https://api.github.com/repos/ollama/ollama/issues/7891/events | https://github.com/ollama/ollama/issues/7891 | 2,707,109,591 | I_kwDOJ0Z1Ps6hWzbX | 7,891 | Ubuntu Server 22.04 with `Out of memory` boot failure. | {
"login": "vahid67",
"id": 10948576,
"node_id": "MDQ6VXNlcjEwOTQ4NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10948576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vahid67",
"html_url": "https://github.com/vahid67",
"followers_url": "https://api.github.com/users/vahid6... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-11-30T10:03:21 | 2024-12-04T07:23:39 | 2024-12-04T07:23:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
This bug is related to a boot failure on Ubuntu Server 22.04 with an `Out of memory` error.
I'm trying to install Ollama on Ubuntu Server 22.04 to run a local dedicated server; the specs are a Core i9-12900 + 32GB DDR5 + RTX 3080.
I can install the NVIDIA driver using this method:
`... | {
"login": "vahid67",
"id": 10948576,
"node_id": "MDQ6VXNlcjEwOTQ4NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10948576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vahid67",
"html_url": "https://github.com/vahid67",
"followers_url": "https://api.github.com/users/vahid6... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7891/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7364 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7364/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7364/comments | https://api.github.com/repos/ollama/ollama/issues/7364/events | https://github.com/ollama/ollama/issues/7364 | 2,614,953,776 | I_kwDOJ0Z1Ps6b3Qcw | 7,364 | Data persistence | {
"login": "multiplicity-16",
"id": 186337493,
"node_id": "U_kgDOCxtI1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/186337493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/multiplicity-16",
"html_url": "https://github.com/multiplicity-16",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-10-25T19:35:03 | 2024-12-02T14:44:32 | 2024-12-02T14:44:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I love that I can load extensive public domain resources directly from the internet into the sessions and add hundreds of thousands of data points. I can then run knowledge graph optimizations, as well as precision config changes, all directly in the session. However, I am unable to get any of this data to persist. Other e... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7364/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6909 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6909/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6909/comments | https://api.github.com/repos/ollama/ollama/issues/6909/events | https://github.com/ollama/ollama/issues/6909 | 2,541,079,767 | I_kwDOJ0Z1Ps6XdczX | 6,909 | InternVL 2.0 models | {
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 4 | 2024-09-22T13:23:06 | 2025-01-28T13:33:35 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The models are listed here:
https://huggingface.co/collections/OpenGVLab/internvl-20-667d3961ab5eb12c7ed1463e
1B: https://huggingface.co/OpenGVLab/InternVL2-1B
2B: https://huggingface.co/OpenGVLab/InternVL2-2B
4B: https://huggingface.co/OpenGVLab/InternVL2-4B
8B: https://huggingface.co/OpenGVLab/InternVL2-8B
26... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6909/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2324 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2324/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2324/comments | https://api.github.com/repos/ollama/ollama/issues/2324/events | https://github.com/ollama/ollama/issues/2324 | 2,114,629,685 | I_kwDOJ0Z1Ps5-CrA1 | 2,324 | Running Ollama with mixtral on Macbook pro m1 pro is incredibly slow | {
"login": "azurwastaken",
"id": 30268138,
"node_id": "MDQ6VXNlcjMwMjY4MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/30268138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/azurwastaken",
"html_url": "https://github.com/azurwastaken",
"followers_url": "https://api.github.c... | [] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 5 | 2024-02-02T10:42:15 | 2024-03-11T23:45:53 | 2024-03-11T23:45:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I tried to install ollama on my MacBook today and gave it a try, but the model is taking 10+ minutes just to answer a simple "Hello".
Did I miss something in the config? | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2324/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6143 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6143/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6143/comments | https://api.github.com/repos/ollama/ollama/issues/6143/events | https://github.com/ollama/ollama/issues/6143 | 2,445,334,161 | I_kwDOJ0Z1Ps6RwNaR | 6,143 | Support for AWS Neuron Inferentia GPU | {
"login": "mavwolverine",
"id": 316111,
"node_id": "MDQ6VXNlcjMxNjExMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/316111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mavwolverine",
"html_url": "https://github.com/mavwolverine",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-08-02T16:17:47 | 2024-08-08T20:25:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This would add the ability to run ollama on inf2 instance types in AWS. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6143/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6143/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2571 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2571/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2571/comments | https://api.github.com/repos/ollama/ollama/issues/2571/events | https://github.com/ollama/ollama/issues/2571 | 2,140,803,288 | I_kwDOJ0Z1Ps5_mhDY | 2,571 | Storing models on external drive | {
"login": "shersoni610",
"id": 57876250,
"node_id": "MDQ6VXNlcjU3ODc2MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/57876250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shersoni610",
"html_url": "https://github.com/shersoni610",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-02-18T07:59:48 | 2024-06-23T13:12:01 | 2024-04-12T22:23:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I have limited memory on the OS hard drive. So I want to store all the models
in /usr/share/ollama/.ollama/models/blobs on an external drive. After downloading
the models, I made a softlink as:
sudo ln -s ~/Disk2/Models/Ollama/blob /usr/share/ollama/.ollama/models/blobs
but when I run the code, I ge... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2571/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/235 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/235/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/235/comments | https://api.github.com/repos/ollama/ollama/issues/235/events | https://github.com/ollama/ollama/pull/235 | 1,826,960,797 | PR_kwDOJ0Z1Ps5Wrv8Z | 235 | remove io/ioutil import | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-28T19:07:44 | 2023-07-28T19:19:07 | 2023-07-28T19:19:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/235",
"html_url": "https://github.com/ollama/ollama/pull/235",
"diff_url": "https://github.com/ollama/ollama/pull/235.diff",
"patch_url": "https://github.com/ollama/ollama/pull/235.patch",
"merged_at": "2023-07-28T19:19:06"
} | ioutil is deprecated | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/235/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4688 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4688/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4688/comments | https://api.github.com/repos/ollama/ollama/issues/4688/events | https://github.com/ollama/ollama/issues/4688 | 2,321,906,519 | I_kwDOJ0Z1Ps6KZXtX | 4,688 | Can't down the Ollama .exe file for Windows | {
"login": "Tarhex",
"id": 56320309,
"node_id": "MDQ6VXNlcjU2MzIwMzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/56320309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tarhex",
"html_url": "https://github.com/Tarhex",
"followers_url": "https://api.github.com/users/Tarhex/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-28T20:55:02 | 2024-05-28T21:47:09 | 2024-05-28T21:47:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I can't download the .exe file for Windows. I have tried everything I could, but with no success.
The link: https://ollama.com/download/OllamaSetup.exe doesn't work.
### OS
Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "Tarhex",
"id": 56320309,
"node_id": "MDQ6VXNlcjU2MzIwMzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/56320309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tarhex",
"html_url": "https://github.com/Tarhex",
"followers_url": "https://api.github.com/users/Tarhex/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4688/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4087 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4087/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4087/comments | https://api.github.com/repos/ollama/ollama/issues/4087/events | https://github.com/ollama/ollama/pull/4087 | 2,274,042,292 | PR_kwDOJ0Z1Ps5uR6MZ | 4,087 | types/model: fix name for hostport | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-01T19:15:25 | 2024-05-01T19:42:08 | 2024-05-01T19:42:07 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4087",
"html_url": "https://github.com/ollama/ollama/pull/4087",
"diff_url": "https://github.com/ollama/ollama/pull/4087.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4087.patch",
"merged_at": "2024-05-01T19:42:07"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4087/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/915 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/915/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/915/comments | https://api.github.com/repos/ollama/ollama/issues/915/events | https://github.com/ollama/ollama/issues/915 | 1,964,007,421 | I_kwDOJ0Z1Ps51EF_9 | 915 | Cannot download models behind a proxy | {
"login": "beettlle",
"id": 428052,
"node_id": "MDQ6VXNlcjQyODA1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/428052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beettlle",
"html_url": "https://github.com/beettlle",
"followers_url": "https://api.github.com/users/beettll... | [] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 14 | 2023-10-26T17:14:02 | 2024-04-03T06:17:10 | 2023-11-17T00:00:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Seems like #769 doesn't catch all the corner cases when users are behind a proxy. Both @reactivetype and I can reproduce in `0.1.3` and `0.1.5`.
```
$ ollama -v
ollama version 0.1.5
$ ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/late... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/915/timeline | null | completed | false |