| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | sub_issues_summary (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/509 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/509/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/509/comments | https://api.github.com/repos/ollama/ollama/issues/509/events | https://github.com/ollama/ollama/pull/509 | 1,891,425,301 | PR_kwDOJ0Z1Ps5aEmqk | 509 | support for packaging in multiple cuda runners | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-11T23:38:14 | 2023-09-14T19:08:15 | 2023-09-14T19:08:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/509",
"html_url": "https://github.com/ollama/ollama/pull/509",
"diff_url": "https://github.com/ollama/ollama/pull/509.diff",
"patch_url": "https://github.com/ollama/ollama/pull/509.patch",
"merged_at": "2023-09-14T19:08:13"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/509/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5023 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5023/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5023/comments | https://api.github.com/repos/ollama/ollama/issues/5023/events | https://github.com/ollama/ollama/issues/5023 | 2,351,350,308 | I_kwDOJ0Z1Ps6MJsIk | 5,023 | DeepSeek-V2-Lite-Chat - ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported | {
"login": "OldishCoder",
"id": 142182836,
"node_id": "U_kgDOCHmJtA",
"avatar_url": "https://avatars.githubusercontent.com/u/142182836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OldishCoder",
"html_url": "https://github.com/OldishCoder",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-13T14:47:48 | 2024-06-30T05:41:16 | 2024-06-13T16:39:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I load DeepSeek-V2 I see this message:
`ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses | tid="139815279423488" timestamp=1718280244`
I do not know if it's... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5023/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7982 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7982/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7982/comments | https://api.github.com/repos/ollama/ollama/issues/7982/events | https://github.com/ollama/ollama/issues/7982 | 2,724,302,558 | I_kwDOJ0Z1Ps6iYY7e | 7,982 | exit status 2 | {
"login": "bb159",
"id": 128159516,
"node_id": "U_kgDOB6OPHA",
"avatar_url": "https://avatars.githubusercontent.com/u/128159516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bb159",
"html_url": "https://github.com/bb159",
"followers_url": "https://api.github.com/users/bb159/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-12-07T03:25:46 | 2024-12-23T08:06:16 | 2024-12-23T08:06:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
exit status 2
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.5.1 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7982/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/773 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/773/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/773/comments | https://api.github.com/repos/ollama/ollama/issues/773/events | https://github.com/ollama/ollama/pull/773 | 1,940,896,513 | PR_kwDOJ0Z1Ps5crty9 | 773 | add how to quantize doc | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-10-12T22:35:56 | 2023-10-15T21:06:47 | 2023-10-14T15:29:39 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/773",
"html_url": "https://github.com/ollama/ollama/pull/773",
"diff_url": "https://github.com/ollama/ollama/pull/773.diff",
"patch_url": "https://github.com/ollama/ollama/pull/773.patch",
"merged_at": "2023-10-14T15:29:39"
} | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/773/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6276/comments | https://api.github.com/repos/ollama/ollama/issues/6276/events | https://github.com/ollama/ollama/pull/6276 | 2,457,229,099 | PR_kwDOJ0Z1Ps536Xcl | 6,276 | feat: K/V cache quantisation (massive vRAM improvement!) | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [] | closed | false | null | [] | null | 0 | 2024-08-09T06:49:35 | 2024-08-09T07:18:52 | 2024-08-09T07:18:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6276",
"html_url": "https://github.com/ollama/ollama/pull/6276",
"diff_url": "https://github.com/ollama/ollama/pull/6276.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6276.patch",
"merged_at": null
} | This PR introduces optional K/V (context) cache quantisation.
(PR recreated after Github broke https://github.com/ollama/ollama/pull/5894 🤦)
In addition the deprecated `F16KV` parameter has been removed, if a user wishes for some reason to run the KV at f32, they can provide that as an option.
## Impact
- ... | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6276/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3159 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3159/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3159/comments | https://api.github.com/repos/ollama/ollama/issues/3159/events | https://github.com/ollama/ollama/issues/3159 | 2,187,627,079 | I_kwDOJ0Z1Ps6CZIpH | 3,159 | A way to communicate reasons for low performance to users of CLI & API | {
"login": "easp",
"id": 414705,
"node_id": "MDQ6VXNlcjQxNDcwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/414705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easp",
"html_url": "https://github.com/easp",
"followers_url": "https://api.github.com/users/easp/followers",
... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 1 | 2024-03-15T03:02:34 | 2024-05-18T03:18:55 | 2024-05-18T03:18:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | People are often concerned about what they perceive to be low performance and/or whether Ollama is making optimal use of their RAM/VRAM/GPU/CPU cores. This comes up frequently in github issues, the main Discord channel and the Discord help channel.
I think a lot of these queries could be avoided if Ollama communicat... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3159/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3159/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5128 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5128/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5128/comments | https://api.github.com/repos/ollama/ollama/issues/5128/events | https://github.com/ollama/ollama/pull/5128 | 2,361,020,058 | PR_kwDOJ0Z1Ps5y4-lV | 5,128 | Fix levelzero empty symbol detect | {
"login": "zhewang1-intc",
"id": 72838274,
"node_id": "MDQ6VXNlcjcyODM4Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/72838274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhewang1-intc",
"html_url": "https://github.com/zhewang1-intc",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2024-06-19T01:30:27 | 2024-07-02T05:46:36 | 2024-06-19T15:33:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5128",
"html_url": "https://github.com/ollama/ollama/pull/5128",
"diff_url": "https://github.com/ollama/ollama/pull/5128.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5128.patch",
"merged_at": "2024-06-19T15:33:16"
} | we notice many users report ollama crash on Windows platform when ollama try to discover Intel GPUs after introducing Intel GPUs support.
this issue may occur on some older Intel CPUs which before 11Gen.
this is due to the driver library of iGPUs before 11Gen processors don't have the symbols which initial level-zer... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5128/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8027 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8027/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8027/comments | https://api.github.com/repos/ollama/ollama/issues/8027/events | https://github.com/ollama/ollama/issues/8027 | 2,730,726,874 | I_kwDOJ0Z1Ps6iw5Xa | 8,027 | ollama_llama_server not signed | {
"login": "disenchant",
"id": 686611,
"node_id": "MDQ6VXNlcjY4NjYxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/686611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/disenchant",
"html_url": "https://github.com/disenchant",
"followers_url": "https://api.github.com/users/d... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677279472,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-12-10T17:24:04 | 2024-12-10T18:29:26 | 2024-12-10T18:29:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The `ollama_llama_server` binary for macOS doesn't seem to be signed. This causes problems e.g. when using binary authorization tools like [Santa](https://github.com/northpolesec/santa) to allowlist the usage of Ollama on machines.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama versi... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8027/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/527 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/527/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/527/comments | https://api.github.com/repos/ollama/ollama/issues/527/events | https://github.com/ollama/ollama/pull/527 | 1,895,416,452 | PR_kwDOJ0Z1Ps5aSFdO | 527 | Update API docs | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-09-14T00:00:30 | 2023-09-14T15:51:27 | 2023-09-14T15:51:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/527",
"html_url": "https://github.com/ollama/ollama/pull/527",
"diff_url": "https://github.com/ollama/ollama/pull/527.diff",
"patch_url": "https://github.com/ollama/ollama/pull/527.patch",
"merged_at": "2023-09-14T15:51:26"
} | cleanup docs, add show and push. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/527/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1536 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1536/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1536/comments | https://api.github.com/repos/ollama/ollama/issues/1536/events | https://github.com/ollama/ollama/issues/1536 | 2,042,796,710 | I_kwDOJ0Z1Ps55wpqm | 1,536 | Ability to keep a model in memory for longer | {
"login": "helloimcx",
"id": 27777511,
"node_id": "MDQ6VXNlcjI3Nzc3NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/27777511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloimcx",
"html_url": "https://github.com/helloimcx",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 21 | 2023-12-15T02:45:45 | 2025-01-18T05:04:20 | 2024-01-26T23:53:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | is there a way to keep the model in memory or gpu memory ? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1536/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1536/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/857 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/857/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/857/comments | https://api.github.com/repos/ollama/ollama/issues/857/events | https://github.com/ollama/ollama/issues/857 | 1,954,737,189 | I_kwDOJ0Z1Ps50guwl | 857 | Manually download and upload models | {
"login": "dawnpatrol04",
"id": 10486066,
"node_id": "MDQ6VXNlcjEwNDg2MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/10486066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dawnpatrol04",
"html_url": "https://github.com/dawnpatrol04",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2023-10-20T17:22:32 | 2024-08-28T19:19:34 | 2023-10-30T22:07:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | hey guys. Having issues getting with this part a work with corporate proxy: docker exec -it ollama ollama run llama2.
2 issues.
1) When I set a proxy something breaks.
2) model url / cert not allowed / blocked. To work around this I will need to manually download model files upload to the container.
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/857/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/857/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/878 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/878/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/878/comments | https://api.github.com/repos/ollama/ollama/issues/878/events | https://github.com/ollama/ollama/issues/878 | 1,957,264,872 | I_kwDOJ0Z1Ps50qX3o | 878 | llama2 image not really llama2? | {
"login": "sharpner",
"id": 574362,
"node_id": "MDQ6VXNlcjU3NDM2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/574362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sharpner",
"html_url": "https://github.com/sharpner",
"followers_url": "https://api.github.com/users/sharpne... | [] | closed | false | null | [] | null | 1 | 2023-10-23T14:07:47 | 2023-10-24T06:31:52 | 2023-10-24T06:31:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Today I tested all models listed on this page: https://ollama.ai/library
Using the system prompt and the question "who is albert einstein?":
```
I want you to act like Ludwig van Beethoven.
I want you to respond and answer like Ludwig van Beethoven, using the tone, manner and vocabulary Ludwig van Beethoven woul... | {
"login": "sharpner",
"id": 574362,
"node_id": "MDQ6VXNlcjU3NDM2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/574362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sharpner",
"html_url": "https://github.com/sharpner",
"followers_url": "https://api.github.com/users/sharpne... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/878/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1291 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1291/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1291/comments | https://api.github.com/repos/ollama/ollama/issues/1291/events | https://github.com/ollama/ollama/issues/1291 | 2,013,270,341 | I_kwDOJ0Z1Ps54ABFF | 1,291 | ollama runpod serverless template | {
"login": "devsaturn",
"id": 43364322,
"node_id": "MDQ6VXNlcjQzMzY0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/43364322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devsaturn",
"html_url": "https://github.com/devsaturn",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 6 | 2023-11-27T22:56:18 | 2023-12-11T15:19:34 | 2023-12-11T15:19:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | is there an example of deploy a model using an ollama in a runpod serverless ? | {
"login": "devsaturn",
"id": 43364322,
"node_id": "MDQ6VXNlcjQzMzY0MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/43364322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devsaturn",
"html_url": "https://github.com/devsaturn",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1291/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2208 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2208/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2208/comments | https://api.github.com/repos/ollama/ollama/issues/2208/events | https://github.com/ollama/ollama/issues/2208 | 2,102,429,567 | I_kwDOJ0Z1Ps59UId_ | 2,208 | Feature: API error response in case of exceeding context length | {
"login": "Jurik-001",
"id": 123458520,
"node_id": "U_kgDOB1vT2A",
"avatar_url": "https://avatars.githubusercontent.com/u/123458520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jurik-001",
"html_url": "https://github.com/Jurik-001",
"followers_url": "https://api.github.com/users/Jurik-... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 0 | 2024-01-26T15:41:30 | 2024-11-06T19:02:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be good to get a error response back from the API if the prompt exceeds the context length of the model, similar to openai API. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2208/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4026 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4026/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4026/comments | https://api.github.com/repos/ollama/ollama/issues/4026/events | https://github.com/ollama/ollama/issues/4026 | 2,269,143,895 | I_kwDOJ0Z1Ps6HQGNX | 4,026 | Llama 3 BPE tokenization needs improvement | {
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder54... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-29T14:04:06 | 2024-05-13T15:53:02 | 2024-05-11T13:41:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
[This PR](https://github.com/ggerganov/llama.cpp/pull/6920) just merged on llama.cpp, which contained important improvements to how tokenization worked for Llama 3 and other models. An example of the issue is [noted here](https://github.com/ggerganov/llama.cpp/issues/6914).
Hopefully ollama c... | {
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder54... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4026/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4026/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6591 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6591/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6591/comments | https://api.github.com/repos/ollama/ollama/issues/6591/events | https://github.com/ollama/ollama/issues/6591 | 2,500,418,861 | I_kwDOJ0Z1Ps6VCV0t | 6,591 | Ollama failing with `CUDA error: PTX JIT compiler library not found` | {
"login": "leobenkel",
"id": 4960573,
"node_id": "MDQ6VXNlcjQ5NjA1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4960573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobenkel",
"html_url": "https://github.com/leobenkel",
"followers_url": "https://api.github.com/users/le... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-09-02T09:06:01 | 2024-09-26T19:55:34 | 2024-09-03T21:14:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The GPU seems to be detected:
```
CUDA driver version: 11.4
time=2024-08-30T16:57:23.032Z level=DEBUG source=gpu.go:123 msg="detected GPUs" count=1 library=/usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1
[GPU-d90c0d9d-5e59-56b1-b519-6439b1d74328] CUDA totalMem 30990 mb
[GPU-d90c0d9d-5e5... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6591/timeline | null | completed | false |
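The `CUDA error: PTX JIT compiler library not found` report above typically means the driver's PTX JIT library (`libnvidia-ptxjitcompiler`) is missing or mismatched with the installed CUDA driver. A minimal diagnostic sketch in Python — the library names and the `ctypes` probe are illustrative assumptions, not Ollama's own check:

```python
import ctypes
import ctypes.util

def has_ptx_jit_library() -> bool:
    """Return True if the NVIDIA PTX JIT compiler library can be loaded.

    A False result on a CUDA machine suggests an incomplete driver
    install (e.g. the libnvidia-ptxjitcompiler package is missing).
    """
    candidates = ["libnvidia-ptxjitcompiler.so.1", "libnvidia-ptxjitcompiler.so"]
    found = ctypes.util.find_library("nvidia-ptxjitcompiler")
    if found:
        candidates.insert(0, found)
    for name in candidates:
        try:
            ctypes.CDLL(name)  # dlopen succeeds only if the library is installed
            return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    print("PTX JIT library present:", has_ptx_jit_library())
```

On machines without NVIDIA drivers this simply returns `False`; it never raises.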
https://api.github.com/repos/ollama/ollama/issues/3014 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3014/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3014/comments | https://api.github.com/repos/ollama/ollama/issues/3014/events | https://github.com/ollama/ollama/pull/3014 | 2,176,931,782 | PR_kwDOJ0Z1Ps5pIVTe | 3,014 | decode ggla | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-08T23:47:08 | 2024-03-09T00:14:54 | 2024-03-09T00:14:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3014",
"html_url": "https://github.com/ollama/ollama/pull/3014",
"diff_url": "https://github.com/ollama/ollama/pull/3014.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3014.patch",
"merged_at": "2024-03-09T00:14:53"
} | split from #2926 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3014/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7470 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7470/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7470/comments | https://api.github.com/repos/ollama/ollama/issues/7470/events | https://github.com/ollama/ollama/issues/7470 | 2,630,256,963 | I_kwDOJ0Z1Ps6cxolD | 7,470 | [Model request] The First-Ever Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection | {
"login": "monkeycc",
"id": 6490927,
"node_id": "MDQ6VXNlcjY0OTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6490927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monkeycc",
"html_url": "https://github.com/monkeycc",
"followers_url": "https://api.github.com/users/monke... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-11-02T06:25:49 | 2024-11-02T06:25:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/jam-cc/MMAD

The First-Ever Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7470/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8652 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8652/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8652/comments | https://api.github.com/repos/ollama/ollama/issues/8652/events | https://github.com/ollama/ollama/issues/8652 | 2,817,696,992 | I_kwDOJ0Z1Ps6n8qTg | 8,652 | Could not install deepseek-r1:7b version | {
"login": "chinmayc0707",
"id": 139462368,
"node_id": "U_kgDOCFAG4A",
"avatar_url": "https://avatars.githubusercontent.com/u/139462368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chinmayc0707",
"html_url": "https://github.com/chinmayc0707",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 1 | 2025-01-29T09:42:18 | 2025-01-30T08:54:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/aa/aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%2... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8652/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2691 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2691/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2691/comments | https://api.github.com/repos/ollama/ollama/issues/2691/events | https://github.com/ollama/ollama/issues/2691 | 2,149,719,194 | I_kwDOJ0Z1Ps6AIhya | 2,691 | Not an issue, just a thank you. | {
"login": "robert-mcdermott",
"id": 7399563,
"node_id": "MDQ6VXNlcjczOTk1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7399563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robert-mcdermott",
"html_url": "https://github.com/robert-mcdermott",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | 0 | 2024-02-22T18:41:56 | 2024-02-22T18:47:11 | 2024-02-22T18:47:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The Ollama project is awesome, thanks for all the work you do. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2691/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2691/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4804 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4804/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4804/comments | https://api.github.com/repos/ollama/ollama/issues/4804/events | https://github.com/ollama/ollama/pull/4804 | 2,332,456,590 | PR_kwDOJ0Z1Ps5xXsXj | 4,804 | Created a couple C# example applications using OllamaSharp | {
"login": "JerrettDavis",
"id": 2610199,
"node_id": "MDQ6VXNlcjI2MTAxOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerrettDavis",
"html_url": "https://github.com/JerrettDavis",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 0 | 2024-06-04T03:27:00 | 2024-06-14T03:34:18 | 2024-06-14T03:34:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4804",
"html_url": "https://github.com/ollama/ollama/pull/4804",
"diff_url": "https://github.com/ollama/ollama/pull/4804.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4804.patch",
"merged_at": null
} | I've added C# examples leveraging the [OllamaSharp](https://github.com/awaescher/OllamaSharp) wrapper library. | {
"login": "JerrettDavis",
"id": 2610199,
"node_id": "MDQ6VXNlcjI2MTAxOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerrettDavis",
"html_url": "https://github.com/JerrettDavis",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4804/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1129 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1129/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1129/comments | https://api.github.com/repos/ollama/ollama/issues/1129/events | https://github.com/ollama/ollama/pull/1129 | 1,993,464,753 | PR_kwDOJ0Z1Ps5fcwS1 | 1,129 | fix: parse ggml model types | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 4 | 2023-11-14T19:56:54 | 2023-12-06T23:54:41 | 2023-11-22T21:09:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1129",
"html_url": "https://github.com/ollama/ollama/pull/1129",
"diff_url": "https://github.com/ollama/ollama/pull/1129.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1129.patch",
"merged_at": null
} | I'm kind of surprised this didn't come up earlier; I noticed this while importing a less common ggml model type. Model parsing was returning `nil` for uncommon types, which results in a nil pointer dereference. Fix this by moving model parsing up a level and trying to decode as a llama model in the default case. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1129/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7337/comments | https://api.github.com/repos/ollama/ollama/issues/7337/events | https://github.com/ollama/ollama/pull/7337 | 2,609,976,414 | PR_kwDOJ0Z1Ps5_sAY- | 7,337 | Update README.md - Revised vnc-lm project description | {
"login": "jake83741",
"id": 125723241,
"node_id": "U_kgDOB35iaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125723241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jake83741",
"html_url": "https://github.com/jake83741",
"followers_url": "https://api.github.com/users/jake83... | [] | closed | false | null | [] | null | 0 | 2024-10-23T22:39:29 | 2024-10-27T22:15:54 | 2024-10-27T22:15:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7337",
"html_url": "https://github.com/ollama/ollama/pull/7337",
"diff_url": "https://github.com/ollama/ollama/pull/7337.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7337.patch",
"merged_at": null
} | Hi,
This is just a small change updating the description for my project, [vnc-lm](https://github.com/jake83741/vnc-lm).
Thanks,
Jake | {
"login": "jake83741",
"id": 125723241,
"node_id": "U_kgDOB35iaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125723241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jake83741",
"html_url": "https://github.com/jake83741",
"followers_url": "https://api.github.com/users/jake83... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7337/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2346 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2346/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2346/comments | https://api.github.com/repos/ollama/ollama/issues/2346/events | https://github.com/ollama/ollama/issues/2346 | 2,117,126,222 | I_kwDOJ0Z1Ps5-MMhO | 2,346 | Use llama2-chinese or yi to ask questions in Chinese. After about 15 requests, responses can no longer be returned. | {
"login": "wyang-code",
"id": 130064485,
"node_id": "U_kgDOB8CgZQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130064485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wyang-code",
"html_url": "https://github.com/wyang-code",
"followers_url": "https://api.github.com/users/wya... | [] | closed | false | null | [] | null | 1 | 2024-02-04T14:28:08 | 2024-02-20T04:04:26 | 2024-02-20T04:04:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Use llama2-chinese or yi to ask questions in Chinese. After about 15 requests, responses can no longer be returned.
<img width="1494" alt="image" src="https://github.com/ollama/ollama/assets/130064485/b850c5c6-d09e-472b-9592-5f2bb115c2ed">
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2346/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5810 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5810/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5810/comments | https://api.github.com/repos/ollama/ollama/issues/5810/events | https://github.com/ollama/ollama/issues/5810 | 2,420,883,131 | I_kwDOJ0Z1Ps6QS767 | 5,810 | Tinyllama has issues understanding the Modelfile | {
"login": "DuilioPerez",
"id": 135564120,
"node_id": "U_kgDOCBSLWA",
"avatar_url": "https://avatars.githubusercontent.com/u/135564120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuilioPerez",
"html_url": "https://github.com/DuilioPerez",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-20T12:40:58 | 2024-09-12T22:17:53 | 2024-09-12T22:17:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I start a conversation with Tinyllama with a greeting, it answers with a revision of the Modelfile. That's strange. Here is an example:

### OS
Linux
### G... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5810/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3593 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3593/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3593/comments | https://api.github.com/repos/ollama/ollama/issues/3593/events | https://github.com/ollama/ollama/issues/3593 | 2,237,573,743 | I_kwDOJ0Z1Ps6FXqpv | 3,593 | Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:59108->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host. | {
"login": "liquorLiu",
"id": 125868659,
"node_id": "U_kgDOB4Cacw",
"avatar_url": "https://avatars.githubusercontent.com/u/125868659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liquorLiu",
"html_url": "https://github.com/liquorLiu",
"followers_url": "https://api.github.com/users/liquor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 13 | 2024-04-11T12:06:28 | 2024-05-21T18:30:34 | 2024-05-21T18:30:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | here is my app.log:
time=2024-04-11T19:53:53.911+08:00 level=INFO source=logging.go:45 msg="ollama app started"
time=2024-04-11T19:53:53.940+08:00 level=DEBUG source=lifecycle.go:32 msg="starting callback loop"
time=2024-04-11T19:53:53.940+08:00 level=DEBUG source=store.go:61 msg="loaded existing store C:\\Users\\17... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3593/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2464 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2464/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2464/comments | https://api.github.com/repos/ollama/ollama/issues/2464/events | https://github.com/ollama/ollama/issues/2464 | 2,130,347,351 | I_kwDOJ0Z1Ps5--oVX | 2,464 | Error: invalid version -- when attempting to run llava | {
"login": "gr-b",
"id": 18386065,
"node_id": "MDQ6VXNlcjE4Mzg2MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18386065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gr-b",
"html_url": "https://github.com/gr-b",
"followers_url": "https://api.github.com/users/gr-b/followers"... | [] | closed | false | null | [] | null | 2 | 2024-02-12T15:15:06 | 2024-02-12T15:30:58 | 2024-02-12T15:30:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I attempted to install and run llava on an m1 mac and got the following:
~ % ollama run llava
pulling manifest
pulling 170370233dd5... 100% |███████████████| (4.1/4.1 GB, 46 MB/s)
pulling 72d6f08a42f6... 100% |█████████████████| (624/624 MB, 49 MB/s)
pulling 43070e2d4e53... 100% |████████████... | {
"login": "gr-b",
"id": 18386065,
"node_id": "MDQ6VXNlcjE4Mzg2MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18386065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gr-b",
"html_url": "https://github.com/gr-b",
"followers_url": "https://api.github.com/users/gr-b/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2464/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8005 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8005/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8005/comments | https://api.github.com/repos/ollama/ollama/issues/8005/events | https://github.com/ollama/ollama/issues/8005 | 2,725,801,345 | I_kwDOJ0Z1Ps6ieG2B | 8,005 | Why is OllamaSetup.exe so large | {
"login": "xgdgsc",
"id": 1189869,
"node_id": "MDQ6VXNlcjExODk4Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xgdgsc",
"html_url": "https://github.com/xgdgsc",
"followers_url": "https://api.github.com/users/xgdgsc/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-12-09T02:38:40 | 2024-12-18T09:14:28 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I also tried downloading [ollama-windows-arm64.zip](https://github.com/ollama/ollama/releases/download/v0.5.1/ollama-windows-arm64.zip) and replacing some files in my user folder which doesn' t fix the tray icon still notify the update. Could there be a smaller OllamaSetup.exe ?
This combined with the tray "restart... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8005/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8057 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8057/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8057/comments | https://api.github.com/repos/ollama/ollama/issues/8057/events | https://github.com/ollama/ollama/issues/8057 | 2,734,339,182 | I_kwDOJ0Z1Ps6i-rRu | 8,057 | where can I find ollama model path? | {
"login": "keyuchen21",
"id": 54015474,
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keyuchen21",
"html_url": "https://github.com/keyuchen21",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-12T00:22:51 | 2024-12-14T06:28:20 | 2024-12-14T06:28:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
| {
"login": "keyuchen21",
"id": 54015474,
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keyuchen21",
"html_url": "https://github.com/keyuchen21",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8057/timeline | null | completed | false |
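For the model-path question above: by default Ollama stores model blobs and manifests under `~/.ollama/models`, and the `OLLAMA_MODELS` environment variable overrides that location. A small sketch resolving the effective path; the default-location logic here follows Ollama's documented defaults but is written as an illustration, not taken from Ollama's source:

```python
import os
from pathlib import Path

def ollama_models_dir() -> Path:
    """Resolve where Ollama stores downloaded models.

    OLLAMA_MODELS, when set, takes precedence; otherwise the default
    is ~/.ollama/models (on Windows, %USERPROFILE%\\.ollama\\models).
    """
    override = os.environ.get("OLLAMA_MODELS")
    if override:
        return Path(override)
    return Path.home() / ".ollama" / "models"

if __name__ == "__main__":
    print(ollama_models_dir())
```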
https://api.github.com/repos/ollama/ollama/issues/5458 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5458/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5458/comments | https://api.github.com/repos/ollama/ollama/issues/5458/events | https://github.com/ollama/ollama/issues/5458 | 2,388,291,561 | I_kwDOJ0Z1Ps6OWm_p | 5,458 | Enable Row Split Support | {
"login": "datacrystals",
"id": 49183477,
"node_id": "MDQ6VXNlcjQ5MTgzNDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/49183477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datacrystals",
"html_url": "https://github.com/datacrystals",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-07-03T10:40:24 | 2024-07-03T10:40:24 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For multi-GPU setups, enabling row split can yield very significant performance improvements. On my machine with 3xP40, I was able to get a speedup from ~3t/s to ~10t/s.
Unfortunately, it doesn't look like there's any way to get this flag (`-sm row`) down to llama.cpp, which should otherwise support this.
Wonderin... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5458/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5458/timeline | null | null | false |
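The feature request above asks for a way to pass llama.cpp's split-mode flag (`-sm row`) through Ollama. For context, llama.cpp accepts split modes `none`, `layer`, and `row`; a hedged sketch of assembling such a server command line — the binary name and flag set here are illustrative assumptions:

```python
def llama_server_args(model_path: str, split_mode: str = "row",
                      n_gpu_layers: int = 99) -> list:
    """Assemble an argv for a llama.cpp server with a tensor split mode.

    split_mode is one of "none", "layer", or "row"; "row" splits
    individual tensors across GPUs, which the issue above reports as
    much faster on a 3xP40 machine.
    """
    if split_mode not in ("none", "layer", "row"):
        raise ValueError("unknown split mode: " + split_mode)
    return [
        "llama-server",        # illustrative binary name
        "--model", model_path,
        "-ngl", str(n_gpu_layers),
        "-sm", split_mode,
    ]

if __name__ == "__main__":
    print(" ".join(llama_server_args("/models/model.gguf")))
```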
https://api.github.com/repos/ollama/ollama/issues/1773 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1773/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1773/comments | https://api.github.com/repos/ollama/ollama/issues/1773/events | https://github.com/ollama/ollama/pull/1773 | 2,064,567,365 | PR_kwDOJ0Z1Ps5jKrqy | 1,773 | Add embeddings to API | {
"login": "bmurray",
"id": 574917,
"node_id": "MDQ6VXNlcjU3NDkxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/574917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmurray",
"html_url": "https://github.com/bmurray",
"followers_url": "https://api.github.com/users/bmurray/fo... | [] | closed | false | null | [] | null | 0 | 2024-01-03T19:52:52 | 2024-01-04T20:00:53 | 2024-01-04T20:00:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1773",
"html_url": "https://github.com/ollama/ollama/pull/1773",
"diff_url": "https://github.com/ollama/ollama/pull/1773.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1773.patch",
"merged_at": "2024-01-04T20:00:52"
} | The API is missing the embeddings endpoint, so this adds it. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1773/timeline | null | null | true |
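The merged PR above added an embeddings endpoint to Ollama's HTTP API. A minimal client sketch in Python, assuming a local server on the default port 11434 and the documented `POST /api/embeddings` shape with `model` and `prompt` fields; the request is only constructed here, not sent:

```python
import json
import urllib.request

def build_embeddings_request(model: str, prompt: str,
                             host: str = "http://127.0.0.1:11434") -> urllib.request.Request:
    """Build (but do not send) a request for Ollama's embeddings endpoint."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url=host + "/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_embeddings_request("all-minilm", "hello world")
    # Against a running server, the response body contains an "embedding" list:
    #   with urllib.request.urlopen(req) as resp:
    #       vec = json.loads(resp.read())["embedding"]
    print(req.full_url)
```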
https://api.github.com/repos/ollama/ollama/issues/636 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/636/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/636/comments | https://api.github.com/repos/ollama/ollama/issues/636/events | https://github.com/ollama/ollama/issues/636 | 1,918,158,040 | I_kwDOJ0Z1Ps5yVMTY | 636 | Unable to create temp file for runner on Windows | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-09-28T19:43:44 | 2023-09-28T19:45:52 | 2023-09-28T19:45:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running Ollama on Windows, the llama.cpp runner may fail to be copied into the temp directory.
```
llama runner not found: CreateFile C:\Users\...\AppData\Local\Temp\ollama432423342\...\server.exe: The system cannot find the specified path.
``` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/636/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5034/comments | https://api.github.com/repos/ollama/ollama/issues/5034/events | https://github.com/ollama/ollama/pull/5034 | 2,352,066,933 | PR_kwDOJ0Z1Ps5yacph | 5,034 | Re-introduce the `llama` package | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-06-13T21:06:18 | 2024-10-08T15:53:57 | 2024-10-08T15:53:55 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5034",
"html_url": "https://github.com/ollama/ollama/pull/5034",
"diff_url": "https://github.com/ollama/ollama/pull/5034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5034.patch",
"merged_at": "2024-10-08T15:53:54"
} | This PR brings back the `llama` package, making it possible to call llama.cpp and ggml APIs from Go directly via CGo. This has a few advantages:
1. C APIs can be called directly from Go without needing to use the previous "server" REST API
2. On macOS and for CPU builds on Linux and Windows, Ollama can be built wit... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5034/reactions",
"total_count": 9,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5034/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/791 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/791/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/791/comments | https://api.github.com/repos/ollama/ollama/issues/791/events | https://github.com/ollama/ollama/issues/791 | 1,943,655,360 | I_kwDOJ0Z1Ps5z2dPA | 791 | Sending and receiving Context with ollama.call() | {
"login": "rossudev",
"id": 14206857,
"node_id": "MDQ6VXNlcjE0MjA2ODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/14206857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rossudev",
"html_url": "https://github.com/rossudev",
"followers_url": "https://api.github.com/users/ros... | [] | closed | false | null | [] | null | 1 | 2023-10-15T02:28:30 | 2023-10-16T18:05:11 | 2023-10-16T18:05:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is the input and output of context supported yet with ollama.call()?
JavaScript:
```javascript
import { Ollama } from "langchain/llms/ollama";
const ollama = new Ollama({
baseUrl: "http://localhost:11434",
model: model,
temperature: parseFloat(temperature),
topP: parseFloat(topp)
});
response = await ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/791/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7596 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7596/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7596/comments | https://api.github.com/repos/ollama/ollama/issues/7596/events | https://github.com/ollama/ollama/issues/7596 | 2,647,219,698 | I_kwDOJ0Z1Ps6dyV3y | 7,596 | unable to get images to be read in ollama 0.4.1 for llama3.2-vision | {
"login": "sivaramn",
"id": 1205075,
"node_id": "MDQ6VXNlcjEyMDUwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1205075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sivaramn",
"html_url": "https://github.com/sivaramn",
"followers_url": "https://api.github.com/users/sivar... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-11-10T13:04:45 | 2024-11-17T14:00:26 | 2024-11-17T14:00:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is my setup on Windows 11

Whatever image I upload, I get variations of the same message as shown. What am I doing incorrectly? The PNG file is 145 KB, if it matters.
 Xeon(R) Silver 4410Y
GPU: NVIDIA L40S-24Q 24GB
DRAM: 32GB
OS: Ubuntu 24.04.1
GPU Driver: vWS 550.127.05 / 550.90.07
CUDA: 12.4
Ollama version: v0.5.1/v0.4.7/v0.3.14
Model: llama3.1:8b/Gemma2:2b/Qwen2.5:7b
Hello, I'm trying to run Ollama on a VM with a GRID GPU, but bot... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8023/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4768 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4768/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4768/comments | https://api.github.com/repos/ollama/ollama/issues/4768/events | https://github.com/ollama/ollama/issues/4768 | 2,329,218,923 | I_kwDOJ0Z1Ps6K1Q9r | 4,768 | Cannot get web to verify service is running | {
"login": "brentfamily51",
"id": 65859500,
"node_id": "MDQ6VXNlcjY1ODU5NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/65859500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brentfamily51",
"html_url": "https://github.com/brentfamily51",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-01T16:12:00 | 2024-06-01T19:31:22 | 2024-06-01T19:31:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have installed Ollama on Ubuntu (Proxmox LXC) successfully using the script. I am able to download and access it on the command line, but when I attempt to access ip:11434 the page fails to load. I appear to have no external access to the service on the machine. Any help is greatly appreci... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4768/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6709 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6709/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6709/comments | https://api.github.com/repos/ollama/ollama/issues/6709/events | https://github.com/ollama/ollama/issues/6709 | 2,513,372,647 | I_kwDOJ0Z1Ps6VzwXn | 6,709 | ERROR unable to locate llm runner directory. Set OLLAMA_RUNNERS_DIR to the location of 'ollama/runners' | {
"login": "Harsha0056",
"id": 153726150,
"node_id": "U_kgDOCSmsxg",
"avatar_url": "https://avatars.githubusercontent.com/u/153726150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harsha0056",
"html_url": "https://github.com/Harsha0056",
"followers_url": "https://api.github.com/users/Har... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-09-09T09:03:14 | 2024-09-09T16:07:14 | 2024-09-09T16:07:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The model is downloaded but I am unable to run it. I tried to locate ollama/runners but couldn't find it.
2024/09/09 04:48:23 config.go:173: ERROR unable to locate llm runner directory. Set OLLAMA_RUNNERS_DIR to the location of 'ollama/runners'
.......
pulling manifest
verifying sha256 digest
writing... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6709/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3507 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3507/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3507/comments | https://api.github.com/repos/ollama/ollama/issues/3507/events | https://github.com/ollama/ollama/issues/3507 | 2,228,977,493 | I_kwDOJ0Z1Ps6E239V | 3,507 | Switching dynamically between multiple LLM models on VRAM | {
"login": "Q-point",
"id": 5604553,
"node_id": "MDQ6VXNlcjU2MDQ1NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5604553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Q-point",
"html_url": "https://github.com/Q-point",
"followers_url": "https://api.github.com/users/Q-point/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677745918,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-06T00:25:39 | 2024-04-23T15:31:41 | 2024-04-23T15:31:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
At the moment, Ollama loads LLMs one by one. It should be possible to have multiple LLMs resident in VRAM and switch dynamically between them.
### How should we solve this?
1. Check if the requested images can be loaded within the current hardware VRAM budget.
2.... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3507/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3507/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/364 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/364/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/364/comments | https://api.github.com/repos/ollama/ollama/issues/364/events | https://github.com/ollama/ollama/pull/364 | 1,853,945,697 | PR_kwDOJ0Z1Ps5YGi0- | 364 | reimplement chunked uploads | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-16T21:45:42 | 2023-08-17T16:58:52 | 2023-08-17T16:58:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/364",
"html_url": "https://github.com/ollama/ollama/pull/364",
"diff_url": "https://github.com/ollama/ollama/pull/364.diff",
"patch_url": "https://github.com/ollama/ollama/pull/364.patch",
"merged_at": "2023-08-17T16:58:51"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/364/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1110 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1110/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1110/comments | https://api.github.com/repos/ollama/ollama/issues/1110/events | https://github.com/ollama/ollama/issues/1110 | 1,990,979,824 | I_kwDOJ0Z1Ps52q_Dw | 1,110 | Converting mistral7b finetuned model produces garbage output | {
"login": "amritap-ef",
"id": 91881411,
"node_id": "U_kgDOBXn_ww",
"avatar_url": "https://avatars.githubusercontent.com/u/91881411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amritap-ef",
"html_url": "https://github.com/amritap-ef",
"followers_url": "https://api.github.com/users/amrit... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 3 | 2023-11-13T16:11:23 | 2024-10-16T20:45:34 | 2024-10-16T20:45:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I've fine-tuned a Mistral-7B-v0.1 model from Hugging Face using QLoRA, following the typical SFT scripts released by the Hugging Face team. However, after converting it to GGUF format using the Docker method, I found that my model produces garbage.
For reference, this is the format of the prompts I trained on:
``... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1110/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1110/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/711 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/711/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/711/comments | https://api.github.com/repos/ollama/ollama/issues/711/events | https://github.com/ollama/ollama/pull/711 | 1,928,874,355 | PR_kwDOJ0Z1Ps5cCqLp | 711 | validate api options fields from map | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 2 | 2023-10-05T18:57:33 | 2023-10-12T15:18:12 | 2023-10-12T15:18:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/711",
"html_url": "https://github.com/ollama/ollama/pull/711",
"diff_url": "https://github.com/ollama/ollama/pull/711.diff",
"patch_url": "https://github.com/ollama/ollama/pull/711.patch",
"merged_at": "2023-10-12T15:18:11"
} | We use a map to set options from the API so that we can see which option fields were specified; otherwise we would override default options with zero values. The issue here is that the input option fields were not validated, so using an incorrect field by mistake did not return an error.
New response:... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/711/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/711/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4896 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4896/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4896/comments | https://api.github.com/repos/ollama/ollama/issues/4896/events | https://github.com/ollama/ollama/pull/4896 | 2,339,568,092 | PR_kwDOJ0Z1Ps5xwEhd | 4,896 | llm: update llama.cpp commit to `7c26775` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-06-07T04:35:53 | 2024-06-17T19:56:17 | 2024-06-17T19:56:16 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4896",
"html_url": "https://github.com/ollama/ollama/pull/4896",
"diff_url": "https://github.com/ollama/ollama/pull/4896.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4896.patch",
"merged_at": "2024-06-17T19:56:16"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4896/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4896/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3841/comments | https://api.github.com/repos/ollama/ollama/issues/3841/events | https://github.com/ollama/ollama/issues/3841 | 2,258,383,763 | I_kwDOJ0Z1Ps6GnDOT | 3,841 | Slow inference speed | {
"login": "userandpass",
"id": 26294920,
"node_id": "MDQ6VXNlcjI2Mjk0OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/26294920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userandpass",
"html_url": "https://github.com/userandpass",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-23T09:24:34 | 2024-05-18T18:25:30 | 2024-05-18T18:25:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
A100 80G
Run qwen1.5 7B using the lmdeploy framework with two processes per card, and use two cards to launch qwen1.5 7B via Ollama; Ollama is about 2 times slower than lmdeploy.
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.1.32 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3841/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1512 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1512/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1512/comments | https://api.github.com/repos/ollama/ollama/issues/1512/events | https://github.com/ollama/ollama/issues/1512 | 2,040,673,497 | I_kwDOJ0Z1Ps55ojTZ | 1,512 | Generation speed 3x slower after pulling newer docker image | {
"login": "johnp-4dvanalytics",
"id": 54282900,
"node_id": "MDQ6VXNlcjU0MjgyOTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/54282900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnp-4dvanalytics",
"html_url": "https://github.com/johnp-4dvanalytics",
"followers_url": "ht... | [] | closed | false | null | [] | null | 1 | 2023-12-14T00:35:22 | 2024-02-20T01:26:11 | 2024-02-20T01:26:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I updated the docker image recently and the inference was 3x slower (30s for new image vs. 10s for old image). I was able to get back to the previous inference speed by reverting to ollama/ollama:0.1.10. I tried ollama/ollama:0.1.15 and ollama/ollama:0.1.14 and both had the slower inference speed.
EDIT: model used w... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1512/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1512/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2155 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2155/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2155/comments | https://api.github.com/repos/ollama/ollama/issues/2155/events | https://github.com/ollama/ollama/issues/2155 | 2,095,474,558 | I_kwDOJ0Z1Ps585md- | 2,155 | Unable to push: max retries exceeded on slower connections | {
"login": "sqs",
"id": 1976,
"node_id": "MDQ6VXNlcjE5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqs",
"html_url": "https://github.com/sqs",
"followers_url": "https://api.github.com/users/sqs/followers",
"following_u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | open | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 14 | 2024-01-23T08:02:09 | 2024-05-14T09:47:34 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was able to push the `q4_0` tag to https://ollama.ai/sqs/starchat, but when I try to push other tags, I am getting an error (see below). Note the `%!F(MISSING)` below in case that is an issue.
The file size of the one that failed is 7.7 GB. The `q4_0` push that succeeded was 8.4 GB.
```
$ for i in q3_K_M q4_K_M... | {
"login": "sqs",
"id": 1976,
"node_id": "MDQ6VXNlcjE5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqs",
"html_url": "https://github.com/sqs",
"followers_url": "https://api.github.com/users/sqs/followers",
"following_u... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2155/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2155/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/518 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/518/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/518/comments | https://api.github.com/repos/ollama/ollama/issues/518/events | https://github.com/ollama/ollama/pull/518 | 1,893,124,094 | PR_kwDOJ0Z1Ps5aKYx_ | 518 | amd64 linux build runner | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-12T19:26:16 | 2023-10-20T16:44:05 | 2023-09-21T13:48:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/518",
"html_url": "https://github.com/ollama/ollama/pull/518",
"diff_url": "https://github.com/ollama/ollama/pull/518.diff",
"patch_url": "https://github.com/ollama/ollama/pull/518.patch",
"merged_at": null
} | Add automation that creates a single ollama binary for amd64 Linux builds.
Limitations:
- Requires glibc 2.29 (the glibc version Ubuntu 20.04 ships with); ideally we would build on an Ubuntu 16.04 or 18.04 runner instead to maximize glibc compatibility, but that will require a custom runner. `glibc` i...
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/518/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6135 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6135/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6135/comments | https://api.github.com/repos/ollama/ollama/issues/6135/events | https://github.com/ollama/ollama/issues/6135 | 2,443,942,428 | I_kwDOJ0Z1Ps6Rq5oc | 6,135 | When processing long texts, it freezes and there is no response. | {
"login": "jiaguangqi",
"id": 52811251,
"node_id": "MDQ6VXNlcjUyODExMjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/52811251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaguangqi",
"html_url": "https://github.com/jiaguangqi",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-08-02T03:45:05 | 2024-09-13T18:10:57 | 2024-09-13T18:10:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am using Ollama to run llama3.1:70b. When processing long texts, it freezes and there is no response. I have checked the server’s CPU, memory, and GPU usage, and they are all low. What could be the problem?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.30 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6135/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5784 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5784/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5784/comments | https://api.github.com/repos/ollama/ollama/issues/5784/events | https://github.com/ollama/ollama/issues/5784 | 2,417,619,837 | I_kwDOJ0Z1Ps6QGfN9 | 5,784 | How to Deploy LLM Based on ollama in an offline environment? | {
"login": "RyanOvO",
"id": 31231960,
"node_id": "MDQ6VXNlcjMxMjMxOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/31231960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanOvO",
"html_url": "https://github.com/RyanOvO",
"followers_url": "https://api.github.com/users/RyanOv... | [] | closed | false | null | [] | null | 15 | 2024-07-19T01:03:36 | 2024-08-20T03:03:56 | 2024-07-19T03:36:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ollama is now ready and I am deploying based on binary files. I have downloaded the GLM4 model file and [the glm4 adaptation file for ollama](https://ollama.com/library/glm4:9b). However, GLM4 does not provide guff at present. How can I deploy inference glm4 based on ollama in an offline environment? | {
"login": "RyanOvO",
"id": 31231960,
"node_id": "MDQ6VXNlcjMxMjMxOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/31231960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanOvO",
"html_url": "https://github.com/RyanOvO",
"followers_url": "https://api.github.com/users/RyanOv... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5784/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3625 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3625/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3625/comments | https://api.github.com/repos/ollama/ollama/issues/3625/events | https://github.com/ollama/ollama/issues/3625 | 2,241,574,568 | I_kwDOJ0Z1Ps6Fm7ao | 3,625 | can't be installed on ubuntu runing in a podman container. | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-04-13T14:01:22 | 2024-04-17T19:30:20 | 2024-04-16T16:15:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I spun up an Ubuntu container using podman, but I can't install ollama; the curl command does not work.
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architec... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3625/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7094 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7094/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7094/comments | https://api.github.com/repos/ollama/ollama/issues/7094/events | https://github.com/ollama/ollama/issues/7094 | 2,565,121,022 | I_kwDOJ0Z1Ps6Y5KP- | 7,094 | dark mode for website | {
"login": "hichemfantar",
"id": 34947993,
"node_id": "MDQ6VXNlcjM0OTQ3OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/34947993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hichemfantar",
"html_url": "https://github.com/hichemfantar",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-10-03T22:51:43 | 2024-10-04T15:54:43 | 2024-10-04T15:54:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | These are the dark mode docs for tailwindcss
https://tailwindcss.com/docs/dark-mode | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7094/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6109 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6109/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6109/comments | https://api.github.com/repos/ollama/ollama/issues/6109/events | https://github.com/ollama/ollama/pull/6109 | 2,441,145,025 | PR_kwDOJ0Z1Ps53DNqX | 6,109 | fix modelfile message quotes | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-31T23:54:18 | 2024-08-01T00:05:45 | 2024-08-01T00:05:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6109",
"html_url": "https://github.com/ollama/ollama/pull/6109",
"diff_url": "https://github.com/ollama/ollama/pull/6109.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6109.patch",
"merged_at": "2024-08-01T00:05:44"
} | Message commands should quote the content, but it's not correctly formatted, so it outputs the content verbatim. Fixing the formatting will fix the quoting.
https://github.com/ollama/ollama/blob/main/parser/parser.go#L41-L43
resolves #6103 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6109/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2227 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2227/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2227/comments | https://api.github.com/repos/ollama/ollama/issues/2227/events | https://github.com/ollama/ollama/issues/2227 | 2,103,466,374 | I_kwDOJ0Z1Ps59YFmG | 2,227 | model not loading in GPU | {
"login": "muka",
"id": 1021269,
"node_id": "MDQ6VXNlcjEwMjEyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muka",
"html_url": "https://github.com/muka",
"followers_url": "https://api.github.com/users/muka/followers",
... | [] | closed | false | null | [] | null | 1 | 2024-01-27T09:13:12 | 2024-01-27T09:24:30 | 2024-01-27T09:24:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, great project congrats!
I noticed that even if ollama (in docker) logs say it offloaded layers to GPU

nvidia-smi reports no actual usage
 to registry. Comes in 1.8, 7 and 20b sizes with both base and chat models.
While [this](https://ollama.com/internlm/internlm2.5) exists, at the moment it is F16 only.
Thank you! | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6184/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/6184/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6958 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6958/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6958/comments | https://api.github.com/repos/ollama/ollama/issues/6958/events | https://github.com/ollama/ollama/issues/6958 | 2,548,423,443 | I_kwDOJ0Z1Ps6X5dsT | 6,958 | molmo by allen ai support | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 32 | 2024-09-25T16:19:43 | 2025-01-17T20:00:22 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
https://huggingface.co/allenai/Molmo-7B-D-0924
https://huggingface.co/allenai/Molmo-72B-0924
These models are really good, have potential, and are fully open-source; please add support for them.
thanks. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6958/reactions",
"total_count": 101,
"+1": 72,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 19,
"eyes": 10
} | https://api.github.com/repos/ollama/ollama/issues/6958/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2216 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2216/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2216/comments | https://api.github.com/repos/ollama/ollama/issues/2216/events | https://github.com/ollama/ollama/issues/2216 | 2,102,825,534 | I_kwDOJ0Z1Ps59VpI- | 2,216 | ollama.ai and registry.ollama.ai does not have IPv6 | {
"login": "miyurusankalpa",
"id": 1833185,
"node_id": "MDQ6VXNlcjE4MzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1833185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyurusankalpa",
"html_url": "https://github.com/miyurusankalpa",
"followers_url": "https://api.gith... | [
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 3 | 2024-01-26T20:06:48 | 2025-01-30T07:14:40 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ollama.ai and registry.ollama.ai does not have IPv6 and it cannot be installed on AWS IPv6 only servers.
`Error: max retries exceeded: Get "https://registry.ollama.ai/v2/library/codellama/blobs/sha256:3a43f93b78ec50f7c4e4dc8bd1cb3fff5a900e7d574c51a6f7495e48486e0dac": dial tcp 34.120.132.20:443: connect: network is u... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2216/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2216/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1815 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1815/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1815/comments | https://api.github.com/repos/ollama/ollama/issues/1815/events | https://github.com/ollama/ollama/pull/1815 | 2,068,043,423 | PR_kwDOJ0Z1Ps5jWZDc | 1,815 | Add unit tests for Parser | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-05T21:10:42 | 2024-01-05T22:04:32 | 2024-01-05T22:04:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1815",
"html_url": "https://github.com/ollama/ollama/pull/1815",
"diff_url": "https://github.com/ollama/ollama/pull/1815.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1815.patch",
"merged_at": "2024-01-05T22:04:32"
} | This adds a couple of basic unit tests for parsing.
Suggested in #1809 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1815/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/600 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/600/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/600/comments | https://api.github.com/repos/ollama/ollama/issues/600/events | https://github.com/ollama/ollama/issues/600 | 1,912,711,759 | I_kwDOJ0Z1Ps5yAapP | 600 | Support speechless-llama2-hermes-orca-platypus-wizardlm-13b or Qwen-14b models | {
"login": "cheuk-cheng",
"id": 72942052,
"node_id": "MDQ6VXNlcjcyOTQyMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/72942052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cheuk-cheng",
"html_url": "https://github.com/cheuk-cheng",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 5 | 2023-09-26T05:31:13 | 2023-12-22T03:56:37 | 2023-12-22T03:56:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any plan to support recent models like speechless-llama2-hermes-orca-platypus-wizardlm-13b or Qwen-14b? User reviews seem to indicate that they are powerful and fast even when compared with other llama2 models that have larger sizes. Thanks. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/600/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8013 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8013/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8013/comments | https://api.github.com/repos/ollama/ollama/issues/8013/events | https://github.com/ollama/ollama/issues/8013 | 2,727,588,546 | I_kwDOJ0Z1Ps6ik7LC | 8,013 | go get version mismatch | {
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-12-09T16:14:48 | 2024-12-10T05:58:27 | 2024-12-10T05:58:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried go get ... and such, just testing go commands,
especially for the newest versions,
and then this appeared:
go: honnef.co/go/tools@v0.5.1 (matching ...@upgrade) requires github.com/BurntSushi/toml@v1.4.1-0.20240526193622-a339e1f7089c, not github.com/BurntSushi/toml@v1.4.0 (matching ...@upgrade... | {
"login": "fxmbsw7",
"id": 39368685,
"node_id": "MDQ6VXNlcjM5MzY4Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmbsw7",
"html_url": "https://github.com/fxmbsw7",
"followers_url": "https://api.github.com/users/fxmbsw... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8013/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/582/comments | https://api.github.com/repos/ollama/ollama/issues/582/events | https://github.com/ollama/ollama/pull/582 | 1,910,038,627 | PR_kwDOJ0Z1Ps5bDIX_ | 582 | fix end-of-line issue with the new prompt | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-24T00:04:07 | 2023-09-24T00:20:31 | 2023-09-24T00:20:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/582",
"html_url": "https://github.com/ollama/ollama/pull/582",
"diff_url": "https://github.com/ollama/ollama/pull/582.diff",
"patch_url": "https://github.com/ollama/ollama/pull/582.patch",
"merged_at": "2023-09-24T00:20:30"
} | The readline library had this fix which overwrote the end of the `S` in "Send a message..." which prevented the cursor from moving up the screen whenever you backspaced through the end of the line. We removed it to fix the placeholder text issue, but then the bug crept back in. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/582/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/576 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/576/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/576/comments | https://api.github.com/repos/ollama/ollama/issues/576/events | https://github.com/ollama/ollama/pull/576 | 1,909,332,077 | PR_kwDOJ0Z1Ps5bA4ML | 576 | ubuntu cuda drivers | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-09-22T18:13:11 | 2023-09-22T18:43:15 | 2023-09-22T18:43:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/576",
"html_url": "https://github.com/ollama/ollama/pull/576",
"diff_url": "https://github.com/ollama/ollama/pull/576.diff",
"patch_url": "https://github.com/ollama/ollama/pull/576.patch",
"merged_at": "2023-09-22T18:43:14"
} | Add automatic cuda driver install to our install script. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/576/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7803 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7803/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7803/comments | https://api.github.com/repos/ollama/ollama/issues/7803/events | https://github.com/ollama/ollama/issues/7803 | 2,684,582,879 | I_kwDOJ0Z1Ps6gA3vf | 7,803 | problem with ollama serve | {
"login": "atb29",
"id": 144506608,
"node_id": "U_kgDOCJz-8A",
"avatar_url": "https://avatars.githubusercontent.com/u/144506608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atb29",
"html_url": "https://github.com/atb29",
"followers_url": "https://api.github.com/users/atb29/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 3 | 2024-11-22T21:12:11 | 2024-12-14T15:33:07 | 2024-12-14T15:33:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
C:\Users\allowdin>ollama serve
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.3.13 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7803/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/58 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/58/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/58/comments | https://api.github.com/repos/ollama/ollama/issues/58/events | https://github.com/ollama/ollama/pull/58 | 1,794,392,513 | PR_kwDOJ0Z1Ps5U9rY3 | 58 | return error in generate response | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-07T21:27:14 | 2023-07-10T21:35:18 | 2023-07-10T21:03:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/58",
"html_url": "https://github.com/ollama/ollama/pull/58",
"diff_url": "https://github.com/ollama/ollama/pull/58.diff",
"patch_url": "https://github.com/ollama/ollama/pull/58.patch",
"merged_at": "2023-07-10T21:03:47"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/58/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/58/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4000 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4000/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4000/comments | https://api.github.com/repos/ollama/ollama/issues/4000/events | https://github.com/ollama/ollama/issues/4000 | 2,267,450,441 | I_kwDOJ0Z1Ps6HJoxJ | 4,000 | Error: invalid file magic when trying to import a custom gguf model to ollama instance | {
"login": "atb29",
"id": 144506608,
"node_id": "U_kgDOCJz-8A",
"avatar_url": "https://avatars.githubusercontent.com/u/144506608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atb29",
"html_url": "https://github.com/atb29",
"followers_url": "https://api.github.com/users/atb29/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-28T09:21:07 | 2024-05-01T20:30:09 | 2024-05-01T20:30:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I got this error:
"E:\phi3-mini-128k-gguf\model>ollama create phi-3-mini-128k -f Modelfile
transferring model data
creating model layer
Error: invalid file magic"
here is the modelfile used
FROM ./Phi-3-mini-128k-instruct.IQ4_XS.gguf
PARAMETER num_ctx 65536
PARAMETER num_keep 4
PAR... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4000/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6747 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6747/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6747/comments | https://api.github.com/repos/ollama/ollama/issues/6747/events | https://github.com/ollama/ollama/issues/6747 | 2,518,958,255 | I_kwDOJ0Z1Ps6WJECv | 6,747 | ERROR: llama runner process has terminated: error loading modelvocabulary: _Map_base::at | {
"login": "CjhHa1",
"id": 72591262,
"node_id": "MDQ6VXNlcjcyNTkxMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/72591262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CjhHa1",
"html_url": "https://github.com/CjhHa1",
"followers_url": "https://api.github.com/users/CjhHa1/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-09-11T08:19:06 | 2025-01-08T16:06:35 | 2024-09-11T18:38:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried the octopus v4 model using ollama. I constructed the model using a Modelfile, but it does not work when I try to run it.
```
transferring model data 100%
using existing layer sha256:08fc24194c1805114ed7aefcc8e5cedf7fe1e23b587f1655502afb294263b948
using existing layer sha256:e49180... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6747/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6869 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6869/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6869/comments | https://api.github.com/repos/ollama/ollama/issues/6869/events | https://github.com/ollama/ollama/issues/6869 | 2,535,207,221 | I_kwDOJ0Z1Ps6XHDE1 | 6,869 | An operation on a socket could not be performed | {
"login": "ipzzzzz",
"id": 26899986,
"node_id": "MDQ6VXNlcjI2ODk5OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/26899986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ipzzzzz",
"html_url": "https://github.com/ipzzzzz",
"followers_url": "https://api.github.com/users/ipzzzz... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-09-19T03:42:13 | 2024-10-08T23:28:50 | 2024-10-08T23:28:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The error reported in the terminal: dial tcp [xxx]:443: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6869/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6125 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6125/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6125/comments | https://api.github.com/repos/ollama/ollama/issues/6125/events | https://github.com/ollama/ollama/issues/6125 | 2,443,163,116 | I_kwDOJ0Z1Ps6Rn7Xs | 6,125 | minor bug: ggml/llama.cpp's new Q4_0_4_8 quantized files don't import into ollama | {
"login": "AndreasKunar",
"id": 53893683,
"node_id": "MDQ6VXNlcjUzODkzNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53893683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasKunar",
"html_url": "https://github.com/AndreasKunar",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-08-01T18:45:11 | 2024-10-20T13:17:08 | 2024-08-11T11:02:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I built ollama on Ubuntu 24.04, running in Windows 11's WSL2 on my Surface Pro 11, to test Ollama with llama.cpp's Q4_0_4_8 acceleration.
Ollama+llama.cpp builds, imports my local llama-2 Q4_0, and runs it.
But when I try and import a local llama-2 Q4_0_4_8 model (which runs with ll... | {
"login": "AndreasKunar",
"id": 53893683,
"node_id": "MDQ6VXNlcjUzODkzNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53893683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasKunar",
"html_url": "https://github.com/AndreasKunar",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6125/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6125/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3789 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3789/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3789/comments | https://api.github.com/repos/ollama/ollama/issues/3789/events | https://github.com/ollama/ollama/pull/3789 | 2,254,816,644 | PR_kwDOJ0Z1Ps5tQuFX | 3,789 | chore: use errors.New to replace fmt.Errorf with no parameters will much better | {
"login": "ChengenH",
"id": 36215014,
"node_id": "MDQ6VXNlcjM2MjE1MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/36215014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChengenH",
"html_url": "https://github.com/ChengenH",
"followers_url": "https://api.github.com/users/Che... | [] | closed | false | null | [] | null | 0 | 2024-04-21T01:52:14 | 2024-04-21T02:11:06 | 2024-04-21T02:11:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3789",
"html_url": "https://github.com/ollama/ollama/pull/3789",
"diff_url": "https://github.com/ollama/ollama/pull/3789.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3789.patch",
"merged_at": "2024-04-21T02:11:06"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3789/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7533 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7533/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7533/comments | https://api.github.com/repos/ollama/ollama/issues/7533/events | https://github.com/ollama/ollama/pull/7533 | 2,639,283,291 | PR_kwDOJ0Z1Ps6BHFv- | 7,533 | runner.go: Check for zero length images | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-06T21:31:16 | 2024-11-08T17:39:33 | 2024-11-08T17:39:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7533",
"html_url": "https://github.com/ollama/ollama/pull/7533",
"diff_url": "https://github.com/ollama/ollama/pull/7533.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7533.patch",
"merged_at": "2024-11-08T17:39:32"
} | If we get a request with a zero length image, it will result in an out-of-bounds error when we pass the data to the image encoder. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7533/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6858 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6858/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6858/comments | https://api.github.com/repos/ollama/ollama/issues/6858/events | https://github.com/ollama/ollama/issues/6858 | 2,534,264,359 | I_kwDOJ0Z1Ps6XDc4n | 6,858 | Unable to load adapter_model.safetensors for Phi3-Medium-128k | {
"login": "AAndersn",
"id": 19935503,
"node_id": "MDQ6VXNlcjE5OTM1NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/19935503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AAndersn",
"html_url": "https://github.com/AAndersn",
"followers_url": "https://api.github.com/users/AAn... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-18T17:11:53 | 2024-12-02T22:58:01 | 2024-12-02T22:58:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Trying to load a safetensors adapter file for phi3-medium-128k using a .modelfile. I generated adapter_config.json and adapter_model.safetensors files using LoRA training and copied them into the ollama Docker container.
I have generated the modelfile with `ollama show phi3:medium --model... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6858/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6166/comments | https://api.github.com/repos/ollama/ollama/issues/6166/events | https://github.com/ollama/ollama/pull/6166 | 2,447,367,727 | PR_kwDOJ0Z1Ps53Yglf | 6,166 | Added build support to RISC-V | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 0 | 2024-08-05T00:23:26 | 2024-08-27T13:14:46 | 2024-08-27T13:14:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6166",
"html_url": "https://github.com/ollama/ollama/pull/6166",
"diff_url": "https://github.com/ollama/ollama/pull/6166.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6166.patch",
"merged_at": null
} | 1. Build support to RISC-V
2. Fix https://github.com/chewxy/math32/issues/46
cc @mxyng @jmorganca | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6166/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3332 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3332/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3332/comments | https://api.github.com/repos/ollama/ollama/issues/3332/events | https://github.com/ollama/ollama/issues/3332 | 2,204,774,244 | I_kwDOJ0Z1Ps6Dai9k | 3,332 | what should I do after pulling? | {
"login": "jinja27",
"id": 56258048,
"node_id": "MDQ6VXNlcjU2MjU4MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/56258048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinja27",
"html_url": "https://github.com/jinja27",
"followers_url": "https://api.github.com/users/jinja2... | [] | closed | false | null | [] | null | 0 | 2024-03-25T02:49:17 | 2024-03-25T02:53:20 | 2024-03-25T02:53:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jinja27",
"id": 56258048,
"node_id": "MDQ6VXNlcjU2MjU4MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/56258048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinja27",
"html_url": "https://github.com/jinja27",
"followers_url": "https://api.github.com/users/jinja2... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3332/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4198 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4198/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4198/comments | https://api.github.com/repos/ollama/ollama/issues/4198/events | https://github.com/ollama/ollama/issues/4198 | 2,280,584,462 | I_kwDOJ0Z1Ps6H7vUO | 4,198 | Improving the efficiency of using multiple GPU cards. | {
"login": "zhqfdn",
"id": 25156863,
"node_id": "MDQ6VXNlcjI1MTU2ODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/25156863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhqfdn",
"html_url": "https://github.com/zhqfdn",
"followers_url": "https://api.github.com/users/zhqfdn/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-05-06T10:49:23 | 2024-09-09T21:50:14 | 2024-06-14T22:35:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Before v0.1.32, when loading a model, it would be evenly distributed across all GPU cards to improve the use of GPU cards. In v0.1.32 and v0.1.33, it was found that loading a model would automatically use one card.
When used by multiple users simultaneously, it is slower. If evenly distributed across multiple GPU ca... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4198/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/4198/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3115 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3115/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3115/comments | https://api.github.com/repos/ollama/ollama/issues/3115/events | https://github.com/ollama/ollama/issues/3115 | 2,184,462,654 | I_kwDOJ0Z1Ps6CNEE- | 3,115 | Switching between models is very slow? | {
"login": "eliranwong",
"id": 25262722,
"node_id": "MDQ6VXNlcjI1MjYyNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/25262722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliranwong",
"html_url": "https://github.com/eliranwong",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-03-13T16:44:07 | 2024-04-24T13:25:59 | 2024-03-13T17:50:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I noted that switching between models is very slow. Is it possible to load two different models in memory at the same time? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3115/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8360 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8360/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8360/comments | https://api.github.com/repos/ollama/ollama/issues/8360/events | https://github.com/ollama/ollama/issues/8360 | 2,777,264,546 | I_kwDOJ0Z1Ps6libGi | 8,360 | can't pull model from hf | {
"login": "Vacinlando",
"id": 49453175,
"node_id": "MDQ6VXNlcjQ5NDUzMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/49453175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vacinlando",
"html_url": "https://github.com/Vacinlando",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-09T09:18:51 | 2025-01-19T02:13:20 | 2025-01-09T09:20:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
input: ollama run hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:F16
pulling manifest
pulling 863c978275bc... 100% ▕████████████████████████████████████▏ 15 GB
pulling e94a8ecb9327... 100% ▕████████████████████████████████████▏ 1.6 KB
pulling 9ae... | {
"login": "Vacinlando",
"id": 49453175,
"node_id": "MDQ6VXNlcjQ5NDUzMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/49453175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vacinlando",
"html_url": "https://github.com/Vacinlando",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8360/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8028 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8028/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8028/comments | https://api.github.com/repos/ollama/ollama/issues/8028/events | https://github.com/ollama/ollama/pull/8028 | 2,730,770,439 | PR_kwDOJ0Z1Ps6EvbMZ | 8,028 | readme + examples: add pgai to readme projects and add example for semantic search | {
"login": "Askir",
"id": 10251955,
"node_id": "MDQ6VXNlcjEwMjUxOTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10251955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Askir",
"html_url": "https://github.com/Askir",
"followers_url": "https://api.github.com/users/Askir/follow... | [] | closed | false | null | [] | null | 0 | 2024-12-10T17:43:13 | 2024-12-16T09:02:28 | 2024-12-16T09:02:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8028",
"html_url": "https://github.com/ollama/ollama/pull/8028",
"diff_url": "https://github.com/ollama/ollama/pull/8028.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8028.patch",
"merged_at": "2024-12-16T09:02:28"
} | Hey there,
adding a mention of pgai to the community integrations section in the Readme as well as a small demo script for the example folder.
Let me know if you'd like to change anything! | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8028/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1375 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1375/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1375/comments | https://api.github.com/repos/ollama/ollama/issues/1375/events | https://github.com/ollama/ollama/issues/1375 | 2,024,498,866 | I_kwDOJ0Z1Ps54q2ay | 1,375 | Add option to not stream output in terminal | {
"login": "devinprater",
"id": 15256014,
"node_id": "MDQ6VXNlcjE1MjU2MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15256014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devinprater",
"html_url": "https://github.com/devinprater",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-12-04T18:47:31 | 2024-05-17T13:55:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I use a screen reader, and when Ollama is sending output, it pauses while it's generating text, so, it, sounds, a little, like, this. Not too big a deal, but it would make it sound a bit better to have it output all at once, when done generating, not in a stream.
Thanks! | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1375/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1375/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6555 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6555/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6555/comments | https://api.github.com/repos/ollama/ollama/issues/6555/events | https://github.com/ollama/ollama/issues/6555 | 2,494,487,024 | I_kwDOJ0Z1Ps6Urtnw | 6,555 | /api/embed returns empty embeddings in docker environment | {
"login": "smoothdvd",
"id": 22420,
"node_id": "MDQ6VXNlcjIyNDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/22420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smoothdvd",
"html_url": "https://github.com/smoothdvd",
"followers_url": "https://api.github.com/users/smoothdvd/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-08-29T13:40:15 | 2024-08-29T16:06:51 | 2024-08-29T16:06:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
curl http://localhost:11434/api/embed -d '{
"model": "bge-m3",
"prompt": "Llamas are members of the camelid family"
}'
{"model":"bge-m3","embeddings":[]}%
```
but old /api/embedding endpoint still worked
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.8 | {
"login": "smoothdvd",
"id": 22420,
"node_id": "MDQ6VXNlcjIyNDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/22420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smoothdvd",
"html_url": "https://github.com/smoothdvd",
"followers_url": "https://api.github.com/users/smoothdvd/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6555/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4778 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4778/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4778/comments | https://api.github.com/repos/ollama/ollama/issues/4778/events | https://github.com/ollama/ollama/issues/4778 | 2,329,441,744 | I_kwDOJ0Z1Ps6K2HXQ | 4,778 | Support for jina-embeddings-v2-base-zh | {
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 8 | 2024-06-02T03:06:55 | 2024-11-05T19:02:18 | 2024-10-30T23:20:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/jinaai/jina-embeddings-v2-base-zh | {
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4778/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4778/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2052 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2052/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2052/comments | https://api.github.com/repos/ollama/ollama/issues/2052/events | https://github.com/ollama/ollama/issues/2052 | 2,088,578,302 | I_kwDOJ0Z1Ps58fSz- | 2,052 | Cannot interrupt evaluation `ollama serve` during prompt evaluation | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-01-18T15:52:40 | 2024-03-11T18:10:53 | 2024-03-11T18:10:53 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When trying to interrupt either generation or `ollama serve` with ctrl+c while the prompt is being evaluated, Ollama will wait until prompt evaluation is done | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2052/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2052/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6504 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6504/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6504/comments | https://api.github.com/repos/ollama/ollama/issues/6504/events | https://github.com/ollama/ollama/pull/6504 | 2,485,432,496 | PR_kwDOJ0Z1Ps55XTpO | 6,504 | openai: increase context window when max_tokens is provided | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 9 | 2024-08-25T19:36:12 | 2025-01-30T00:08:22 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6504",
"html_url": "https://github.com/ollama/ollama/pull/6504",
"diff_url": "https://github.com/ollama/ollama/pull/6504.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6504.patch",
"merged_at": null
} | Previously, `/v1/chat/completions` requests were limited to 2048 tokens. This PR extends the context length by setting `num_ctx` to `max_tokens` if it's larger than the default context window of 2048 tokens. It also includes a minor clean up for the OpenAI compatibility unit tests.
Note: this doesn't solve the case ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6504/reactions",
"total_count": 37,
"+1": 25,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6504/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5442 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5442/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5442/comments | https://api.github.com/repos/ollama/ollama/issues/5442/events | https://github.com/ollama/ollama/pull/5442 | 2,387,027,329 | PR_kwDOJ0Z1Ps50PbQf | 5,442 | Add windows radeon concurrency note | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-02T19:46:39 | 2024-07-02T19:47:51 | 2024-07-02T19:47:47 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5442",
"html_url": "https://github.com/ollama/ollama/pull/5442",
"diff_url": "https://github.com/ollama/ollama/pull/5442.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5442.patch",
"merged_at": "2024-07-02T19:47:47"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5442/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7953/comments | https://api.github.com/repos/ollama/ollama/issues/7953/events | https://github.com/ollama/ollama/issues/7953 | 2,721,104,729 | I_kwDOJ0Z1Ps6iMMNZ | 7,953 | Ollama not using GPU (windows) | {
"login": "stormcoph",
"id": 122960804,
"node_id": "U_kgDOB1Q7pA",
"avatar_url": "https://avatars.githubusercontent.com/u/122960804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stormcoph",
"html_url": "https://github.com/stormcoph",
"followers_url": "https://api.github.com/users/stormc... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-05T18:20:25 | 2024-12-05T19:47:15 | 2024-12-05T19:47:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Now, I know there have been a lot of other issues about this problem and most of them have been solved, but I haven't found an answer for my specific scenario; most of the cases have been on Linux.
In other threads I've seen people say it's because they don't have enough VRAM that it automatically uses th... | {
"login": "stormcoph",
"id": 122960804,
"node_id": "U_kgDOB1Q7pA",
"avatar_url": "https://avatars.githubusercontent.com/u/122960804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stormcoph",
"html_url": "https://github.com/stormcoph",
"followers_url": "https://api.github.com/users/stormc... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7953/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/82 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/82/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/82/comments | https://api.github.com/repos/ollama/ollama/issues/82/events | https://github.com/ollama/ollama/pull/82 | 1,805,757,906 | PR_kwDOJ0Z1Ps5VkVEJ | 82 | windows build | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-15T00:31:56 | 2023-07-23T10:31:22 | 2023-07-15T03:11:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/82",
"html_url": "https://github.com/ollama/ollama/pull/82",
"diff_url": "https://github.com/ollama/ollama/pull/82.diff",
"patch_url": "https://github.com/ollama/ollama/pull/82.patch",
"merged_at": "2023-07-15T03:11:55"
} | make some minor changes so it builds on windows.
TODO:
moving the `.*.part` file to the full file isn't working correctly, but #78 will change how that works, so a temporary workaround is to copy the `.*.part` file to the final name | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/82/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/82/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6987 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6987/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6987/comments | https://api.github.com/repos/ollama/ollama/issues/6987/events | https://github.com/ollama/ollama/issues/6987 | 2,551,333,987 | I_kwDOJ0Z1Ps6YEkRj | 6,987 | Way to stop all running models | {
"login": "amytimed",
"id": 52638772,
"node_id": "MDQ6VXNlcjUyNjM4Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/52638772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amytimed",
"html_url": "https://github.com/amytimed",
"followers_url": "https://api.github.com/users/amy... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-09-26T19:01:49 | 2025-01-18T16:03:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | `ollama stop all` or `ollama stop *` etc
would be handy | {
"login": "amytimed",
"id": 52638772,
"node_id": "MDQ6VXNlcjUyNjM4Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/52638772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amytimed",
"html_url": "https://github.com/amytimed",
"followers_url": "https://api.github.com/users/amy... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6987/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6987/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/2899 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2899/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2899/comments | https://api.github.com/repos/ollama/ollama/issues/2899/events | https://github.com/ollama/ollama/issues/2899 | 2,165,487,684 | I_kwDOJ0Z1Ps6BErhE | 2,899 | SIGSEGV: segmentation violation with 0.1.27 and starcoder2:!5b | {
"login": "joseluisamado",
"id": 7227984,
"node_id": "MDQ6VXNlcjcyMjc5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7227984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joseluisamado",
"html_url": "https://github.com/joseluisamado",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | 5 | 2024-03-03T18:37:46 | 2024-03-09T01:00:53 | 2024-03-03T19:15:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I am having an issue while running the recently released `starcoder2` model. Please find below the details.
**How to reproduce the issue**
Running ollama in a container with Docker Compose
```
ollama:
image: ollama/ollama:0.1.27
container_name: ollama
restart: unless-stopped
network... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2899/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5186 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5186/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5186/comments | https://api.github.com/repos/ollama/ollama/issues/5186/events | https://github.com/ollama/ollama/issues/5186 | 2,364,767,500 | I_kwDOJ0Z1Ps6M830M | 5,186 | AMD Ryzen NPU support | {
"login": "ivanbrash",
"id": 91606112,
"node_id": "U_kgDOBXXMYA",
"avatar_url": "https://avatars.githubusercontent.com/u/91606112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanbrash",
"html_url": "https://github.com/ivanbrash",
"followers_url": "https://api.github.com/users/ivanbras... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 19 | 2024-06-20T15:48:13 | 2024-12-26T09:30:59 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello! I want to buy a Lenovo Xiaoxin 14 AI laptop with an AMD Ryzen 7 8845H for my birthday, and I will install Artix Linux on it. Will you add AMD Ryzen NPU support to Ollama on Linux and Windows? If anything, the AMD Ryzen NPU driver for Linux is already available on GitHub:
https://github.com/amd/xdna-driver.git
Sor... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5186/reactions",
"total_count": 27,
"+1": 25,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5186/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6426 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6426/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6426/comments | https://api.github.com/repos/ollama/ollama/issues/6426/events | https://github.com/ollama/ollama/issues/6426 | 2,474,168,354 | I_kwDOJ0Z1Ps6TeNAi | 6,426 | convert: vocab conversion incorrect | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2024-08-19T20:40:50 | 2024-08-21T18:48:44 | 2024-08-21T18:48:44 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
To reproduce, clone https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct then create a `Modelfile`:
```
FROM .
```
Lastly, create a model:
```
ollama create test
```
When trying to run the model, an assertion will be fired:
```
Error: llama runner process has terminated:... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6426/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/300 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/300/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/300/comments | https://api.github.com/repos/ollama/ollama/issues/300/events | https://github.com/ollama/ollama/issues/300 | 1,838,346,491 | I_kwDOJ0Z1Ps5tkvD7 | 300 | Provide a way to allow connections to Ollama from web browser origins other than `localhost` and `0.0.0.0` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 14 | 2023-08-06T19:41:30 | 2025-01-29T14:27:00 | 2023-08-08T14:41:44 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, Ollama has CORS rules that allow pages hosted on `localhost` to connect to `localhost:11434`. #282 adds support for `0.0.0.0`, but some hosted web pages want to leverage a local running Ollama.
Simply opening up CORS to all origins wouldn't be secure: any website could call the API by simply browsing to i... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/300/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/300/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/953/comments | https://api.github.com/repos/ollama/ollama/issues/953/events | https://github.com/ollama/ollama/pull/953 | 1,969,295,567 | PR_kwDOJ0Z1Ps5eK6GK | 953 | Changed name of folder and added some more features | {
"login": "suoko",
"id": 3659980,
"node_id": "MDQ6VXNlcjM2NTk5ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3659980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suoko",
"html_url": "https://github.com/suoko",
"followers_url": "https://api.github.com/users/suoko/follower... | [] | closed | false | null | [] | null | 2 | 2023-10-30T22:02:52 | 2024-11-21T08:48:20 | 2024-11-21T08:48:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/953",
"html_url": "https://github.com/ollama/ollama/pull/953",
"diff_url": "https://github.com/ollama/ollama/pull/953.diff",
"patch_url": "https://github.com/ollama/ollama/pull/953.patch",
"merged_at": null
} | I added the model chooser | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/953/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6251 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6251/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6251/comments | https://api.github.com/repos/ollama/ollama/issues/6251/events | https://github.com/ollama/ollama/issues/6251 | 2,454,734,578 | I_kwDOJ0Z1Ps6SUEby | 6,251 | Ollama multiuser scale | {
"login": "jamiabailey",
"id": 85038220,
"node_id": "MDQ6VXNlcjg1MDM4MjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/85038220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamiabailey",
"html_url": "https://github.com/jamiabailey",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5667396220,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-08-08T02:21:08 | 2024-09-02T03:12:33 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm looking for some scale numbers on what ollama supports as far as multi-user environments go. I see the OLLAMA_NUM_PARALLEL for adjusting how many simultaneous requests can be served as well as OLLAMA_MAX_QUEUE for how many requests can be queued before being rejected but nothing that will help me understand how tha... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6251/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/6251/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6569 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6569/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6569/comments | https://api.github.com/repos/ollama/ollama/issues/6569/events | https://github.com/ollama/ollama/issues/6569 | 2,497,904,670 | I_kwDOJ0Z1Ps6U4wAe | 6,569 | TensorRT Support | {
"login": "JonahMMay",
"id": 119529402,
"node_id": "U_kgDOBx_fug",
"avatar_url": "https://avatars.githubusercontent.com/u/119529402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonahMMay",
"html_url": "https://github.com/JonahMMay",
"followers_url": "https://api.github.com/users/JonahM... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-08-30T18:00:15 | 2025-01-25T18:22:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Does ollama leverage TensorRT and if not, can support for it be added? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6569/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6569/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7670 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7670/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7670/comments | https://api.github.com/repos/ollama/ollama/issues/7670/events | https://github.com/ollama/ollama/pull/7670 | 2,660,084,044 | PR_kwDOJ0Z1Ps6B-Xjq | 7,670 | Vendor bump llama.cpp | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-11-14T21:21:05 | 2024-12-02T19:02:09 | 2024-12-02T19:02:04 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7670",
"html_url": "https://github.com/ollama/ollama/pull/7670",
"diff_url": "https://github.com/ollama/ollama/pull/7670.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7670.patch",
"merged_at": null
} | Update to 4a8ccb37ad9c9027cbcfd5548c19cdffe48d5197
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7670/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6876 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6876/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6876/comments | https://api.github.com/repos/ollama/ollama/issues/6876/events | https://github.com/ollama/ollama/issues/6876 | 2,535,981,484 | I_kwDOJ0Z1Ps6XKAGs | 6,876 | Why models don't use full CPU power? | {
"login": "iladshyan",
"id": 103303611,
"node_id": "U_kgDOBihJuw",
"avatar_url": "https://avatars.githubusercontent.com/u/103303611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iladshyan",
"html_url": "https://github.com/iladshyan",
"followers_url": "https://api.github.com/users/iladsh... | [] | closed | false | null | [] | null | 8 | 2024-09-19T10:48:25 | 2024-09-20T11:59:53 | 2024-09-20T11:59:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have noticed that in CPU-only use cases the models are not using the CPU to its full potential. Is there any way to make them utilize the full power? | {
"login": "iladshyan",
"id": 103303611,
"node_id": "U_kgDOBihJuw",
"avatar_url": "https://avatars.githubusercontent.com/u/103303611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iladshyan",
"html_url": "https://github.com/iladshyan",
"followers_url": "https://api.github.com/users/iladsh... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6876/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5492 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5492/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5492/comments | https://api.github.com/repos/ollama/ollama/issues/5492/events | https://github.com/ollama/ollama/pull/5492 | 2,391,661,938 | PR_kwDOJ0Z1Ps50fNZD | 5,492 | Use slot with cached prompt instead of least recently used | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-05T02:34:53 | 2024-07-05T16:32:49 | 2024-07-05T16:32:47 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5492",
"html_url": "https://github.com/ollama/ollama/pull/5492",
"diff_url": "https://github.com/ollama/ollama/pull/5492.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5492.patch",
"merged_at": "2024-07-05T16:32:47"
} | This chooses the slot with the longest common prompt prefix instead of selecting the least recently used slot – this maximizes cache time for a single "conversation".
Future improvements:
- [ ] Clone slots and their cache
- [ ] Avoid requests "stealing" slots from each other because they have a small but common p... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5492/timeline | null | null | true |