url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/899 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/899/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/899/comments | https://api.github.com/repos/ollama/ollama/issues/899/events | https://github.com/ollama/ollama/issues/899 | 1,960,224,417 | I_kwDOJ0Z1Ps501qah | 899 | Big performance hit from v0.1.4 | {
"login": "imikod",
"id": 7832990,
"node_id": "MDQ6VXNlcjc4MzI5OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7832990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imikod",
"html_url": "https://github.com/imikod",
"followers_url": "https://api.github.com/users/imikod/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2023-10-24T22:59:14 | 2023-10-27T19:13:45 | 2023-10-27T19:13:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | v0.1.4 is around 3 times slower than v0.1.3
I tested 2 models with cpu only.
The models are [dolphin-2.1-mistral-7b.Q3_K_M](https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF/blob/main/dolphin-2.1-mistral-7b.Q3_K_M.gguf) and [openhermes-2-mistral-7b.Q5_K_M](https://huggingface.co/TheBloke/OpenHermes-2-Mistra... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/899/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6937 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6937/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6937/comments | https://api.github.com/repos/ollama/ollama/issues/6937/events | https://github.com/ollama/ollama/issues/6937 | 2,545,706,930 | I_kwDOJ0Z1Ps6XvGey | 6,937 | error reading llm response:An existing connection was forcibly closed by the remote host. | {
"login": "yaosd99",
"id": 137629224,
"node_id": "U_kgDOCDQOKA",
"avatar_url": "https://avatars.githubusercontent.com/u/137629224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaosd99",
"html_url": "https://github.com/yaosd99",
"followers_url": "https://api.github.com/users/yaosd99/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-09-24T15:18:12 | 2024-11-20T20:11:21 | 2024-11-20T20:11:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

***Cannot import images.
Please~~
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.11 | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6937/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1527 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1527/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1527/comments | https://api.github.com/repos/ollama/ollama/issues/1527/events | https://github.com/ollama/ollama/pull/1527 | 2,042,187,567 | PR_kwDOJ0Z1Ps5iB0uk | 1,527 | remove sample_count from docs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-12-14T17:47:56 | 2023-12-14T22:49:02 | 2023-12-14T22:49:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1527",
"html_url": "https://github.com/ollama/ollama/pull/1527",
"diff_url": "https://github.com/ollama/ollama/pull/1527.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1527.patch",
"merged_at": "2023-12-14T22:49:01"
} | this info has not been returned from these endpoints in some time | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1527/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2720 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2720/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2720/comments | https://api.github.com/repos/ollama/ollama/issues/2720/events | https://github.com/ollama/ollama/issues/2720 | 2,152,053,852 | I_kwDOJ0Z1Ps6ARbxc | 2,720 | Ollama gibberish output when using rocm | {
"login": "BeastRein",
"id": 80418545,
"node_id": "MDQ6VXNlcjgwNDE4NTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/80418545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeastRein",
"html_url": "https://github.com/BeastRein",
"followers_url": "https://api.github.com/users/... | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-24T02:21:17 | 2024-04-17T03:53:58 | 2024-04-12T21:55:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running any model with any prompt while using rocm on my rx5500XT with compiled rocm driver, it gives a completely garbage output of a large quantity. not sure if this is rocm or ollama disagreeing with the rocm install.
(output shortened)
```
>>> Hi!
©########################################################... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2720/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3385 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3385/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3385/comments | https://api.github.com/repos/ollama/ollama/issues/3385/events | https://github.com/ollama/ollama/issues/3385 | 2,212,665,877 | I_kwDOJ0Z1Ps6D4poV | 3,385 | Model not found | {
"login": "qmauret",
"id": 17746331,
"node_id": "MDQ6VXNlcjE3NzQ2MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17746331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmauret",
"html_url": "https://github.com/qmauret",
"followers_url": "https://api.github.com/users/qmaure... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-03-28T08:45:48 | 2024-03-28T14:13:37 | 2024-03-28T14:13:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying to access a model running locally on MacOS from a local Linux docker image and having this error "model 'mistral' not found, try pulling it first". I have followed @AdvancedAssistiveTech [comment](https://github.com/ollama/ollama/issues/1783#issuecomment-1877276553) but setting env... | {
"login": "qmauret",
"id": 17746331,
"node_id": "MDQ6VXNlcjE3NzQ2MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/17746331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qmauret",
"html_url": "https://github.com/qmauret",
"followers_url": "https://api.github.com/users/qmaure... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3385/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2425 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2425/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2425/comments | https://api.github.com/repos/ollama/ollama/issues/2425/events | https://github.com/ollama/ollama/issues/2425 | 2,126,765,600 | I_kwDOJ0Z1Ps5-w94g | 2,425 | OpenAI API 403 error with 'Origin' http request header | {
"login": "wizd",
"id": 2835415,
"node_id": "MDQ6VXNlcjI4MzU0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizd",
"html_url": "https://github.com/wizd",
"followers_url": "https://api.github.com/users/wizd/followers",
... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-02-09T09:28:20 | 2024-02-09T14:06:46 | 2024-02-09T14:06:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, gratz on OpenAI API release! My life is much easier for now.
When testing the API I found that when the browser extension sends an 'Origin' header, the API always returns a 403 error immediately, like below:
```
curl http://localhost:5310/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Ori... | {
"login": "wizd",
"id": 2835415,
"node_id": "MDQ6VXNlcjI4MzU0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizd",
"html_url": "https://github.com/wizd",
"followers_url": "https://api.github.com/users/wizd/followers",
... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2425/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2505 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2505/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2505/comments | https://api.github.com/repos/ollama/ollama/issues/2505/events | https://github.com/ollama/ollama/issues/2505 | 2,135,344,748 | I_kwDOJ0Z1Ps5_RsZs | 2,505 | How do I specify parameters when launching ollama from command line? | {
"login": "dtp555-1212",
"id": 13024057,
"node_id": "MDQ6VXNlcjEzMDI0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13024057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtp555-1212",
"html_url": "https://github.com/dtp555-1212",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 5 | 2024-02-14T23:02:07 | 2024-12-09T00:48:07 | 2024-02-15T06:19:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I saw something online that said to try ollama run llama2:13b -temperature 0.0 but that does not work. I am also interested in setting the seed, so rerunning will do the same process rather than doing something different each time. (e.g. on a classification task, sometimes it says valid/invalid, sometimes is says corr... | {
"login": "dtp555-1212",
"id": 13024057,
"node_id": "MDQ6VXNlcjEzMDI0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/13024057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtp555-1212",
"html_url": "https://github.com/dtp555-1212",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2505/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2505/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2542 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2542/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2542/comments | https://api.github.com/repos/ollama/ollama/issues/2542/events | https://github.com/ollama/ollama/pull/2542 | 2,138,980,436 | PR_kwDOJ0Z1Ps5nG3H2 | 2,542 | fix: chat system prompting overrides | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 1 | 2024-02-16T16:44:11 | 2024-02-17T16:36:44 | 2024-02-16T19:42:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2542",
"html_url": "https://github.com/ollama/ollama/pull/2542",
"diff_url": "https://github.com/ollama/ollama/pull/2542.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2542.patch",
"merged_at": "2024-02-16T19:42:43"
} | This change fixes two more system message related issues with the CLI and message templates.
- When `/set system ...` is run multiple times in the CLI, use only the most recent system message rather than adding multiple system messages to the history.
- Do not add the model's default message as a first message when a... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2542/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3264 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3264/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3264/comments | https://api.github.com/repos/ollama/ollama/issues/3264/events | https://github.com/ollama/ollama/issues/3264 | 2,196,937,196 | I_kwDOJ0Z1Ps6C8pns | 3,264 | "CUDA error: out of memory" after random number of API requests | {
"login": "RandomGitUser321",
"id": 27916165,
"node_id": "MDQ6VXNlcjI3OTE2MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/27916165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RandomGitUser321",
"html_url": "https://github.com/RandomGitUser321",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-03-20T08:26:16 | 2024-06-22T00:02:38 | 2024-06-22T00:02:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I run a workflow in ComfyUI that makes calls to Ollama server's API to generate prompts or analyze images. It works fine, normally, but occasionally I get CUDA errors that then make me have to restart the server. It's kind of disruptive to my workflow because I have to check back every 5-10 minu... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3264/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1915 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1915/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1915/comments | https://api.github.com/repos/ollama/ollama/issues/1915/events | https://github.com/ollama/ollama/pull/1915 | 2,075,481,288 | PR_kwDOJ0Z1Ps5jvoP4 | 1,915 | Bump llama.cpp to b1842 and add new cuda lib dep | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 4 | 2024-01-11T00:48:36 | 2024-01-16T21:36:52 | 2024-01-16T21:36:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1915",
"html_url": "https://github.com/ollama/ollama/pull/1915",
"diff_url": "https://github.com/ollama/ollama/pull/1915.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1915.patch",
"merged_at": "2024-01-16T21:36:49"
} | Upstream llama.cpp has added a new dependency with the NVIDIA CUDA Driver Libraries (libcuda.so) which is part of the driver distribution, not the general cuda libraries, and is not available as an archive, so we can not statically link it. This may introduce some additional compatibility challenges which we'll need t... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1915/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1338 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1338/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1338/comments | https://api.github.com/repos/ollama/ollama/issues/1338/events | https://github.com/ollama/ollama/issues/1338 | 2,019,872,318 | I_kwDOJ0Z1Ps54ZM4- | 1,338 | response with forever loop <s> | {
"login": "yangboz",
"id": 481954,
"node_id": "MDQ6VXNlcjQ4MTk1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/481954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangboz",
"html_url": "https://github.com/yangboz",
"followers_url": "https://api.github.com/users/yangboz/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2023-12-01T01:39:12 | 2024-03-12T20:26:20 | 2024-03-12T20:26:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | when testing llama2 or other models pulled from https://ollama.ai/library
after successfully running the model with mixed languages, we can sometimes see "<s>" displayed on the console forever, resulting in a blank loop.
any idea ? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1338/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5263 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5263/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5263/comments | https://api.github.com/repos/ollama/ollama/issues/5263/events | https://github.com/ollama/ollama/issues/5263 | 2,371,487,482 | I_kwDOJ0Z1Ps6NWgb6 | 5,263 | Add a parameter to prohibit adding services to ` systemictl '` | {
"login": "wszgrcy",
"id": 9607121,
"node_id": "MDQ6VXNlcjk2MDcxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9607121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wszgrcy",
"html_url": "https://github.com/wszgrcy",
"followers_url": "https://api.github.com/users/wszgrcy/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-06-25T02:23:22 | 2024-06-25T02:23:52 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In Linux, sometimes it is necessary to manually call for some debugging
But the saving location of the model is different from the location where the service is automatically started when manually started
And sometimes I don't want to automatically start taking up space
So can we` https://ollama.com/install.sh `Add ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5263/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4123 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4123/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4123/comments | https://api.github.com/repos/ollama/ollama/issues/4123/events | https://github.com/ollama/ollama/pull/4123 | 2,277,317,785 | PR_kwDOJ0Z1Ps5udF50 | 4,123 | Feat: Add `OLLAMA_LOAD_TIMEOUT` env variable | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 4 | 2024-05-03T09:47:50 | 2024-05-24T05:57:14 | 2024-05-23T21:10:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4123",
"html_url": "https://github.com/ollama/ollama/pull/4123",
"diff_url": "https://github.com/ollama/ollama/pull/4123.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4123.patch",
"merged_at": null
} | Closes #3940
For certain hardware setups and models, the offloading to the GPU can take a lot of time and the user can hit a timeout. This PR makes the timeout configurable via the `OLLAMA_LOAD_TIMEOUT` env variable, to be provided in seconds.
@dhiltgen I added a subsection in the FAQ, since I was not sure where... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4123/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1677 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1677/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1677/comments | https://api.github.com/repos/ollama/ollama/issues/1677/events | https://github.com/ollama/ollama/pull/1677 | 2,054,299,892 | PR_kwDOJ0Z1Ps5irBgZ | 1,677 | update where are models stored q | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-12-22T17:49:12 | 2023-12-22T17:56:29 | 2023-12-22T17:56:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1677",
"html_url": "https://github.com/ollama/ollama/pull/1677",
"diff_url": "https://github.com/ollama/ollama/pull/1677.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1677.patch",
"merged_at": "2023-12-22T17:56:28"
} | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1677/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6533 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6533/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6533/comments | https://api.github.com/repos/ollama/ollama/issues/6533/events | https://github.com/ollama/ollama/issues/6533 | 2,490,447,813 | I_kwDOJ0Z1Ps6UcTfF | 6,533 | /api/embeddings returning 404 | {
"login": "jwstanwick",
"id": 48192612,
"node_id": "MDQ6VXNlcjQ4MTkyNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48192612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwstanwick",
"html_url": "https://github.com/jwstanwick",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-27T22:02:38 | 2024-08-28T20:42:08 | 2024-08-28T20:42:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am on an M3 Mac. I am running Ollama using the installer, not on docker. When running `curl localhost:11434/api/embeddings`, Ollama returns `404 page not found`. Other api calls such as `pull` and `show` work as intended. The output from my Ollama logs is as follows:
```
[GIN] 2024/08/27 -... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6533/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6533/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1409 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1409/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1409/comments | https://api.github.com/repos/ollama/ollama/issues/1409/events | https://github.com/ollama/ollama/pull/1409 | 2,029,511,517 | PR_kwDOJ0Z1Ps5hWzWq | 1,409 | Simple chat example | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-12-06T22:36:33 | 2023-12-06T23:49:46 | 2023-12-06T23:49:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1409",
"html_url": "https://github.com/ollama/ollama/pull/1409",
"diff_url": "https://github.com/ollama/ollama/pull/1409.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1409.patch",
"merged_at": "2023-12-06T23:49:46"
} | Simple example using Bruce's chat endpoint | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1409/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1931 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1931/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1931/comments | https://api.github.com/repos/ollama/ollama/issues/1931/events | https://github.com/ollama/ollama/pull/1931 | 2,077,452,237 | PR_kwDOJ0Z1Ps5j2c5K | 1,931 | Add semantic kernel to Readme | {
"login": "eavanvalkenburg",
"id": 13749212,
"node_id": "MDQ6VXNlcjEzNzQ5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13749212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eavanvalkenburg",
"html_url": "https://github.com/eavanvalkenburg",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 1 | 2024-01-11T19:37:14 | 2024-01-11T19:45:05 | 2024-01-11T19:40:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1931",
"html_url": "https://github.com/ollama/ollama/pull/1931",
"diff_url": "https://github.com/ollama/ollama/pull/1931.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1931.patch",
"merged_at": "2024-01-11T19:40:24"
} | We just released support for Ollama in the Python version of Semantic Kernel, this links directly there. Would love to move this to a package approach instead of using a http request, but that can be done once your work on that is completed as mentioned here #1857. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1931/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1931/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/440 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/440/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/440/comments | https://api.github.com/repos/ollama/ollama/issues/440/events | https://github.com/ollama/ollama/pull/440 | 1,871,700,580 | PR_kwDOJ0Z1Ps5ZCY-U | 440 | build: add Docker Compose file and service for running Ollama with Do… | {
"login": "blogbin",
"id": 1687732,
"node_id": "MDQ6VXNlcjE2ODc3MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1687732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blogbin",
"html_url": "https://github.com/blogbin",
"followers_url": "https://api.github.com/users/blogbin/... | [] | closed | false | null | [] | null | 3 | 2023-08-29T13:38:50 | 2023-11-29T21:22:41 | 2023-11-29T21:22:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/440",
"html_url": "https://github.com/ollama/ollama/pull/440",
"diff_url": "https://github.com/ollama/ollama/pull/440.diff",
"patch_url": "https://github.com/ollama/ollama/pull/440.patch",
"merged_at": null
} |
- Add Docker Compose file for running Ollama with Docker
- Create a new file `docker-compose.yaml`
- Define the `ollama` service in the Docker Compose file
- Build the image and set the image name to `jmorganca/ollama`
- Mount the `runtime/ollama` directory to `/home/ollama` in the container | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/440/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1909 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1909/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1909/comments | https://api.github.com/repos/ollama/ollama/issues/1909/events | https://github.com/ollama/ollama/pull/1909 | 2,075,273,117 | PR_kwDOJ0Z1Ps5ju6eP | 1,909 | Adds `HEALTHCHECK` to `Dockerfile` | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 4 | 2024-01-10T21:50:43 | 2024-10-22T19:56:13 | 2024-02-20T02:53:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1909",
"html_url": "https://github.com/ollama/ollama/pull/1909",
"diff_url": "https://github.com/ollama/ollama/pull/1909.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1909.patch",
"merged_at": null
} | Adds `HEALTHCHECK` to the `Dockerfile` for a fully functioning status
- Confirmed proper check in https://github.com/jmorganca/ollama/issues/1378
- Enables the below (meaningful and continually updated STATUS)
```bash
> docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama def456
abc123
> docker co... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1909/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4191 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4191/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4191/comments | https://api.github.com/repos/ollama/ollama/issues/4191/events | https://github.com/ollama/ollama/issues/4191 | 2,279,901,185 | I_kwDOJ0Z1Ps6H5IgB | 4,191 | applications on Windows | {
"login": "win10ogod",
"id": 125795763,
"node_id": "U_kgDOB399sw",
"avatar_url": "https://avatars.githubusercontent.com/u/125795763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win10ogod",
"html_url": "https://github.com/win10ogod",
"followers_url": "https://api.github.com/users/win10o... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXU... | closed | false | null | [] | null | 1 | 2024-05-06T02:26:01 | 2024-05-06T22:53:37 | 2024-05-06T22:53:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can applications on Windows be updated to the latest version? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4191/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6510 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6510/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6510/comments | https://api.github.com/repos/ollama/ollama/issues/6510/events | https://github.com/ollama/ollama/issues/6510 | 2,486,301,879 | I_kwDOJ0Z1Ps6UMfS3 | 6,510 | Performing GET request to registry.ollama.ai/v2/ returns 404 page not found | {
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 3 | 2024-08-26T08:45:02 | 2024-08-26T10:59:22 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Background:
Kubernetes 1.31 introduced a new feature: [Read-Only Volumes Based on OCI Artifacts](https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/). I believe this feature could be very useful for deploying a dedicated model alongside Ollama in Kubernetes.
The curr... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6510/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6794 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6794/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6794/comments | https://api.github.com/repos/ollama/ollama/issues/6794/events | https://github.com/ollama/ollama/issues/6794 | 2,524,741,840 | I_kwDOJ0Z1Ps6WfIDQ | 6,794 | Wrong response at math question! | {
"login": "lsalamon",
"id": 235938,
"node_id": "MDQ6VXNlcjIzNTkzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/235938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsalamon",
"html_url": "https://github.com/lsalamon",
"followers_url": "https://api.github.com/users/lsalamo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-13T12:36:50 | 2024-09-18T20:17:06 | 2024-09-17T17:57:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried this question with llama 3.1:8b and it hallucinates in its response:
Can you explain why this mathematical equality is true: (SQR(2)*2 = (SQR(2))^3
### OS Windows
### GPU none
### CPU AMD Ryzen 9 5900X 12-Core Processor
### 32 Gb memory
### Ollama version ollama version is ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6794/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7731 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7731/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7731/comments | https://api.github.com/repos/ollama/ollama/issues/7731/events | https://github.com/ollama/ollama/pull/7731 | 2,670,502,348 | PR_kwDOJ0Z1Ps6CU1_h | 7,731 | update the docs | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 2 | 2024-11-19T02:16:08 | 2024-11-19T05:45:15 | 2024-11-19T05:17:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7731",
"html_url": "https://github.com/ollama/ollama/pull/7731",
"diff_url": "https://github.com/ollama/ollama/pull/7731.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7731.patch",
"merged_at": "2024-11-19T05:17:38"
} | Update the API docs with:
* how to quantize a model
* change "name" to "model"
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7731/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1582/comments | https://api.github.com/repos/ollama/ollama/issues/1582/events | https://github.com/ollama/ollama/issues/1582 | 2,046,680,361 | I_kwDOJ0Z1Ps55_d0p | 1,582 | ollama crashes when calling /api/generate with invalid duration message | {
"login": "michaelgloeckner",
"id": 56082327,
"node_id": "MDQ6VXNlcjU2MDgyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelgloeckner",
"html_url": "https://github.com/michaelgloeckner",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 6 | 2023-12-18T13:35:09 | 2023-12-20T09:31:10 | 2023-12-20T09:31:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I run Ollama in a k8s cluster and upgraded from 0.1.9 to 0.1.16 to get the mixtral fix.
The error first occurred with version 0.1.14.
When I call /api/generate, Ollama stops.
Looking into the Ollama logs, I see the following message:
panic: time: invalid duration "-6414107897391086.000000ms"
More log... | {
"login": "michaelgloeckner",
"id": 56082327,
"node_id": "MDQ6VXNlcjU2MDgyMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/56082327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelgloeckner",
"html_url": "https://github.com/michaelgloeckner",
"followers_url": "https://... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1582/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5836 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5836/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5836/comments | https://api.github.com/repos/ollama/ollama/issues/5836/events | https://github.com/ollama/ollama/issues/5836 | 2,421,680,280 | I_kwDOJ0Z1Ps6QV-iY | 5,836 | Add restrictive license indicator | {
"login": "Darin755",
"id": 54958995,
"node_id": "MDQ6VXNlcjU0OTU4OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/54958995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darin755",
"html_url": "https://github.com/Darin755",
"followers_url": "https://api.github.com/users/Dar... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-07-22T00:32:55 | 2024-07-22T00:32:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have noticed that some models such as llama 3 and gemma have restrictive licenses that add arbitrary limitations. I think the licensing of models is not clear yet but models that do not allow use, distribution and modification for any purpose should be labeled with a red restrictive license indicator. This will make ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5836/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3774 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3774/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3774/comments | https://api.github.com/repos/ollama/ollama/issues/3774/events | https://github.com/ollama/ollama/issues/3774 | 2,254,500,440 | I_kwDOJ0Z1Ps6GYPJY | 3,774 | Error: llama runner process no longer running: 3221225785 | {
"login": "pheonixravi",
"id": 10174848,
"node_id": "MDQ6VXNlcjEwMTc0ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10174848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pheonixravi",
"html_url": "https://github.com/pheonixravi",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 29 | 2024-04-20T10:57:10 | 2024-08-06T16:50:08 | 2024-05-07T15:48:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
[server.log](https://github.com/ollama/ollama/files/15047891/server.log)
### Unable to run mistral or any other modal locally using ollama
C:\Users\ravik>ollama list
NAME ID SIZE MODIFIED
mistral:latest 61e88e884507 4.1 GB About an hour ago
C:\Users\ravik>ollama run mistral
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3774/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1379 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1379/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1379/comments | https://api.github.com/repos/ollama/ollama/issues/1379/events | https://github.com/ollama/ollama/pull/1379 | 2,024,707,757 | PR_kwDOJ0Z1Ps5hGZxT | 1,379 | Added `docker-compose.yaml` | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 5 | 2023-12-04T20:48:36 | 2025-01-21T00:16:13 | 2025-01-21T00:16:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1379",
"html_url": "https://github.com/ollama/ollama/pull/1379",
"diff_url": "https://github.com/ollama/ollama/pull/1379.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1379.patch",
"merged_at": null
} | Revives and improves https://github.com/jmorganca/ollama/pull/440 to close https://github.com/jmorganca/ollama/issues/546. | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1379/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2851 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2851/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2851/comments | https://api.github.com/repos/ollama/ollama/issues/2851/events | https://github.com/ollama/ollama/issues/2851 | 2,162,480,915 | I_kwDOJ0Z1Ps6A5NcT | 2,851 | Troubleshooting Dify Connection to Ollama Service: CPU vs. GPU Differences | {
"login": "xiaotianfotos",
"id": 25025807,
"node_id": "MDQ6VXNlcjI1MDI1ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/25025807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaotianfotos",
"html_url": "https://github.com/xiaotianfotos",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 14 | 2024-03-01T02:48:36 | 2024-04-24T00:54:20 | 2024-03-12T07:29:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | my problem is
I'm using Dify to connect to the Ollama service.
When using the Ollama API from Dify, the model is always **loaded into CPU memory**,
curl http://localhost:11434/api/generate -d '{
"model": "qwen:14b",
"prompt": "Why is the sky blue?"
}'
**loaded into GPU**
log description
1st run :ollama ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2851/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1246 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1246/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1246/comments | https://api.github.com/repos/ollama/ollama/issues/1246/events | https://github.com/ollama/ollama/issues/1246 | 2,007,033,650 | I_kwDOJ0Z1Ps53oOcy | 1,246 | Status endpoint needed | {
"login": "ex3ndr",
"id": 400659,
"node_id": "MDQ6VXNlcjQwMDY1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/400659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ex3ndr",
"html_url": "https://github.com/ex3ndr",
"followers_url": "https://api.github.com/users/ex3ndr/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 2 | 2023-11-22T19:58:39 | 2024-11-06T19:05:16 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello!
I found a non-urgent issue in the API that makes UX much worse when working with models from the web or with remote servers, because we can't see the current state of Ollama: is it downloading a model? did it fail downloading the model? is it doing inference? how much RAM/VRAM is used? Also lack of such status endpoint i...
"url": "https://api.github.com/repos/ollama/ollama/issues/1246/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1246/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/430 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/430/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/430/comments | https://api.github.com/repos/ollama/ollama/issues/430/events | https://github.com/ollama/ollama/issues/430 | 1,868,470,704 | I_kwDOJ0Z1Ps5vXpmw | 430 | How to clear history without deleting the model? | {
"login": "TheGardenMan",
"id": 60105172,
"node_id": "MDQ6VXNlcjYwMTA1MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/60105172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheGardenMan",
"html_url": "https://github.com/TheGardenMan",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2023-08-27T12:02:25 | 2023-08-28T10:55:31 | 2023-08-28T10:55:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "TheGardenMan",
"id": 60105172,
"node_id": "MDQ6VXNlcjYwMTA1MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/60105172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheGardenMan",
"html_url": "https://github.com/TheGardenMan",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/430/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1277 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1277/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1277/comments | https://api.github.com/repos/ollama/ollama/issues/1277/events | https://github.com/ollama/ollama/issues/1277 | 2,010,952,610 | I_kwDOJ0Z1Ps533LOi | 1,277 | Using Autogen with ollama (help wanted) | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [] | closed | false | null | [] | null | 6 | 2023-11-26T08:34:29 | 2024-02-14T22:16:35 | 2024-02-14T17:24:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've been trying to use autogen with ollama.
To do this I've run
```shell
litellm --model ollama/alfred
```
which in theory is supposed to provide an OpenAI-compatible API port that talks to ollama (and it seems to work).
My simple code to get started follows:
`
#import autogen
from autogen import AssistantAgent, UserProxyAgen... | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1277/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1377 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1377/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1377/comments | https://api.github.com/repos/ollama/ollama/issues/1377/events | https://github.com/ollama/ollama/pull/1377 | 2,024,585,216 | PR_kwDOJ0Z1Ps5hF-cY | 1,377 | update for qwen | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-04T19:38:17 | 2023-12-06T20:31:52 | 2023-12-06T20:31:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1377",
"html_url": "https://github.com/ollama/ollama/pull/1377",
"diff_url": "https://github.com/ollama/ollama/pull/1377.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1377.patch",
"merged_at": "2023-12-06T20:31:51"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1377/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2919 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2919/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2919/comments | https://api.github.com/repos/ollama/ollama/issues/2919/events | https://github.com/ollama/ollama/issues/2919 | 2,167,378,823 | I_kwDOJ0Z1Ps6BL5OH | 2,919 | Loading model into memory instead of generating chat completion. | {
"login": "RapierXbox",
"id": 65401386,
"node_id": "MDQ6VXNlcjY1NDAxMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65401386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RapierXbox",
"html_url": "https://github.com/RapierXbox",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-03-04T17:34:05 | 2024-03-04T18:53:17 | 2024-03-04T18:53:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I am trying to do a chat completion with llama2 and it seems like it's loading up a model instead of generating a chat completion. `{
"model": "llama2,
"message": [
{
"role": "user",
"content": "hello"
}
],
"stream": "false"
}` and its returning nothing. | {
"login": "RapierXbox",
"id": 65401386,
"node_id": "MDQ6VXNlcjY1NDAxMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/65401386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RapierXbox",
"html_url": "https://github.com/RapierXbox",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2919/timeline | null | completed | false |
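For reference alongside the record above, here is a sketch of a corrected request body — not Ollama's documented example, but a well-formed payload for the `/api/chat` endpoint. The pasted body appears to have three problems: a missing closing quote after "llama2", the field name "message" instead of "messages", and "stream" given as the string "false" rather than the JSON boolean false.

```python
import json

# Well-formed chat payload (a sketch; assumes a local Ollama server on the
# default port). Note the plural "messages" field and the boolean stream flag.
payload = {
    "model": "llama2",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": False,
}

body = json.dumps(payload)
# To send it against a local server:
#   curl http://localhost:11434/api/chat -d '<body>'
print(body)
```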
https://api.github.com/repos/ollama/ollama/issues/221 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/221/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/221/comments | https://api.github.com/repos/ollama/ollama/issues/221/events | https://github.com/ollama/ollama/pull/221 | 1,822,975,727 | PR_kwDOJ0Z1Ps5WeObQ | 221 | embed ggml-metal.metal | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-26T18:52:15 | 2023-07-28T00:24:43 | 2023-07-28T00:24:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/221",
"html_url": "https://github.com/ollama/ollama/pull/221",
"diff_url": "https://github.com/ollama/ollama/pull/221.diff",
"patch_url": "https://github.com/ollama/ollama/pull/221.patch",
"merged_at": "2023-07-28T00:24:42"
} | Use `//go:embed` to embed `ggml-metal.metal` and write it out to the right location in `init()` so llama.cpp can use it.
With this change, `ollama` can be served using `go run . serve` or `go install . && ~/go/bin/ollama serve`.
resolves #48 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/221/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3775 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3775/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3775/comments | https://api.github.com/repos/ollama/ollama/issues/3775/events | https://github.com/ollama/ollama/issues/3775 | 2,254,504,222 | I_kwDOJ0Z1Ps6GYQEe | 3,775 | Achieving Deterministic Output with Ollama | {
"login": "antonkratz",
"id": 8510296,
"node_id": "MDQ6VXNlcjg1MTAyOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8510296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonkratz",
"html_url": "https://github.com/antonkratz",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2024-04-20T11:08:34 | 2024-05-14T23:26:08 | 2024-05-14T23:26:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For a research project, I am interested in exploring the effect of different prompts. The problem is that when I change the prompt even slightly and get a different result, I am unable to say how much is due to the changed prompt and how much is due to random and pseudo-random effects because of concept... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3775/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3775/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6492 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6492/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6492/comments | https://api.github.com/repos/ollama/ollama/issues/6492/events | https://github.com/ollama/ollama/issues/6492 | 2,484,795,357 | I_kwDOJ0Z1Ps6UGvfd | 6,492 | Models drastically quality drop on `chat/completions` gateway | {
"login": "yaroslavyaroslav",
"id": 16612247,
"node_id": "MDQ6VXNlcjE2NjEyMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16612247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaroslavyaroslav",
"html_url": "https://github.com/yaroslavyaroslav",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-08-24T18:47:45 | 2024-10-29T11:46:39 | 2024-09-07T00:45:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Folks raised the following issue on my side (a frontend for ollama): https://github.com/yaroslavyaroslav/OpenAI-sublime-text/issues/57
In short, models respond with very low quality through my app.
1. I ran `export OLLAMA_DEBUG=1 && ollama serve`
2. r... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6492/timeline | null | completed | false |
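Context for reports like the one above: alongside its native `/api/chat` endpoint, Ollama exposes an OpenAI-compatible gateway at `/v1/chat/completions`. A minimal body for that endpoint is sketched below (the model name is illustrative); differences in sampling parameters between the two request formats are one thing to rule out when output quality differs.

```python
import json

# Minimal OpenAI-compatible chat payload for Ollama's /v1/chat/completions
# gateway (model name is illustrative).
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.7,
}

body = json.dumps(payload)
# curl http://localhost:11434/v1/chat/completions -d '<body>'
print(body)
```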
https://api.github.com/repos/ollama/ollama/issues/5086 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5086/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5086/comments | https://api.github.com/repos/ollama/ollama/issues/5086/events | https://github.com/ollama/ollama/issues/5086 | 2,355,946,611 | I_kwDOJ0Z1Ps6MbORz | 5,086 | `TextMonkey` model | {
"login": "insinfo",
"id": 12227024,
"node_id": "MDQ6VXNlcjEyMjI3MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12227024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insinfo",
"html_url": "https://github.com/insinfo",
"followers_url": "https://api.github.com/users/insinf... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-06-16T19:46:28 | 2024-06-18T11:37:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
In my quick tests on the demo, it seems to be the best document-understanding and OCR model I have ever tested. My current use case is that I have to identify the process code of 1,500,000 images manually (a challenging job), and I am wondering if this model will be able to do this for me.
I have to identify from an ima... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5086/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4163 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4163/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4163/comments | https://api.github.com/repos/ollama/ollama/issues/4163/events | https://github.com/ollama/ollama/issues/4163 | 2,279,325,554 | I_kwDOJ0Z1Ps6H279y | 4,163 | llava broke in new version v0.1.33 | {
"login": "VideoFX",
"id": 47264978,
"node_id": "MDQ6VXNlcjQ3MjY0OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/47264978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VideoFX",
"html_url": "https://github.com/VideoFX",
"followers_url": "https://api.github.com/users/VideoF... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 14 | 2024-05-05T04:50:49 | 2024-05-17T10:01:00 | 2024-05-06T23:17:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama v0.1.33
Intel Core i9 14900K 64GB ram
Nvidia RTX 4070
llava only works for the first inference attempt. All attempts afterwards make up strange descriptions unrelated to the image, almost as if it's looking at a different picture.
This also happens with llava:13b. It will work the... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4163/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2230 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2230/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2230/comments | https://api.github.com/repos/ollama/ollama/issues/2230/events | https://github.com/ollama/ollama/issues/2230 | 2,103,508,787 | I_kwDOJ0Z1Ps59YP8z | 2,230 | Ollama (llama2) running in VM Box on Ubuntu but /api/generate not working | {
"login": "Marvin-VW",
"id": 82050751,
"node_id": "MDQ6VXNlcjgyMDUwNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/82050751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marvin-VW",
"html_url": "https://github.com/Marvin-VW",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-01-27T11:09:00 | 2024-06-25T06:49:15 | 2024-01-27T13:08:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey, I pulled llama2 as described and I'm running it with `ollama run llama2`.
It is working inside the terminal with no errors, but as soon as I try to reach it via
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
it just says:
`{"error":"model "l... | {
"login": "Marvin-VW",
"id": 82050751,
"node_id": "MDQ6VXNlcjgyMDUwNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/82050751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marvin-VW",
"html_url": "https://github.com/Marvin-VW",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2230/timeline | null | completed | false |
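One possible cause of errors like the truncated one in the record above is the JSON body being mangled by shell quoting before it reaches the server. Building the curl invocation programmatically sidesteps that; the sketch below uses the endpoint and model from the issue.

```python
import json
import shlex

# Serialize the payload with json, then let shlex.quote handle the shell
# escaping, so the JSON arrives at the server exactly as written.
payload = {"model": "llama2", "prompt": "Why is the sky blue?"}
cmd = "curl http://localhost:11434/api/generate -d " + shlex.quote(json.dumps(payload))
print(cmd)
```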
https://api.github.com/repos/ollama/ollama/issues/7628 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7628/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7628/comments | https://api.github.com/repos/ollama/ollama/issues/7628/events | https://github.com/ollama/ollama/pull/7628 | 2,651,706,315 | PR_kwDOJ0Z1Ps6BnJP2 | 7,628 | test PR | {
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2024-11-12T10:15:01 | 2024-11-12T17:49:21 | 2024-11-12T17:49:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7628",
"html_url": "https://github.com/ollama/ollama/pull/7628",
"diff_url": "https://github.com/ollama/ollama/pull/7628.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7628.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7628/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/446 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/446/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/446/comments | https://api.github.com/repos/ollama/ollama/issues/446/events | https://github.com/ollama/ollama/pull/446 | 1,875,075,893 | PR_kwDOJ0Z1Ps5ZNuiV | 446 | Add a warning for if digests are missing | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [] | closed | false | null | [] | null | 1 | 2023-08-31T08:50:24 | 2023-08-31T12:16:05 | 2023-08-31T12:16:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/446",
"html_url": "https://github.com/ollama/ollama/pull/446",
"diff_url": "https://github.com/ollama/ollama/pull/446.diff",
"patch_url": "https://github.com/ollama/ollama/pull/446.patch",
"merged_at": null
} | null | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/446/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6636 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6636/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6636/comments | https://api.github.com/repos/ollama/ollama/issues/6636/events | https://github.com/ollama/ollama/issues/6636 | 2,505,843,666 | I_kwDOJ0Z1Ps6VXCPS | 6,636 | Install script not reporting issue with systemd | {
"login": "cfjedimaster",
"id": 393660,
"node_id": "MDQ6VXNlcjM5MzY2MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/393660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfjedimaster",
"html_url": "https://github.com/cfjedimaster",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-09-04T16:39:35 | 2024-11-18T23:02:42 | 2024-11-18T23:02:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is based on issue #6204 - I ran the installer in Ubuntu running under WSL (Windows Subsystem for Linux). The install ran perfectly fine with no error messages. However, systemd wasn't enabled and therefore Ollama was not set up as a service. This bug report is basically to state that the ...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6636/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6636/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8394 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8394/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8394/comments | https://api.github.com/repos/ollama/ollama/issues/8394/events | https://github.com/ollama/ollama/issues/8394 | 2,782,329,407 | I_kwDOJ0Z1Ps6l1vo_ | 8,394 | The same model could load all onto the GPU last year. Today, after upgrading to ollama, I found that it cannot be loaded onto the GPU all at once. | {
"login": "21307369",
"id": 47931342,
"node_id": "MDQ6VXNlcjQ3OTMxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47931342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/21307369",
"html_url": "https://github.com/21307369",
"followers_url": "https://api.github.com/users/213... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2025-01-12T07:26:24 | 2025-01-14T21:27:11 | 2025-01-14T21:26:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The situation is as follows: with Ollama version 0.2.1, codegeex4:latest could be loaded fully onto the GPU. However, after upgrading to 0.5.4 today, I am unable to load it fully onto the GPU. My GPU is a 6750 GRE with 12GB of VRAM. I found something wrong with this... The size of this model is 5.5G. Observing the r...
"login": "21307369",
"id": 47931342,
"node_id": "MDQ6VXNlcjQ3OTMxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47931342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/21307369",
"html_url": "https://github.com/21307369",
"followers_url": "https://api.github.com/users/213... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8394/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/482 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/482/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/482/comments | https://api.github.com/repos/ollama/ollama/issues/482/events | https://github.com/ollama/ollama/pull/482 | 1,885,279,069 | PR_kwDOJ0Z1Ps5Zv_OU | 482 | [docs] Improve build instructions | {
"login": "apepper",
"id": 86275,
"node_id": "MDQ6VXNlcjg2Mjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/86275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apepper",
"html_url": "https://github.com/apepper",
"followers_url": "https://api.github.com/users/apepper/follower... | [] | closed | false | null | [] | null | 1 | 2023-09-07T07:27:13 | 2023-09-07T10:59:45 | 2023-09-07T10:43:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/482",
"html_url": "https://github.com/ollama/ollama/pull/482",
"diff_url": "https://github.com/ollama/ollama/pull/482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/482.patch",
"merged_at": "2023-09-07T10:43:26"
} | Go is required and not installed by default. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/482/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/482/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6281 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6281/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6281/comments | https://api.github.com/repos/ollama/ollama/issues/6281/events | https://github.com/ollama/ollama/pull/6281 | 2,457,517,648 | PR_kwDOJ0Z1Ps537WSt | 6,281 | docs(tools): add ingest | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [] | closed | false | null | [] | null | 1 | 2024-08-09T09:34:08 | 2024-08-14T22:24:23 | 2024-08-14T22:24:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6281",
"html_url": "https://github.com/ollama/ollama/pull/6281",
"diff_url": "https://github.com/ollama/ollama/pull/6281.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6281.patch",
"merged_at": null
} | Add `ingest` to the list of Ollama-integrated tools: https://github.com/sammcj/ingest


| {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6281/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6281/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5368 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5368/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5368/comments | https://api.github.com/repos/ollama/ollama/issues/5368/events | https://github.com/ollama/ollama/pull/5368 | 2,381,354,023 | PR_kwDOJ0Z1Ps5z8QF9 | 5,368 | Do not shift context for sliding window models | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-06-29T00:49:26 | 2024-06-29T02:39:33 | 2024-06-29T02:39:31 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5368",
"html_url": "https://github.com/ollama/ollama/pull/5368",
"diff_url": "https://github.com/ollama/ollama/pull/5368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5368.patch",
"merged_at": "2024-06-29T02:39:31"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5368/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6472 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6472/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6472/comments | https://api.github.com/repos/ollama/ollama/issues/6472/events | https://github.com/ollama/ollama/issues/6472 | 2,482,632,801 | I_kwDOJ0Z1Ps6T-fhh | 6,472 | 404 one download | {
"login": "vorticalbox",
"id": 10886065,
"node_id": "MDQ6VXNlcjEwODg2MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/10886065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vorticalbox",
"html_url": "https://github.com/vorticalbox",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-08-23T08:40:17 | 2024-12-02T21:54:11 | 2024-12-02T21:54:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
https://ollama.com/download/ollama-linux-amd64.tgz is returning a 404
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6472/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4267 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4267/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4267/comments | https://api.github.com/repos/ollama/ollama/issues/4267/events | https://github.com/ollama/ollama/issues/4267 | 2,286,575,046 | I_kwDOJ0Z1Ps6ISl3G | 4,267 | ollama_llama_server is still running after exiting via SIGINT | {
"login": "RobbyCBennett",
"id": 22121365,
"node_id": "MDQ6VXNlcjIyMTIxMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22121365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobbyCBennett",
"html_url": "https://github.com/RobbyCBennett",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-05-08T22:18:30 | 2024-05-09T22:58:46 | 2024-05-09T22:58:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I kill `ollama serve` with Control-C from the keyboard, it closes `ollama_llama_server` and everything exits properly. However, when I kill it another way, such as with `kill -2`, `ollama_llama_server` keeps running afterward.
Replicate:
1. `OLLAMA_HOST=localhost:6767 ollama serve &`... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4267/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1459 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1459/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1459/comments | https://api.github.com/repos/ollama/ollama/issues/1459/events | https://github.com/ollama/ollama/issues/1459 | 2,034,781,666 | I_kwDOJ0Z1Ps55SE3i | 1,459 | LiteLLM does not forward temperature to Ollama models | {
"login": "scpedicini",
"id": 2040540,
"node_id": "MDQ6VXNlcjIwNDA1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scpedicini",
"html_url": "https://github.com/scpedicini",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2023-12-11T04:04:19 | 2023-12-11T04:06:01 | 2023-12-11T04:06:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There seems to be an issue with temperature setting not being properly passed through LiteLLM to Ollama.
When running against the Ollama API directly
```bash
curl http://localhost:11434/api/chat -d '{
"model": "mistral",
"messages": [
{
"role": "system",
"content": "You are a h... | {
"login": "scpedicini",
"id": 2040540,
"node_id": "MDQ6VXNlcjIwNDA1NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2040540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scpedicini",
"html_url": "https://github.com/scpedicini",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1459/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3395 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3395/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3395/comments | https://api.github.com/repos/ollama/ollama/issues/3395/events | https://github.com/ollama/ollama/issues/3395 | 2,214,181,524 | I_kwDOJ0Z1Ps6D-bqU | 3,395 | Print better error message when a new version of Ollama is required | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-03-28T21:22:13 | 2024-04-19T15:41:38 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Today Ollama prints hard-to-recognize error messages when a model isn't supported because a new version of Ollama is required
### What did you expect to see?
An error along the lines of: `Error: a new version of Ollama is required to run this model`. Even better would be to include the version... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3395/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4149 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4149/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4149/comments | https://api.github.com/repos/ollama/ollama/issues/4149/events | https://github.com/ollama/ollama/pull/4149 | 2,278,888,122 | PR_kwDOJ0Z1Ps5uiW65 | 4,149 | fix: format go code | {
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-04T09:36:38 | 2024-05-06T10:57:45 | 2024-05-05T23:08:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4149",
"html_url": "https://github.com/ollama/ollama/pull/4149",
"diff_url": "https://github.com/ollama/ollama/pull/4149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4149.patch",
"merged_at": "2024-05-05T23:08:09"
} | Hi, I find some go code is not formatted, So I run `gofmt -w .` to format them. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4149/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8584 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8584/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8584/comments | https://api.github.com/repos/ollama/ollama/issues/8584/events | https://github.com/ollama/ollama/issues/8584 | 2,811,112,982 | I_kwDOJ0Z1Ps6nji4W | 8,584 | Error: "not authorized to push" | {
"login": "NLP-man",
"id": 174748562,
"node_id": "U_kgDOCmpzkg",
"avatar_url": "https://avatars.githubusercontent.com/u/174748562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NLP-man",
"html_url": "https://github.com/NLP-man",
"followers_url": "https://api.github.com/users/NLP-man/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-25T18:18:43 | 2025-01-25T20:47:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
At first I pulled a model using Ollama (`ollama pull qwen:0.5b`), then I wanted to push this model using the command ```ollama push qwen:0.5b```, but I got this error:
pushing fad2a06e4cc7... 100% ▕████████████████████████████████████████████████████████████████████████████████████████... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8584/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2648/comments | https://api.github.com/repos/ollama/ollama/issues/2648/events | https://github.com/ollama/ollama/issues/2648 | 2,147,548,858 | I_kwDOJ0Z1Ps6AAP66 | 2,648 | Windows Defender alert on update to 0.1.26 | {
"login": "OMGnotThatGuy",
"id": 91296990,
"node_id": "MDQ6VXNlcjkxMjk2OTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/91296990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OMGnotThatGuy",
"html_url": "https://github.com/OMGnotThatGuy",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 7 | 2024-02-21T19:43:54 | 2024-02-23T22:23:35 | 2024-02-21T20:32:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I didn't have any issues installing the previous packages, but it seems the latest release triggered a Malware alert in Defender on Windows 11.
**Windows:**
OS Name Microsoft Windows 11 Pro
Version 10.0.22631 Build 22631
**Defender:** - It appears Defender updated its signatures afterwards, so I don't know what... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2648/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2648/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4168 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4168/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4168/comments | https://api.github.com/repos/ollama/ollama/issues/4168/events | https://github.com/ollama/ollama/issues/4168 | 2,279,511,789 | I_kwDOJ0Z1Ps6H3pbt | 4,168 | Support for whisper models in Ollama | {
"login": "gkiri",
"id": 25444878,
"node_id": "MDQ6VXNlcjI1NDQ0ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/25444878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gkiri",
"html_url": "https://github.com/gkiri",
"followers_url": "https://api.github.com/users/gkiri/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-05-05T12:38:40 | 2024-05-05T19:06:25 | 2024-05-05T19:06:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4168/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4168/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1133 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1133/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1133/comments | https://api.github.com/repos/ollama/ollama/issues/1133/events | https://github.com/ollama/ollama/pull/1133 | 1,993,792,593 | PR_kwDOJ0Z1Ps5fd4pi | 1,133 | initial commit of the readline editor replacement | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 2 | 2023-11-15T00:10:07 | 2024-08-14T20:02:09 | 2024-08-14T20:02:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1133",
"html_url": "https://github.com/ollama/ollama/pull/1133",
"diff_url": "https://github.com/ollama/ollama/pull/1133.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1133.patch",
"merged_at": null
} | This change is a full replacement for the current `readline` package that we had introduced before. It builds on that version but now properly handles multi-line input.
Some new features:
* word wrap between lines (no more splitting lines in the middle of a word)
* free movement of the cursor (up/down/left/rig... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1133/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7819 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7819/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7819/comments | https://api.github.com/repos/ollama/ollama/issues/7819/events | https://github.com/ollama/ollama/pull/7819 | 2,687,966,890 | PR_kwDOJ0Z1Ps6C8WZU | 7,819 | Bring ollama `fileType`s into alignment with llama.cpp. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-11-24T16:26:16 | 2024-11-24T18:33:33 | 2024-11-24T18:33:33 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7819",
"html_url": "https://github.com/ollama/ollama/pull/7819",
"diff_url": "https://github.com/ollama/ollama/pull/7819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7819.patch",
"merged_at": "2024-11-24T18:33:33"
} | Fixes #7816 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7819/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7787 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7787/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7787/comments | https://api.github.com/repos/ollama/ollama/issues/7787/events | https://github.com/ollama/ollama/issues/7787 | 2,681,697,146 | I_kwDOJ0Z1Ps6f13N6 | 7,787 | How to update ollama desktop on windows? | {
"login": "Septemberlemon",
"id": 84148797,
"node_id": "MDQ6VXNlcjg0MTQ4Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/84148797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Septemberlemon",
"html_url": "https://github.com/Septemberlemon",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-11-22T04:09:36 | 2024-12-06T15:00:06 | 2024-12-06T15:00:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I use Windows and my Ollama version is 0.3.13, and I can't update

this is the log files,I use clash for windows,how can I solve it?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
#... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7787/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7767 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7767/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7767/comments | https://api.github.com/repos/ollama/ollama/issues/7767/events | https://github.com/ollama/ollama/pull/7767 | 2,676,975,851 | PR_kwDOJ0Z1Ps6CkuCF | 7,767 | KV Cache Fixes | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-11-20T19:32:48 | 2024-11-20T20:49:26 | 2024-11-20T20:49:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7767",
"html_url": "https://github.com/ollama/ollama/pull/7767",
"diff_url": "https://github.com/ollama/ollama/pull/7767.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7767.patch",
"merged_at": "2024-11-20T20:49:25"
} | Users have reported a number of errors related to the KV cache such as:
- Error: "could not find a KV slot for the batch - try reducing the size of the batch or increase the context. code: 1"
- Hanging due to infinite loops
- Output that ends unexpectedly
- Slower performance than before when passing inputs that ar... | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7767/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3076 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3076/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3076/comments | https://api.github.com/repos/ollama/ollama/issues/3076/events | https://github.com/ollama/ollama/pull/3076 | 2,181,471,764 | PR_kwDOJ0Z1Ps5pXpFU | 3,076 | Add Japanese translation of documentation | {
"login": "jesseclin",
"id": 34976014,
"node_id": "MDQ6VXNlcjM0OTc2MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/34976014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jesseclin",
"html_url": "https://github.com/jesseclin",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-03-12T12:15:02 | 2024-11-21T08:43:18 | 2024-11-21T08:43:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3076",
"html_url": "https://github.com/ollama/ollama/pull/3076",
"diff_url": "https://github.com/ollama/ollama/pull/3076.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3076.patch",
"merged_at": null
Follow-up of issue #2371:
- README_ja.md: Translation of README.md
- docs/ja/: Translation of docs/...
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3076/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3076/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1525 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1525/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1525/comments | https://api.github.com/repos/ollama/ollama/issues/1525/events | https://github.com/ollama/ollama/issues/1525 | 2,042,143,992 | I_kwDOJ0Z1Ps55uKT4 | 1,525 | Mixtral 8x7B support | {
"login": "Baughn",
"id": 45811,
"node_id": "MDQ6VXNlcjQ1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/45811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baughn",
"html_url": "https://github.com/Baughn",
"followers_url": "https://api.github.com/users/Baughn/followers",
... | [] | closed | false | null | [] | null | 4 | 2023-12-14T17:22:20 | 2023-12-14T22:54:47 | 2023-12-14T22:54:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Following up on #1477 - llama.cpp now supports Mixtral.
I'd reopen the previous issue, but well, I can't. | {
"login": "Baughn",
"id": 45811,
"node_id": "MDQ6VXNlcjQ1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/45811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Baughn",
"html_url": "https://github.com/Baughn",
"followers_url": "https://api.github.com/users/Baughn/followers",
... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1525/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7971 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7971/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7971/comments | https://api.github.com/repos/ollama/ollama/issues/7971/events | https://github.com/ollama/ollama/pull/7971 | 2,723,489,062 | PR_kwDOJ0Z1Ps6EWbkY | 7,971 | ADD: OLLAMA_LLM_DEFAULT | {
"login": "bet0x",
"id": 778862,
"node_id": "MDQ6VXNlcjc3ODg2Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/778862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bet0x",
"html_url": "https://github.com/bet0x",
"followers_url": "https://api.github.com/users/bet0x/followers"... | [] | open | false | null | [] | null | 0 | 2024-12-06T16:42:46 | 2024-12-06T16:42:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7971",
"html_url": "https://github.com/ollama/ollama/pull/7971",
"diff_url": "https://github.com/ollama/ollama/pull/7971.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7971.patch",
"merged_at": null
The addition of OLLAMA_LLM_DEFAULT is a significant improvement over API-based model pulls. While Ollama's API does support model pulling, having a default model environment variable streamlines deployment and reduces operational overhead.
This approach aligns with modern DevOps practices by handling model downloads ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7971/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3541 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3541/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3541/comments | https://api.github.com/repos/ollama/ollama/issues/3541/events | https://github.com/ollama/ollama/pull/3541 | 2,231,887,136 | PR_kwDOJ0Z1Ps5sDCX_ | 3,541 | types/model: init with Name and Digest types | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 1 | 2024-04-08T18:52:24 | 2024-04-10T23:30:19 | 2024-04-10T23:30:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3541",
"html_url": "https://github.com/ollama/ollama/pull/3541",
"diff_url": "https://github.com/ollama/ollama/pull/3541.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3541.patch",
"merged_at": "2024-04-10T23:30:05"
} | null | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3541/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2710 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2710/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2710/comments | https://api.github.com/repos/ollama/ollama/issues/2710/events | https://github.com/ollama/ollama/issues/2710 | 2,151,276,892 | I_kwDOJ0Z1Ps6AOeFc | 2,710 | Quitting taskbar app on Windows doesn't always close `ollama` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-02-23T15:05:13 | 2024-05-02T22:04:48 | 2024-05-02T22:04:48 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2710/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4436 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4436/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4436/comments | https://api.github.com/repos/ollama/ollama/issues/4436/events | https://github.com/ollama/ollama/pull/4436 | 2,296,277,894 | PR_kwDOJ0Z1Ps5vcgVZ | 4,436 | return on part done | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-14T19:41:45 | 2024-05-16T00:16:25 | 2024-05-16T00:16:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4436",
"html_url": "https://github.com/ollama/ollama/pull/4436",
"diff_url": "https://github.com/ollama/ollama/pull/4436.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4436.patch",
"merged_at": "2024-05-16T00:16:25"
} | only copy as much as we're expecting to receive to prevent runaway downloads | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4436/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3707 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3707/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3707/comments | https://api.github.com/repos/ollama/ollama/issues/3707/events | https://github.com/ollama/ollama/issues/3707 | 2,248,988,902 | I_kwDOJ0Z1Ps6GDNjm | 3,707 | what is the difference in this two models i think this is a bug | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/foll... | [] | closed | false | null | [] | null | 2 | 2024-04-17T18:40:46 | 2024-04-17T19:35:47 | 2024-04-17T18:55:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?


One 8x7b is 4.1 gb and another is 26 gb what i... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3707/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5586 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5586/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5586/comments | https://api.github.com/repos/ollama/ollama/issues/5586/events | https://github.com/ollama/ollama/issues/5586 | 2,399,566,579 | I_kwDOJ0Z1Ps6PBnrz | 5,586 | version 0.2.1 error occurs when calling qwen-agent, but work normally in version 0.1.47 | {
"login": "bjfk2006",
"id": 6290119,
"node_id": "MDQ6VXNlcjYyOTAxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6290119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjfk2006",
"html_url": "https://github.com/bjfk2006",
"followers_url": "https://api.github.com/users/bjfk2... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-07-10T02:15:16 | 2024-07-11T08:45:05 | 2024-07-10T02:53:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
gpu:cuda12.5+V100
model:qwen2:7b-instruct-q8_0
ollama: 0.2.1
code: https://github.com/QwenLM/Qwen-Agent
error info:
Jul 10 10:06:15 VM-77-13-ubuntu ollama[481292]: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda/template-instances/../mmq.cuh:2422: ERROR: CUDA kernel mul_ma... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5586/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1913 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1913/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1913/comments | https://api.github.com/repos/ollama/ollama/issues/1913/events | https://github.com/ollama/ollama/issues/1913 | 2,075,351,707 | I_kwDOJ0Z1Ps57s1qb | 1,913 | 0.1.19 no longer uses my nvidia cards | {
"login": "skrew",
"id": 738170,
"node_id": "MDQ6VXNlcjczODE3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skrew",
"html_url": "https://github.com/skrew",
"followers_url": "https://api.github.com/users/skrew/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-01-10T22:50:47 | 2024-01-12T09:10:08 | 2024-01-12T09:10:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | worked on 0.1.18.
Logs from 0.1.19:
```
➜ ~ ollama serve
2024/01/10 22:35:20 images.go:808: total blobs: 5
2024/01/10 22:35:20 images.go:815: total unused blobs removed: 0
2024/01/10 22:35:20 routes.go:930: Listening on 127.0.0.1:11434 (version 0.1.19)
2024/01/10 22:35:21 shim_ext_server.go:142: Dynamic LLM v... | {
"login": "skrew",
"id": 738170,
"node_id": "MDQ6VXNlcjczODE3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/738170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skrew",
"html_url": "https://github.com/skrew",
"followers_url": "https://api.github.com/users/skrew/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1913/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4037 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4037/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4037/comments | https://api.github.com/repos/ollama/ollama/issues/4037/events | https://github.com/ollama/ollama/pull/4037 | 2,270,293,277 | PR_kwDOJ0Z1Ps5uFOjc | 4,037 | Update langchainpy.md | {
"login": "Cephra",
"id": 7629358,
"node_id": "MDQ6VXNlcjc2MjkzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7629358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cephra",
"html_url": "https://github.com/Cephra",
"followers_url": "https://api.github.com/users/Cephra/foll... | [] | closed | false | null | [] | null | 0 | 2024-04-30T01:02:01 | 2024-04-30T09:08:21 | 2024-04-30T03:19:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4037",
"html_url": "https://github.com/ollama/ollama/pull/4037",
"diff_url": "https://github.com/ollama/ollama/pull/4037.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4037.patch",
"merged_at": "2024-04-30T03:19:06"
} | Updated the code a bit since it was showing deprecation messages for me. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4037/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2099 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2099/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2099/comments | https://api.github.com/repos/ollama/ollama/issues/2099/events | https://github.com/ollama/ollama/pull/2099 | 2,091,289,006 | PR_kwDOJ0Z1Ps5klaRF | 2,099 | Switch to local dlopen symbols | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-19T20:00:11 | 2024-01-19T20:22:07 | 2024-01-19T20:22:04 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2099",
"html_url": "https://github.com/ollama/ollama/pull/2099",
"diff_url": "https://github.com/ollama/ollama/pull/2099.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2099.patch",
"merged_at": "2024-01-19T20:22:04"
} | Fixes #2066 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2099/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1477 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1477/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1477/comments | https://api.github.com/repos/ollama/ollama/issues/1477/events | https://github.com/ollama/ollama/issues/1477 | 2,036,841,181 | I_kwDOJ0Z1Ps55Z7rd | 1,477 | Mixtral 8X7B | {
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavi... | [] | closed | false | null | [] | null | 1 | 2023-12-12T02:07:06 | 2023-12-12T02:18:03 | 2023-12-12T02:07:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have read that Mixtral 8X7B requires a PR from llama.cpp (https://github.com/ggerganov/llama.cpp/pull/4406) according to this source: (https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF).
Are there any plans yet to incorporate these changes? Is there a timeline? Mixtral 8X7B looks very impressive (appears to o... | {
"login": "pdavis68",
"id": 2781885,
"node_id": "MDQ6VXNlcjI3ODE4ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2781885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdavis68",
"html_url": "https://github.com/pdavis68",
"followers_url": "https://api.github.com/users/pdavi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1477/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1477/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7059 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7059/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7059/comments | https://api.github.com/repos/ollama/ollama/issues/7059/events | https://github.com/ollama/ollama/issues/7059 | 2,558,670,041 | I_kwDOJ0Z1Ps6YgjTZ | 7,059 | Have Ollama support the commands /exit and /quit. | {
"login": "bulrush15",
"id": 7031486,
"node_id": "MDQ6VXNlcjcwMzE0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bulrush15",
"html_url": "https://github.com/bulrush15",
"followers_url": "https://api.github.com/users/bu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-10-01T09:24:36 | 2024-10-01T22:54:08 | 2024-10-01T22:54:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have Ollama 0.3.12. It's a fun tool! Thank you so much!
Can you have it support commands for `/exit` and `/quit`? They will do the same as `/bye`.
Thanks! | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7059/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7942 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7942/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7942/comments | https://api.github.com/repos/ollama/ollama/issues/7942/events | https://github.com/ollama/ollama/issues/7942 | 2,719,196,047 | I_kwDOJ0Z1Ps6iE6OP | 7,942 | model requires more system memory than is available when useMmap | {
"login": "xgdgsc",
"id": 1189869,
"node_id": "MDQ6VXNlcjExODk4Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xgdgsc",
"html_url": "https://github.com/xgdgsc",
"followers_url": "https://api.github.com/users/xgdgsc/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 3 | 2024-12-05T03:01:23 | 2025-01-14T05:12:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use the Continue VS Code extension to call Ollama with a config like
```
{
"model": "qwen2.5-coder:14b",
"title": "qwen2.5-coder:14b",
"provider": "ollama",
"completionOptions": {
"keepAlive": 9999999,
"useMmap": true
}
},
```
It still... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7942/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7942/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7828 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7828/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7828/comments | https://api.github.com/repos/ollama/ollama/issues/7828/events | https://github.com/ollama/ollama/pull/7828 | 2,690,433,255 | PR_kwDOJ0Z1Ps6DBgDJ | 7,828 | Easily see version without needing to go to command line | {
"login": "tagroup",
"id": 1417944,
"node_id": "MDQ6VXNlcjE0MTc5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1417944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tagroup",
"html_url": "https://github.com/tagroup",
"followers_url": "https://api.github.com/users/tagroup/... | [] | closed | false | null | [] | null | 1 | 2024-11-25T11:33:48 | 2024-11-26T17:58:56 | 2024-11-26T17:58:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7828",
"html_url": "https://github.com/ollama/ollama/pull/7828",
"diff_url": "https://github.com/ollama/ollama/pull/7828.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7828.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7828/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3211/comments | https://api.github.com/repos/ollama/ollama/issues/3211/events | https://github.com/ollama/ollama/issues/3211 | 2,191,135,613 | I_kwDOJ0Z1Ps6CmhN9 | 3,211 | GPU Not detected on kubernetes - works localy | {
"login": "didlawowo",
"id": 12622760,
"node_id": "MDQ6VXNlcjEyNjIyNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/12622760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/didlawowo",
"html_url": "https://github.com/didlawowo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-03-18T03:10:56 | 2024-04-12T22:02:48 | 2024-04-12T22:02:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a Kubernetes cluster with a 4070 Super GPU.
Inside a container on the Kubernetes cluster, Ollama doesn't detect the GPU, but it works if I run Ollama directly on the node that has the GPU.
stream logs failed container "ollama" in pod "ollama-74fbf7d68b-lglf9" is waiting to start: ContainerCreat... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3211/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5320 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5320/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5320/comments | https://api.github.com/repos/ollama/ollama/issues/5320/events | https://github.com/ollama/ollama/pull/5320 | 2,377,620,652 | PR_kwDOJ0Z1Ps5zvpo1 | 5,320 | Update faq.md | {
"login": "Dino-Burger",
"id": 56079246,
"node_id": "MDQ6VXNlcjU2MDc5MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/56079246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dino-Burger",
"html_url": "https://github.com/Dino-Burger",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2024-06-27T09:19:32 | 2024-08-14T17:15:41 | 2024-08-14T17:15:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5320",
"html_url": "https://github.com/ollama/ollama/pull/5320",
"diff_url": "https://github.com/ollama/ollama/pull/5320.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5320.patch",
"merged_at": null
} | Adding instructions on how to set an environment variable when Ollama is _not_ run as a service.
It took me a while to find out how to do this... :-) | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5320/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6791 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6791/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6791/comments | https://api.github.com/repos/ollama/ollama/issues/6791/events | https://github.com/ollama/ollama/issues/6791 | 2,524,410,174 | I_kwDOJ0Z1Ps6Wd3E- | 6,791 | Occasionally getting a 500 response and 'ollama._types.ResponseError: health resp' seemingly out of nowhere | {
"login": "danielj0nes",
"id": 32555231,
"node_id": "MDQ6VXNlcjMyNTU1MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/32555231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielj0nes",
"html_url": "https://github.com/danielj0nes",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-09-13T09:50:07 | 2024-09-13T09:50:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello, I am running a Python server that receives and sends requests to an instance of Ollama (with the Llama 3.1 model).
When lots of requests are sent at once, I occasionally receive a 500 response from the Ollama server which causes the process to crash. The error I get from the Python Oll... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6791/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6791/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3820 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3820/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3820/comments | https://api.github.com/repos/ollama/ollama/issues/3820/events | https://github.com/ollama/ollama/issues/3820 | 2,256,421,835 | I_kwDOJ0Z1Ps6GfkPL | 3,820 | TLS handshake timeout when pulling models | {
"login": "Shzyhao",
"id": 77272241,
"node_id": "MDQ6VXNlcjc3MjcyMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/77272241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shzyhao",
"html_url": "https://github.com/Shzyhao",
"followers_url": "https://api.github.com/users/Shzyha... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-22T12:33:19 | 2024-05-09T21:06:48 | 2024-05-09T21:06:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the console to download a model, the speed is very slow most of the time and a TLS handshake timeout is reported. The computer's normal network speed is 50M/S, but while downloading models the speed is mostly below 300K/S, and this happens with all models. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3820/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2117 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2117/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2117/comments | https://api.github.com/repos/ollama/ollama/issues/2117/events | https://github.com/ollama/ollama/pull/2117 | 2,092,297,145 | PR_kwDOJ0Z1Ps5kozIt | 2,117 | Unlock mutex when failing to load model | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-21T01:06:59 | 2024-01-21T01:54:47 | 2024-01-21T01:54:46 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2117",
"html_url": "https://github.com/ollama/ollama/pull/2117",
"diff_url": "https://github.com/ollama/ollama/pull/2117.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2117.patch",
"merged_at": "2024-01-21T01:54:46"
} | Avoids `ollama serve` hanging with `concurrent llm servers not yet supported, waiting for prior server to complete` when a model fails to load
I believe this also fixes https://github.com/jmorganca/ollama/issues/1641 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2117/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8620 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8620/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8620/comments | https://api.github.com/repos/ollama/ollama/issues/8620/events | https://github.com/ollama/ollama/issues/8620 | 2,814,178,540 | I_kwDOJ0Z1Ps6nvPTs | 8,620 | Add support fo Qwen 2.5 VL models (3B, 7B and 32B) instruct versions | {
"login": "YarvixPA",
"id": 152553832,
"node_id": "U_kgDOCRfJaA",
"avatar_url": "https://avatars.githubusercontent.com/u/152553832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YarvixPA",
"html_url": "https://github.com/YarvixPA",
"followers_url": "https://api.github.com/users/YarvixPA/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 9 | 2025-01-27T22:20:04 | 2025-01-30T11:41:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello; now, hours after their release, I would like to suggest that you add support for the Qwen2.5-VL models.
**[Qwen2.5-VL - Hugging Face collection](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5)** | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8620/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8620/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3447 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3447/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3447/comments | https://api.github.com/repos/ollama/ollama/issues/3447/events | https://github.com/ollama/ollama/pull/3447 | 2,219,599,627 | PR_kwDOJ0Z1Ps5rY3bf | 3,447 | upgrade langchain for python-privategpt example | {
"login": "guanlisheng",
"id": 721973,
"node_id": "MDQ6VXNlcjcyMTk3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/721973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanlisheng",
"html_url": "https://github.com/guanlisheng",
"followers_url": "https://api.github.com/user... | [] | closed | false | null | [] | null | 1 | 2024-04-02T05:35:55 | 2024-11-21T09:27:26 | 2024-11-21T09:27:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3447",
"html_url": "https://github.com/ollama/ollama/pull/3447",
"diff_url": "https://github.com/ollama/ollama/pull/3447.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3447.patch",
"merged_at": null
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3447/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6778 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6778/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6778/comments | https://api.github.com/repos/ollama/ollama/issues/6778/events | https://github.com/ollama/ollama/issues/6778 | 2,523,213,492 | I_kwDOJ0Z1Ps6WZS60 | 6,778 | Would be nice to have a "continue last message" option with the `/api/chat` endpoint | {
"login": "hammer-ai",
"id": 143602265,
"node_id": "U_kgDOCI8yWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/143602265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hammer-ai",
"html_url": "https://github.com/hammer-ai",
"followers_url": "https://api.github.com/users/hammer... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-09-12T19:38:21 | 2024-09-13T05:22:52 | 2024-09-13T05:22:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there, it would be nice to have a "continue last message" option with the `/api/chat` endpoint. Thanks! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6778/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3433 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3433/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3433/comments | https://api.github.com/repos/ollama/ollama/issues/3433/events | https://github.com/ollama/ollama/issues/3433 | 2,217,424,110 | I_kwDOJ0Z1Ps6EKzTu | 3,433 | Add chatglm3-6b-128k model | {
"login": "wantong-lab",
"id": 60781328,
"node_id": "MDQ6VXNlcjYwNzgxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/60781328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wantong-lab",
"html_url": "https://github.com/wantong-lab",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 1 | 2024-04-01T03:12:10 | 2024-04-20T11:57:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What model would you like?
https://huggingface.co/THUDM/chatglm3-6b-128k
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3433/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4473 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4473/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4473/comments | https://api.github.com/repos/ollama/ollama/issues/4473/events | https://github.com/ollama/ollama/issues/4473 | 2,300,175,577 | I_kwDOJ0Z1Ps6JGeTZ | 4,473 | InternVL | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 4 | 2024-05-16T11:43:14 | 2025-01-28T13:32:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/OpenGVLab/InternVL
thanks | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4473/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8148 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8148/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8148/comments | https://api.github.com/repos/ollama/ollama/issues/8148/events | https://github.com/ollama/ollama/pull/8148 | 2,746,392,142 | PR_kwDOJ0Z1Ps6Fkplo | 8,148 | Add support for applying control vectors in gguf format [Rebased on v0.5.7] | {
"login": "itszn",
"id": 1857794,
"node_id": "MDQ6VXNlcjE4NTc3OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1857794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itszn",
"html_url": "https://github.com/itszn",
"followers_url": "https://api.github.com/users/itszn/follower... | [] | open | false | null | [] | null | 2 | 2024-12-18T00:11:08 | 2025-01-17T00:52:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8148",
"html_url": "https://github.com/ollama/ollama/pull/8148",
"diff_url": "https://github.com/ollama/ollama/pull/8148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8148.patch",
"merged_at": null
} | Current Supported Ollama Release Version: v0.5.7
Control Vectors allow for changing the behavior of a model by steering towards or away from a specific behavior.
You can learn more about them from these sources:
https://hlfshell.ai/posts/representation-engineering/
https://vgel.me/posts/representation-engineeri... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8148/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8148/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6725 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6725/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6725/comments | https://api.github.com/repos/ollama/ollama/issues/6725/events | https://github.com/ollama/ollama/issues/6725 | 2,516,138,023 | I_kwDOJ0Z1Ps6V-Tgn | 6,725 | Incorrect AppDir when creating banner script (Preview) | {
"login": "DJStompZone",
"id": 85457381,
"node_id": "MDQ6VXNlcjg1NDU3Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/85457381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DJStompZone",
"html_url": "https://github.com/DJStompZone",
"followers_url": "https://api.github.com/... | [
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-09-10T11:09:58 | 2024-10-30T16:24:32 | 2024-10-30T16:24:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Upon installing Ollama (Preview, 0.3.10.0) on Windows*, I noticed an error in the log file:
```
time=2024-09-10T05:45:41.543-05:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2024-09-10T05:45:41.564-05:00 level=INFO source=store.go:96 msg="wrote store: C:\\Users\\desco\\AppData\\Local\\Ollama\\co... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6725/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4126 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4126/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4126/comments | https://api.github.com/repos/ollama/ollama/issues/4126/events | https://github.com/ollama/ollama/issues/4126 | 2,277,659,229 | I_kwDOJ0Z1Ps6HwlJd | 4,126 | Some Ollama models apparently affected by llama.cpp BPE pretokenization issue | {
"login": "sealad886",
"id": 155285242,
"node_id": "U_kgDOCUF2-g",
"avatar_url": "https://avatars.githubusercontent.com/u/155285242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sealad886",
"html_url": "https://github.com/sealad886",
"followers_url": "https://api.github.com/users/sealad... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2024-05-03T13:13:23 | 2025-01-06T04:49:04 | 2025-01-06T04:49:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
See the following llama.cpp issues/PRs:
* [PR 6920](https://github.com/ggerganov/llama.cpp/pull/6920): llama : improve BPE pre-processing + LLaMA 3 and Deepseek support
* [Issue 7030](https://github.com/ggerganov/llama.cpp/issues/7030): Command-R GGUF conversion no longer working
* [Issue 7... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4126/reactions",
"total_count": 16,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4126/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5510 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5510/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5510/comments | https://api.github.com/repos/ollama/ollama/issues/5510/events | https://github.com/ollama/ollama/pull/5510 | 2,393,248,791 | PR_kwDOJ0Z1Ps50koYp | 5,510 | cmd: display transfer model data progress | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | [] | closed | false | null | [] | null | 1 | 2024-07-05T23:58:52 | 2024-07-31T17:16:37 | 2024-07-31T17:16:37 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5510",
"html_url": "https://github.com/ollama/ollama/pull/5510",
"diff_url": "https://github.com/ollama/ollama/pull/5510.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5510.patch",
"merged_at": null
} | displays `transferring model data 24% ⠇` during data transfer
rebased on top of https://github.com/ollama/ollama/pull/5441
https://github.com/ollama/ollama/issues/5423 | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/jos... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5510/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2432 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2432/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2432/comments | https://api.github.com/repos/ollama/ollama/issues/2432/events | https://github.com/ollama/ollama/pull/2432 | 2,127,739,346 | PR_kwDOJ0Z1Ps5mg3BH | 2,432 | Snap packaging | {
"login": "mz2",
"id": 71363,
"node_id": "MDQ6VXNlcjcxMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mz2",
"html_url": "https://github.com/mz2",
"followers_url": "https://api.github.com/users/mz2/followers",
"following... | [] | closed | false | null | [] | null | 3 | 2024-02-09T19:33:11 | 2024-11-21T08:14:40 | 2024-11-21T08:05:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2432",
"html_url": "https://github.com/ollama/ollama/pull/2432",
"diff_url": "https://github.com/ollama/ollama/pull/2432.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2432.patch",
"merged_at": null
} | Adds strictly confined snap packaging for x86-64 (~~and arm64~~ just x86-64 for starters, looks like this needs overall a bit of love in `ollama`), presently published on the channel `latest/beta`. This is a nice alternative to docker (no need to install and configure the nvidia docker runtime for example, systemd serv... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2432/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7629 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7629/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7629/comments | https://api.github.com/repos/ollama/ollama/issues/7629/events | https://github.com/ollama/ollama/issues/7629 | 2,651,860,893 | I_kwDOJ0Z1Ps6eEC-d | 7,629 | Ollama not Utilizing Maximum available VRAM | {
"login": "ahmedashraf443",
"id": 26746937,
"node_id": "MDQ6VXNlcjI2NzQ2OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26746937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedashraf443",
"html_url": "https://github.com/ahmedashraf443",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw... | closed | false | null | [] | null | 7 | 2024-11-12T11:12:49 | 2024-12-02T15:24:24 | 2024-12-02T15:24:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've been using llama.cpp recently to run large models, some of which exceed my GPU's VRAM capacity. With llama.cpp, when I run models that are too large to fully fit in VRAM, it manages to utilize around 7.2 to 7.5 GB of my 8 GB VRAM, offloading the remainder to system RAM. This approach maximi... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7629/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7629/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/15 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/15/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/15/comments | https://api.github.com/repos/ollama/ollama/issues/15/events | https://github.com/ollama/ollama/pull/15 | 1,779,805,255 | PR_kwDOJ0Z1Ps5UMC1L | 15 | batch model | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-06-28T21:34:34 | 2023-06-29T00:10:43 | 2023-06-29T00:10:39 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/15",
"html_url": "https://github.com/ollama/ollama/pull/15",
"diff_url": "https://github.com/ollama/ollama/pull/15.diff",
"patch_url": "https://github.com/ollama/ollama/pull/15.patch",
"merged_at": "2023-06-29T00:10:39"
} | add a batch model which is distinct in the way the prompts are displayed to the user. this produces a cleaner output without a trailing `>>>` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/15/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/15/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8054 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8054/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8054/comments | https://api.github.com/repos/ollama/ollama/issues/8054/events | https://github.com/ollama/ollama/pull/8054 | 2,734,043,292 | PR_kwDOJ0Z1Ps6E6u8Z | 8,054 | ci: fix linux version | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-12-11T21:14:25 | 2024-12-11T22:10:00 | 2024-12-11T22:09:57 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8054",
"html_url": "https://github.com/ollama/ollama/pull/8054",
"diff_url": "https://github.com/ollama/ollama/pull/8054.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8054.patch",
"merged_at": "2024-12-11T22:09:57"
} | Pass through the version override so the makefiles use it
rc3 linux binaries are reporting a version string of `0.5.2-rc3-0-g581a4a5-dirty` | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8054/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4427 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4427/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4427/comments | https://api.github.com/repos/ollama/ollama/issues/4427/events | https://github.com/ollama/ollama/issues/4427 | 2,294,983,591 | I_kwDOJ0Z1Ps6Iyqun | 4,427 | ollama can't run qwen:72b, error msg ""gpu VRAM usage didn't recover within timeout | {
"login": "changingshow",
"id": 7709440,
"node_id": "MDQ6VXNlcjc3MDk0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7709440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changingshow",
"html_url": "https://github.com/changingshow",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 20 | 2024-05-14T09:50:54 | 2024-11-20T10:36:33 | 2024-05-21T22:30:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have already downloaded qwen:7b, but when I run `ollama run qwen:7b` I got this error: `Error: timed out waiting for llama runner to start:`; the server.log has this msg: `gpu VRAM usage didn't recover within timeout`
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
olla... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4427/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4427/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/276/comments | https://api.github.com/repos/ollama/ollama/issues/276/events | https://github.com/ollama/ollama/pull/276 | 1,836,094,664 | PR_kwDOJ0Z1Ps5XKdQu | 276 | configurable rope frequency parameters | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2023-08-04T05:34:23 | 2023-08-07T20:39:39 | 2023-08-07T20:39:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/276",
"html_url": "https://github.com/ollama/ollama/pull/276",
"diff_url": "https://github.com/ollama/ollama/pull/276.diff",
"patch_url": "https://github.com/ollama/ollama/pull/276.patch",
"merged_at": "2023-08-07T20:39:38"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/276/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4240/comments | https://api.github.com/repos/ollama/ollama/issues/4240/events | https://github.com/ollama/ollama/pull/4240 | 2,284,381,644 | PR_kwDOJ0Z1Ps5u0SU0 | 4,240 | reference license, template, system as files | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-07T23:21:22 | 2025-01-29T19:20:34 | 2025-01-29T19:20:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4240",
"html_url": "https://github.com/ollama/ollama/pull/4240",
"diff_url": "https://github.com/ollama/ollama/pull/4240.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4240.patch",
"merged_at": null
} | this change allows certain layers to take files as values. the final value for the layer is the content of the file
```
FROM llama3
LICENSE ./meta-llama/Meta-Llama-3-8B-Instruct/LICENSE
LICENSE ./meta-llama/Meta-Llama-3-8B-Instruct/USE_POLICY.md
```
any value that does not reference a file will use the origin... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4240/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4240/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7115 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7115/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7115/comments | https://api.github.com/repos/ollama/ollama/issues/7115/events | https://github.com/ollama/ollama/pull/7115 | 2,570,168,454 | PR_kwDOJ0Z1Ps59zZXy | 7,115 | Test | {
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2024-10-07T11:32:38 | 2024-10-08T04:15:25 | 2024-10-07T11:32:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7115",
"html_url": "https://github.com/ollama/ollama/pull/7115",
"diff_url": "https://github.com/ollama/ollama/pull/7115.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7115.patch",
"merged_at": null
} | null | {
"login": "kavita-rane2",
"id": 175689274,
"node_id": "U_kgDOCnjOOg",
"avatar_url": "https://avatars.githubusercontent.com/u/175689274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kavita-rane2",
"html_url": "https://github.com/kavita-rane2",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7115/timeline | null | null | true |
Issue #7267: "Running out of memory when allocating to second GPU"
url: https://api.github.com/repos/ollama/ollama/issues/7267
html_url: https://github.com/ollama/ollama/issues/7267
id: 2598879242 | node_id: I_kwDOJ0Z1Ps6a58AK
user: joshuakoh1 (id 40602863, https://github.com/joshuakoh1)
labels: bug (id 5667396184); one additional label (id 6849881759, remainder truncated)
state: open | locked: false | comments: 5
assignee: null | assignees: [] | milestone: null
created_at: 2024-10-19T07:58:59 | updated_at: 2024-10-22T09:32:55 | closed_at: null
author_association: NONE
sub_issues_summary: total 0, completed 0, percent_completed 0
body:
    ### What is the issue?
    No issues with any model that fits into a single 3090 but seems to run out of memory when trying to distribute to the second 3090.
    INFO [wmain] starting c++ runner | tid="33768" timestamp=1729324300
    INFO [wmain] build info | build=3670 commit="aad7f071" tid="33768" timestamp=1729324300
    ...
closed_by: null
reactions: all 0
timeline_url: https://api.github.com/repos/ollama/ollama/issues/7267/timeline
state_reason: null | is_pull_request: false
Pull request #8507: "Add Nvidia Model"
url: https://api.github.com/repos/ollama/ollama/issues/8507
html_url: https://github.com/ollama/ollama/pull/8507
id: 2800284068 | node_id: PR_kwDOJ0Z1Ps6IaRxH
user: Setland34 (id 105908636, https://github.com/Setland34)
labels: [] | state: open | locked: false | comments: 0
assignee: null | assignees: [] | milestone: null
created_at: 2025-01-20T21:18:44 | updated_at: 2025-01-20T21:18:46 | closed_at: null
author_association: NONE
sub_issues_summary: total 0, completed 0, percent_completed 0
draft: false | pull_request: https://github.com/ollama/ollama/pull/8507 (merged_at: null)
body:
    Fixes #8460
    Add support for Nvidia Model in the repository.
    * **api/types.go**: Add a new struct `NvidiaModel` and a constant `NvidiaModelURL`.
    * **convert/convert_llama.go**: Add a new function `convertNvidiaModel` to handle the Nvidia Model conversion. Add a new struct `NvidiaModel` and a constant `NvidiaModelURL`. ...
closed_by: null
reactions: all 0
timeline_url: https://api.github.com/repos/ollama/ollama/issues/8507/timeline
state_reason: null | is_pull_request: true
Issue #3272: "Error: exception create_tensor: tensor 'output.weight' not found"
url: https://api.github.com/repos/ollama/ollama/issues/3272
html_url: https://github.com/ollama/ollama/issues/3272
id: 2197785908 | node_id: I_kwDOJ0Z1Ps6C_400
user: GhadaJouini (id 32711189, https://github.com/GhadaJouini)
labels: bug (id 5667396184)
state: closed | locked: false | comments: 1
assignee: null | assignees: [] | milestone: null
created_at: 2024-03-20T15:10:35 | updated_at: 2024-10-29T07:31:51 | closed_at: 2024-04-15T19:46:23
author_association: NONE
sub_issues_summary: total 0, completed 0, percent_completed 0
body:
    ### What is the issue?
    I'm trying to add a custom model in ollama but am encountering this error. Is it possible that the model quantization is not correct?
    **Error: exception create_tensor: tensor 'output.weight' not found**
    Quantized model: https://huggingface.co/MaziyarPanahi/zephyr-7b-gemma-v0.1-GGUF
    ...
closed_by: jmorganca (id 251292, https://github.com/jmorganca)
reactions: total_count 1 (+1: 1)
timeline_url: https://api.github.com/repos/ollama/ollama/issues/3272/timeline
state_reason: completed | is_pull_request: false
Issue #6435: "0.3.6 /api/embed return 500 if more items are provided in input"
url: https://api.github.com/repos/ollama/ollama/issues/6435
html_url: https://github.com/ollama/ollama/issues/6435
id: 2474771281 | node_id: I_kwDOJ0Z1Ps6TggNR
user: davidliudev (id 31893484, https://github.com/davidliudev)
labels: bug (id 5667396184)
state: closed | locked: false | comments: 5
assignee: dhiltgen (id 4033016, https://github.com/dhiltgen) | assignees: [dhiltgen] | milestone: null
created_at: 2024-08-20T05:49:18 | updated_at: 2024-08-22T21:51:44 | closed_at: 2024-08-22T21:51:44
author_association: NONE
sub_issues_summary: total 0, completed 0, percent_completed 0
body:
    ### What is the issue?
    This does not happen on 0.3.4. It only happens on the latest 0.3.6 (haven't tested 0.3.5).
    If I reduce the payload to a single item it is all good. It happens when I put in more items.
    I have to downgrade to 0.3.4 until this is fixed.
    Here is the log:
    time=2024-08-20T13:38:29.761+08:0...
closed_by: dhiltgen (id 4033016, https://github.com/dhiltgen)
reactions: all 0
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6435/timeline
state_reason: completed | is_pull_request: false
Pull request #6752: "Update README.md"
url: https://api.github.com/repos/ollama/ollama/issues/6752
html_url: https://github.com/ollama/ollama/pull/6752
id: 2519544747 | node_id: PR_kwDOJ0Z1Ps57JteD
user: rapidarchitect (id 126218667, https://github.com/rapidarchitect)
labels: [] | state: closed | locked: false | comments: 0
assignee: null | assignees: [] | milestone: null
created_at: 2024-09-11T11:56:49 | updated_at: 2024-09-12T01:36:26 | closed_at: 2024-09-12T01:36:26
author_association: CONTRIBUTOR
sub_issues_summary: total 0, completed 0, percent_completed 0
draft: false | pull_request: https://github.com/ollama/ollama/pull/6752 (merged_at: 2024-09-12T01:36:26)
body: Added Ollama Mixture of Experts repository to terminal apps.
closed_by: jmorganca (id 251292, https://github.com/jmorganca)
reactions: all 0
timeline_url: https://api.github.com/repos/ollama/ollama/issues/6752/timeline
state_reason: null | is_pull_request: true