| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | sub_issues_summary (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string, nullable) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/6080 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6080/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6080/comments | https://api.github.com/repos/ollama/ollama/issues/6080/events | https://github.com/ollama/ollama/issues/6080 | 2,438,714,003 | I_kwDOJ0Z1Ps6RW9KT | 6,080 | Incorrect free VRAM reporting when two CUDA cards with different VRAM capacities are installed, preventing Ollama from using GPU inference | {
"login": "XJTU-WXY",
"id": 132470925,
"node_id": "U_kgDOB-VYjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/132470925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XJTU-WXY",
"html_url": "https://github.com/XJTU-WXY",
"followers_url": "https://api.github.com/users/XJTU-WXY/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-07-30T21:29:35 | 2024-11-05T23:21:39 | 2024-11-05T23:21:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Dear ollama developer:
First of all, thank you very much for developing and maintaining ollama. Open source leads the world to a brighter future!
I use the _gemma2:27b_ model. My problem is:
- When my device only has a Tesla P40 (with 24G VRAM) installed, ollama can automatically use GPU i... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6080/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2356 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2356/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2356/comments | https://api.github.com/repos/ollama/ollama/issues/2356/events | https://github.com/ollama/ollama/issues/2356 | 2,117,604,218 | I_kwDOJ0Z1Ps5-OBN6 | 2,356 | Phi modelfile is incorrect | {
"login": "mak448a",
"id": 94062293,
"node_id": "U_kgDOBZtG1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mak448a",
"html_url": "https://github.com/mak448a",
"followers_url": "https://api.github.com/users/mak448a/follow... | [
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 7 | 2024-02-05T03:32:12 | 2024-03-13T01:05:19 | 2024-03-12T18:40:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use phi in ollama and put in the system prompt, it doesn't respond as well as it does in LM Studio.
Is the internal prompt in ollama correct?
LM Studio uses "Instruct:" and "Output:" as markers for the user's message and the assistant's message.
LM Studio: `{"speech": "Hi!", "program": "null"}`
Ollama: ` Welc... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2356/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1535 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1535/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1535/comments | https://api.github.com/repos/ollama/ollama/issues/1535/events | https://github.com/ollama/ollama/pull/1535 | 2,042,775,976 | PR_kwDOJ0Z1Ps5iD1Wu | 1,535 | add API tests for list handler | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-15T02:16:09 | 2023-12-15T02:18:26 | 2023-12-15T02:18:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1535",
"html_url": "https://github.com/ollama/ollama/pull/1535",
"diff_url": "https://github.com/ollama/ollama/pull/1535.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1535.patch",
"merged_at": "2023-12-15T02:18:25"
} | This change adds some tests for the `GET /api/list` endpoint. It includes a test that gets no models, and one that returns a single entry. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1535/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1766 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1766/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1766/comments | https://api.github.com/repos/ollama/ollama/issues/1766/events | https://github.com/ollama/ollama/pull/1766 | 2,064,196,581 | PR_kwDOJ0Z1Ps5jJbV- | 1,766 | Update README.md | {
"login": "cole-gillespie",
"id": 745064,
"node_id": "MDQ6VXNlcjc0NTA2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/745064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cole-gillespie",
"html_url": "https://github.com/cole-gillespie",
"followers_url": "https://api.github... | [] | closed | false | null | [] | null | 1 | 2024-01-03T15:10:12 | 2024-01-03T15:44:22 | 2024-01-03T15:44:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1766",
"html_url": "https://github.com/ollama/ollama/pull/1766",
"diff_url": "https://github.com/ollama/ollama/pull/1766.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1766.patch",
"merged_at": "2024-01-03T15:44:22"
} | fix quickstart spelling | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1766/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4893 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4893/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4893/comments | https://api.github.com/repos/ollama/ollama/issues/4893/events | https://github.com/ollama/ollama/issues/4893 | 2,339,497,859 | I_kwDOJ0Z1Ps6LceeD | 4,893 | Error: error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 | {
"login": "Hsiayukoo",
"id": 81662220,
"node_id": "MDQ6VXNlcjgxNjYyMjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/81662220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hsiayukoo",
"html_url": "https://github.com/Hsiayukoo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-07T03:17:23 | 2024-06-11T03:44:25 | 2024-06-09T17:33:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### 1. Background
I want to use **llama.cpp** to build a llama2-7b model based on my own ckpt file, following these steps:
1. Download [llama2-7b.Q2_K](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q2_K.gguf) from Hugging Face. (This gguf file can be loaded by Ollama.)
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4893/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2065 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2065/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2065/comments | https://api.github.com/repos/ollama/ollama/issues/2065/events | https://github.com/ollama/ollama/issues/2065 | 2,089,536,146 | I_kwDOJ0Z1Ps58i8qS | 2,065 | Any ollama command results in CORE DUMPED (ollama not using GPU) | {
"login": "Rushmore75",
"id": 76796612,
"node_id": "MDQ6VXNlcjc2Nzk2NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/76796612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rushmore75",
"html_url": "https://github.com/Rushmore75",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | null | [] | null | 8 | 2024-01-19T04:08:11 | 2024-03-11T17:59:19 | 2024-03-11T17:59:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Trying to interact with the command at all just returns `Illegal instruction (core dumped)`. The journalctl logs just show
```
Started Ollama Service
ollama.service: Main process exited, code=dumped, status=4/ILL
ollama.service: Failed with result 'core-dump'.
```
System:
Kernel: 5.15.0-91-generic
Distro: Ubuntu... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2065/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/945 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/945/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/945/comments | https://api.github.com/repos/ollama/ollama/issues/945/events | https://github.com/ollama/ollama/issues/945 | 1,966,869,007 | I_kwDOJ0Z1Ps51PAoP | 945 | How does one delete ollama? | {
"login": "improvethings",
"id": 16601027,
"node_id": "MDQ6VXNlcjE2NjAxMDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/16601027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/improvethings",
"html_url": "https://github.com/improvethings",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 3 | 2023-10-29T06:54:36 | 2023-11-20T10:35:26 | 2023-10-30T15:14:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I don't have much disk space in /, so I need to delete ollama and reinstall it in a custom directory. Thanks in advance!
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/945/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/501 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/501/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/501/comments | https://api.github.com/repos/ollama/ollama/issues/501/events | https://github.com/ollama/ollama/issues/501 | 1,888,513,519 | I_kwDOJ0Z1Ps5wkG3v | 501 | large embedded file fails on model create | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2023-09-09T00:43:48 | 2023-10-27T19:22:47 | 2023-10-27T19:22:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Adding a large file to an embedding may cause an unexpected error.
```
ollama create exampleModel -f Modelfile
...
Error: unexpected end to create model
```
```
FROM codellama
SYSTEM """
You are a DND game master that reviews dice rolls and responds with JSON in the following format: "{\"action\":\"do st... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/501/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/501/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/193 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/193/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/193/comments | https://api.github.com/repos/ollama/ollama/issues/193/events | https://github.com/ollama/ollama/issues/193 | 1,818,777,907 | I_kwDOJ0Z1Ps5saFkz | 193 | Ability to download LLAMA2 70b | {
"login": "plannaAlain",
"id": 88775056,
"node_id": "MDQ6VXNlcjg4Nzc1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/88775056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plannaAlain",
"html_url": "https://github.com/plannaAlain",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5789807732,
"node_id": ... | closed | false | null | [] | null | 7 | 2023-07-24T16:44:44 | 2023-08-05T13:03:50 | 2023-08-04T20:04:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/193/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/193/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7919 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7919/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7919/comments | https://api.github.com/repos/ollama/ollama/issues/7919/events | https://github.com/ollama/ollama/issues/7919 | 2,715,230,108 | I_kwDOJ0Z1Ps6h1x-c | 7,919 | Performance decline | {
"login": "axil76",
"id": 1433185,
"node_id": "MDQ6VXNlcjE0MzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1433185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/axil76",
"html_url": "https://github.com/axil76",
"followers_url": "https://api.github.com/users/axil76/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 16 | 2024-12-03T14:45:01 | 2025-01-13T01:32:33 | 2025-01-13T01:32:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am testing vGPU on a vSphere 8 cluster. The drivers work on the Red Hat 8 OS and in Docker. When the VM boots, the Ollama server responds well, but after several minutes the ollama server no longer responds.
Device 0: NVIDIA L40S-24C, compute capability 8.9, VMM: no
time=2024-12-03T1... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7919/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8264 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8264/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8264/comments | https://api.github.com/repos/ollama/ollama/issues/8264/events | https://github.com/ollama/ollama/pull/8264 | 2,762,135,608 | PR_kwDOJ0Z1Ps6GYOeT | 8,264 | example: add python streamlit frontend UI example | {
"login": "Talen-520",
"id": 63370853,
"node_id": "MDQ6VXNlcjYzMzcwODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/63370853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Talen-520",
"html_url": "https://github.com/Talen-520",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2024-12-29T07:07:21 | 2025-01-09T14:41:37 | 2025-01-08T22:58:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8264",
"html_url": "https://github.com/ollama/ollama/pull/8264",
"diff_url": "https://github.com/ollama/ollama/pull/8264.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8264.patch",
"merged_at": null
} | This is a simple frontend user interface built using Streamlit, benefiting Python developers with no frontend experience. The code references an existing example format. | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8264/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1318 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1318/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1318/comments | https://api.github.com/repos/ollama/ollama/issues/1318/events | https://github.com/ollama/ollama/issues/1318 | 2,017,106,154 | I_kwDOJ0Z1Ps54Opjq | 1,318 | How to Open Ollama Service to the Outside World with HTTPS Compatibility? | {
"login": "rehberim360",
"id": 144798027,
"node_id": "U_kgDOCKFxSw",
"avatar_url": "https://avatars.githubusercontent.com/u/144798027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rehberim360",
"html_url": "https://github.com/rehberim360",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2023-11-29T17:42:09 | 2023-12-04T22:15:54 | 2023-12-04T22:15:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
Problem: The Ollama service I've installed on a Google VM doesn't seem to accept incoming requests over HTTPS. I'm aiming to allow external requests to reach the server and enable HTTPS support for the Ollama service. I've taken the following steps:
Server Configuration: I configured a reverse proxy using Ap... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1318/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1318/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7253 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7253/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7253/comments | https://api.github.com/repos/ollama/ollama/issues/7253/events | https://github.com/ollama/ollama/issues/7253 | 2,597,847,962 | I_kwDOJ0Z1Ps6a2AOa | 7,253 | The issue regarding concurrent processing with multiple GPU cards | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-10-18T15:36:44 | 2024-11-01T02:50:52 | 2024-11-01T02:50:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Premise:
There are 4 GPU cards in the Linux server, and OLLAMA_SCHED_SPREAD=1 is set, with the aim of improving the model's inference efficiency through concurrent processing on multiple GPU cards.
### My Scenario:
In the same process, I wish to sequentially call 3 different LLM models ... | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7253/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8355 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8355/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8355/comments | https://api.github.com/repos/ollama/ollama/issues/8355/events | https://github.com/ollama/ollama/issues/8355 | 2,776,703,481 | I_kwDOJ0Z1Ps6lgSH5 | 8,355 | we need Ollama Video-LLaVA | {
"login": "ixn3rd3mxn",
"id": 119990214,
"node_id": "U_kgDOBybnxg",
"avatar_url": "https://avatars.githubusercontent.com/u/119990214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ixn3rd3mxn",
"html_url": "https://github.com/ixn3rd3mxn",
"followers_url": "https://api.github.com/users/ixn... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-01-09T02:34:25 | 2025-01-09T03:39:08 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I want to use the Video-LLaVA model, but it is not available in Ollama. Could someone please add this model to Ollama?
I have tried [anas/video-llava](https://ollama.com/anas/video-llava) & [ManishThota/llava_next_video](https://ollama.com/ManishThota/llava_next_video),
but they do not work; they hit the bug in this issue [iss... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8355/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4303 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4303/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4303/comments | https://api.github.com/repos/ollama/ollama/issues/4303/events | https://github.com/ollama/ollama/pull/4303 | 2,288,553,461 | PR_kwDOJ0Z1Ps5vCVXf | 4,303 | add project description | {
"login": "reid41",
"id": 25558653,
"node_id": "MDQ6VXNlcjI1NTU4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25558653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reid41",
"html_url": "https://github.com/reid41",
"followers_url": "https://api.github.com/users/reid41/fo... | [] | closed | false | null | [] | null | 1 | 2024-05-09T22:23:48 | 2024-11-24T23:55:09 | 2024-11-24T23:55:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4303",
"html_url": "https://github.com/ollama/ollama/pull/4303",
"diff_url": "https://github.com/ollama/ollama/pull/4303.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4303.patch",
"merged_at": "2024-11-24T23:55:09"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4303/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2162 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2162/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2162/comments | https://api.github.com/repos/ollama/ollama/issues/2162/events | https://github.com/ollama/ollama/pull/2162 | 2,096,857,781 | PR_kwDOJ0Z1Ps5k4Muw | 2,162 | Report more information about GPUs in verbose mode | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-01-23T19:43:51 | 2024-01-24T01:45:43 | 2024-01-24T01:45:40 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2162",
"html_url": "https://github.com/ollama/ollama/pull/2162",
"diff_url": "https://github.com/ollama/ollama/pull/2162.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2162.patch",
"merged_at": "2024-01-24T01:45:40"
} | This adds calls to both the CUDA and ROCm management libraries to discover additional attributes about the GPU(s) detected in the system, and wires up runtime verbosity selection. When users hit problems with GPUs, we can ask them to run with `OLLAMA_DEBUG=1 ollama serve` and share the server log.
Example out... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2162/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1798 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1798/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1798/comments | https://api.github.com/repos/ollama/ollama/issues/1798/events | https://github.com/ollama/ollama/issues/1798 | 2,066,610,955 | I_kwDOJ0Z1Ps57LfsL | 1,798 | failed to verify certificate: x509: certificate signed by unknown authority | {
"login": "jooyoungseo",
"id": 19754711,
"node_id": "MDQ6VXNlcjE5NzU0NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/19754711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jooyoungseo",
"html_url": "https://github.com/jooyoungseo",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 8 | 2024-01-05T02:04:27 | 2025-01-07T06:10:01 | 2024-01-08T19:03:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On my HPC system, I have to use apptainer instead of docker to run ollama. During the pull process, I encountered the following certificate issue. I was wondering if this could be addressed from the ollama side.
``` sh
Apptainer> ollama serve &
[1] 2914729
Apptainer> 2024/01/04 15:51:13 images.go:737: total blobs: 0
2... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1798/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7607 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7607/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7607/comments | https://api.github.com/repos/ollama/ollama/issues/7607/events | https://github.com/ollama/ollama/pull/7607 | 2,647,964,758 | PR_kwDOJ0Z1Ps6Bc2eD | 7,607 | feat: add vibe app to readme | {
"login": "thewh1teagle",
"id": 61390950,
"node_id": "MDQ6VXNlcjYxMzkwOTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thewh1teagle",
"html_url": "https://github.com/thewh1teagle",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 0 | 2024-11-11T02:51:10 | 2024-11-20T18:45:10 | 2024-11-20T18:45:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7607",
"html_url": "https://github.com/ollama/ollama/pull/7607",
"diff_url": "https://github.com/ollama/ollama/pull/7607.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7607.patch",
"merged_at": "2024-11-20T18:45:10"
} | Add the [vibe](https://github.com/thewh1teagle/vibe) app, which just added Ollama support | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7607/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5665 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5665/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5665/comments | https://api.github.com/repos/ollama/ollama/issues/5665/events | https://github.com/ollama/ollama/pull/5665 | 2,406,713,540 | PR_kwDOJ0Z1Ps51SCMs | 5,665 | Refactor cmd.go for Improved Readability | {
"login": "hasitpbhatt",
"id": 778585,
"node_id": "MDQ6VXNlcjc3ODU4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/778585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasitpbhatt",
"html_url": "https://github.com/hasitpbhatt",
"followers_url": "https://api.github.com/user... | [] | open | false | null | [] | null | 0 | 2024-07-13T05:26:36 | 2024-07-15T01:05:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5665",
"html_url": "https://github.com/ollama/ollama/pull/5665",
"diff_url": "https://github.com/ollama/ollama/pull/5665.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5665.patch",
"merged_at": null
} | This PR refactors cmd.go to improve readability by eliminating unnecessary nesting, removing redundant count variables, and replacing HasPrefix with TrimPrefix for path manipulation. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5665/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4626 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4626/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4626/comments | https://api.github.com/repos/ollama/ollama/issues/4626/events | https://github.com/ollama/ollama/issues/4626 | 2,316,594,368 | I_kwDOJ0Z1Ps6KFGzA | 4,626 | about model quantization | {
"login": "andyyumiao",
"id": 11346379,
"node_id": "MDQ6VXNlcjExMzQ2Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11346379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyyumiao",
"html_url": "https://github.com/andyyumiao",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 3 | 2024-05-25T02:24:42 | 2024-05-28T20:40:22 | 2024-05-28T20:40:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | What are the quantization parameters used for the llama3 model in Ollama? For example, the llama3 version, quantization parameters, etc.
The llama3 8b version that I quantized using llama.cpp myself is not as good as the llama3 8b version that comes with Ollama, so I want to know the reason. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4626/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7328 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7328/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7328/comments | https://api.github.com/repos/ollama/ollama/issues/7328/events | https://github.com/ollama/ollama/issues/7328 | 2,607,539,111 | I_kwDOJ0Z1Ps6ba-On | 7,328 | Performance degradation with 8B+ models on Windows Radeon | {
"login": "7shi",
"id": 178381,
"node_id": "MDQ6VXNlcjE3ODM4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/178381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7shi",
"html_url": "https://github.com/7shi",
"followers_url": "https://api.github.com/users/7shi/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-10-23T07:14:50 | 2024-10-23T16:44:41 | 2024-10-23T16:44:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When running models of 8B or larger on Windows with a Radeon GPU, performance is slower than in CPU-only mode, despite sufficient VRAM being available.
Environment:
- OS: Windows 11 Home [10.0.22631]
- CPU: AMD Ryzen 5 5600X 6-Core Processor
- GPU: Radeon RX 7600 XT
- VRAM: 16GB
Root Caus... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7328/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6689 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6689/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6689/comments | https://api.github.com/repos/ollama/ollama/issues/6689/events | https://github.com/ollama/ollama/issues/6689 | 2,511,934,082 | I_kwDOJ0Z1Ps6VuRKC | 6,689 | Reflection 70B fix? | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 5 | 2024-09-07T17:24:38 | 2024-09-08T23:30:19 | 2024-09-08T23:30:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There are rumors that the Reflection model does not run properly on ollama. Can anyone confirm this?
comments here:
https://www.reddit.com/r/LocalLLaMA/comments/1fa72an/reflectionllama3170b_available_on_ollama/ | {
"login": "gileneusz",
"id": 34601970,
"node_id": "MDQ6VXNlcjM0NjAxOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gileneusz",
"html_url": "https://github.com/gileneusz",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6689/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5944 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5944/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5944/comments | https://api.github.com/repos/ollama/ollama/issues/5944/events | https://github.com/ollama/ollama/issues/5944 | 2,429,563,960 | I_kwDOJ0Z1Ps6Q0DQ4 | 5,944 | Most difficult error ever: : no suitable llama servers found. | {
"login": "Swephoenix",
"id": 148555635,
"node_id": "U_kgDOCNrHcw",
"avatar_url": "https://avatars.githubusercontent.com/u/148555635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Swephoenix",
"html_url": "https://github.com/Swephoenix",
"followers_url": "https://api.github.com/users/Swe... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-07-25T10:05:37 | 2024-10-24T01:00:10 | 2024-07-26T20:23:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've reinstalled Ollama several times, but it doesn't fix the error I get at startup when I manually run `ollama run llama3:8b` in CMD (or with any other model that is listed and recognized by ollama).

... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5944/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5944/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8282 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8282/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8282/comments | https://api.github.com/repos/ollama/ollama/issues/8282/events | https://github.com/ollama/ollama/issues/8282 | 2,765,170,432 | I_kwDOJ0Z1Ps6k0ScA | 8,282 | DeepSeek VL v2 | {
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2025-01-01T17:09:25 | 2025-01-01T17:09:25 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/collections/deepseek-ai/deepseek-vl2-675c22accc456d3beb4613ab
there are 3 versions: tiny, small and the default | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8282/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3614 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3614/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3614/comments | https://api.github.com/repos/ollama/ollama/issues/3614/events | https://github.com/ollama/ollama/issues/3614 | 2,239,786,761 | I_kwDOJ0Z1Ps6FgG8J | 3,614 | API response content contains leading space before some non-alphabetical chars | {
"login": "Propheticus",
"id": 6628064,
"node_id": "MDQ6VXNlcjY2MjgwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6628064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Propheticus",
"html_url": "https://github.com/Propheticus",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | null | [] | null | 0 | 2024-04-12T10:44:07 | 2024-11-06T17:41:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When calling the /v1/chat/completions endpoint, the response sometimes contains a leading space.
E.g. when asking for a markdown table where the first char is a `|`, or when asking for a quote where the first char is a `_` (to later end with another to make _italic_), the content returned often -but...
"url": "https://api.github.com/repos/ollama/ollama/issues/3614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3614/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4346 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4346/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4346/comments | https://api.github.com/repos/ollama/ollama/issues/4346/events | https://github.com/ollama/ollama/issues/4346 | 2,290,728,121 | I_kwDOJ0Z1Ps6Iiby5 | 4,346 | Ollama does not list installed models | {
"login": "javiergcim",
"id": 52302482,
"node_id": "MDQ6VXNlcjUyMzAyNDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/52302482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javiergcim",
"html_url": "https://github.com/javiergcim",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 8 | 2024-05-11T06:58:29 | 2024-05-13T16:48:07 | 2024-05-13T16:48:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The command "ollama list" does not list the installed models on the system (at least those created from a local GGUF file), which prevents other utilities (for example, WebUI) from discovering them.
However, the models are there and can be invoked by specifying their name explicitly. For exam... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4346/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2541 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2541/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2541/comments | https://api.github.com/repos/ollama/ollama/issues/2541/events | https://github.com/ollama/ollama/pull/2541 | 2,138,805,322 | PR_kwDOJ0Z1Ps5nGQp4 | 2,541 | fix: use requested model template | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 1 | 2024-02-16T15:05:05 | 2024-02-16T19:02:13 | 2024-02-16T19:02:13 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2541",
"html_url": "https://github.com/ollama/ollama/pull/2541",
"diff_url": "https://github.com/ollama/ollama/pull/2541.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2541.patch",
"merged_at": null
} | As reported in scenario 1 of #2492
When a request was made to a model that inherits from the currently loaded model, the system and template were not updated in the `/chat` endpoint. The fix is to use the requested model rather than the loaded one.
Steps to reproduce:
1. Create a model that overrides the system ... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2541/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2541/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8515 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8515/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8515/comments | https://api.github.com/repos/ollama/ollama/issues/8515/events | https://github.com/ollama/ollama/pull/8515 | 2,801,364,312 | PR_kwDOJ0Z1Ps6Id8RD | 8,515 | Remove tfs_z from documentation. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2025-01-21T10:21:42 | 2025-01-21T17:36:01 | 2025-01-21T17:29:00 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8515",
"html_url": "https://github.com/ollama/ollama/pull/8515",
"diff_url": "https://github.com/ollama/ollama/pull/8515.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8515.patch",
"merged_at": "2025-01-21T17:29:00"
} | tfs_z was removed from llama.cpp in https://github.com/ggerganov/llama.cpp/pull/10071, and the vendor sync in https://github.com/ollama/ollama/pull/7875 propagated the removal into ollama (a sketch of the retired syntax follows this row).
Fixes: https://github.com/ollama/ollama/issues/8514 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8515/timeline | null | null | true |
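For context on the parameter being de-documented above, a sketch of the retired syntax; `tfs_z` was a documented Modelfile parameter before the upstream removal and now has no effect:
```
# pre-removal Modelfile syntax for tail-free sampling (no longer honored)
FROM llama3
PARAMETER tfs_z 1.0
```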
https://api.github.com/repos/ollama/ollama/issues/3644 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3644/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3644/comments | https://api.github.com/repos/ollama/ollama/issues/3644/events | https://github.com/ollama/ollama/issues/3644 | 2,243,012,765 | I_kwDOJ0Z1Ps6Fsaid | 3,644 | Is the model's PROMPT maximum number of tokens determined by the inference tool? | {
"login": "17Reset",
"id": 122418720,
"node_id": "U_kgDOB0v2IA",
"avatar_url": "https://avatars.githubusercontent.com/u/122418720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/17Reset",
"html_url": "https://github.com/17Reset",
"followers_url": "https://api.github.com/users/17Reset/foll... | [] | closed | false | null | [] | null | 3 | 2024-04-15T08:16:07 | 2024-04-29T08:57:57 | 2024-04-15T19:25:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use ollama to run inference with my Smaug-72B model, there is no output when the input prompt has 150 tokens, but the output is normal when it is scaled down to about 100 (a sketch of the relevant context option follows this row). | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3644/timeline | null | completed | false |
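Regarding the question above: the usable prompt length is bounded by the model's trained context window and by the runtime's `num_ctx` option, not by the inference tool as such. A minimal sketch of raising it per request, assuming a default local server:
```
# num_ctx is a documented option; it sets the context window for this request
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "a prompt longer than 150 tokens ...",
  "options": { "num_ctx": 4096 }
}'
```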
https://api.github.com/repos/ollama/ollama/issues/1943 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1943/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1943/comments | https://api.github.com/repos/ollama/ollama/issues/1943/events | https://github.com/ollama/ollama/issues/1943 | 2,078,067,881 | I_kwDOJ0Z1Ps573Myp | 1,943 | [Feature] Add the ability to run a command or start a shell from the interactive mode | {
"login": "jimscard",
"id": 26580570,
"node_id": "MDQ6VXNlcjI2NTgwNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/26580570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimscard",
"html_url": "https://github.com/jimscard",
"followers_url": "https://api.github.com/users/jim... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-01-12T04:54:06 | 2024-03-11T19:19:25 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Many times, I'll go into the CLI client interactive mode, e.g., `ollama run [model]` to get help on doing something. Then, I have to start up another terminal window in order to actually do it.
To make this more user-friendly, two keyboard shortcuts should be added to the ollama run interactive mode -- `!` and `shell... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1943/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1943/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3897/comments | https://api.github.com/repos/ollama/ollama/issues/3897/events | https://github.com/ollama/ollama/pull/3897 | 2,262,417,048 | PR_kwDOJ0Z1Ps5tqbfk | 3,897 | add information about compiling with intel mkl | {
"login": "kannon92",
"id": 3780425,
"node_id": "MDQ6VXNlcjM3ODA0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kannon92",
"html_url": "https://github.com/kannon92",
"followers_url": "https://api.github.com/users/kanno... | [] | closed | false | null | [] | null | 2 | 2024-04-25T00:58:01 | 2024-06-04T13:04:14 | 2024-05-06T21:48:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3897",
"html_url": "https://github.com/ollama/ollama/pull/3897",
"diff_url": "https://github.com/ollama/ollama/pull/3897.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3897.patch",
"merged_at": null
llama.cpp has some information about how to compile with non-GPU options.
I added a section on BLAS options for non-GPU hosts. I use Intel MKL and compile ollama (and llama.cpp) with this library (a build sketch follows this row).
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3897/timeline | null | null | true |
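A hedged sketch of the kind of build the PR above documents, following llama.cpp's 2024-era BLAS instructions; the flag names have since been renamed upstream, so treat them as illustrative rather than current:
```
# load the Intel oneAPI environment, then build llama.cpp against MKL
source /opt/intel/oneapi/setvars.sh
cmake -B build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp
cmake --build build --config Release
```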
https://api.github.com/repos/ollama/ollama/issues/6212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6212/comments | https://api.github.com/repos/ollama/ollama/issues/6212/events | https://github.com/ollama/ollama/issues/6212 | 2,451,872,643 | I_kwDOJ0Z1Ps6SJJuD | 6,212 | show --modelfile (still) doesn't properly quote MESSAGE statements | {
"login": "Maltz42",
"id": 20978744,
"node_id": "MDQ6VXNlcjIwOTc4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maltz42",
"html_url": "https://github.com/Maltz42",
"followers_url": "https://api.github.com/users/Maltz4... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-08-07T00:03:29 | 2024-08-07T05:13:06 | 2024-08-07T05:13:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The patch added in v0.3.3 for issue #6103 didn't work, and actually made the situation harder to mitigate with find/replace. I request that the patch be rolled back, the issue be re-opened, and the quoting of MESSAGE strings be revisited and tested more thoroughly (a small example follows this row). Thanks!
(Or let me know if I should... | {
"login": "Maltz42",
"id": 20978744,
"node_id": "MDQ6VXNlcjIwOTc4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maltz42",
"html_url": "https://github.com/Maltz42",
"followers_url": "https://api.github.com/users/Maltz4... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6212/timeline | null | completed | false |
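For reference on the report above, a minimal Modelfile with hypothetical content showing the construct at issue; MESSAGE lines seed the conversation history, and content with embedded quotes is the case that round-trips badly through `ollama show --modelfile`:
```
FROM llama3
MESSAGE user Can you handle "nested quotes" safely?
MESSAGE assistant I can try.
```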
https://api.github.com/repos/ollama/ollama/issues/7046 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7046/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7046/comments | https://api.github.com/repos/ollama/ollama/issues/7046/events | https://github.com/ollama/ollama/issues/7046 | 2,556,762,944 | I_kwDOJ0Z1Ps6YZRtA | 7,046 | Loading Llama model to a Google Cloud Run Ollama Container through a Dockerfile | {
"login": "waynemorphic",
"id": 37283450,
"node_id": "MDQ6VXNlcjM3MjgzNDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/37283450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waynemorphic",
"html_url": "https://github.com/waynemorphic",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677677816,
"node_id": ... | closed | false | null | [] | null | 2 | 2024-09-30T13:54:12 | 2024-09-30T19:00:11 | 2024-09-30T19:00:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been trying to Dockerize Ollama and then load the Llama3.1 model into the Google Cloud Run deployment. While Ollama runs as expected in Cloud Run, the model is not loaded, since hitting `v1/models` returns a null result (a Dockerfile sketch follows this row). I have a hacky solution with Compute Engine... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7046/timeline | null | completed | false |
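A common community workaround for the row above is to bake the weights into the image at build time so Cloud Run never pulls at startup. A hedged Dockerfile sketch, not an official recipe; `OLLAMA_MODELS` is a documented variable, and redirecting it keeps the pulled blobs inside an image layer:
```
FROM ollama/ollama
ENV OLLAMA_MODELS=/models
# start the server just long enough to pull the model into the image layer
RUN ollama serve & sleep 5 && ollama pull llama3.1
```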
https://api.github.com/repos/ollama/ollama/issues/7589 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7589/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7589/comments | https://api.github.com/repos/ollama/ollama/issues/7589/events | https://github.com/ollama/ollama/issues/7589 | 2,646,531,312 | I_kwDOJ0Z1Ps6dvtzw | 7,589 | Adding option to default `/clear` after each query | {
"login": "soulrrrrr",
"id": 49684138,
"node_id": "MDQ6VXNlcjQ5Njg0MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/49684138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soulrrrrr",
"html_url": "https://github.com/soulrrrrr",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-11-09T20:44:31 | 2024-11-13T19:59:40 | 2024-11-13T19:59:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As there is a `/clear` command to clear the session context, it would be great to have an option that automatically runs `/clear` after every query. I am using the LLM as a translator, so this feature would be helpful: as the prompt grows longer and longer, the inference time increases (a one-shot workaround follows this row). | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7589/timeline | null | completed | false |
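Until such an option exists, one workaround in the spirit of the request above is to skip the interactive session entirely; each one-shot invocation starts from an empty context, which behaves like an automatic `/clear`:
```
# one-shot mode: no conversation history accumulates between calls
ollama run llama3 "Translate to French: good morning"
ollama run llama3 "Translate to French: see you tomorrow"
```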
https://api.github.com/repos/ollama/ollama/issues/3892 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3892/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3892/comments | https://api.github.com/repos/ollama/ollama/issues/3892/events | https://github.com/ollama/ollama/pull/3892 | 2,262,249,719 | PR_kwDOJ0Z1Ps5tp2si | 3,892 | refactor modelfile parser | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-24T21:52:23 | 2024-05-03T00:04:48 | 2024-05-03T00:04:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3892",
"html_url": "https://github.com/ollama/ollama/pull/3892",
"diff_url": "https://github.com/ollama/ollama/pull/3892.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3892.patch",
"merged_at": "2024-05-03T00:04:47"
} | split from #3833
resolves #3977 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3892/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3796 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3796/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3796/comments | https://api.github.com/repos/ollama/ollama/issues/3796/events | https://github.com/ollama/ollama/pull/3796 | 2,255,060,488 | PR_kwDOJ0Z1Ps5tRf2m | 3,796 | feat: enable OLLAMA Arc GPU support with SYCL backend | {
"login": "gamunu",
"id": 4501687,
"node_id": "MDQ6VXNlcjQ1MDE2ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4501687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gamunu",
"html_url": "https://github.com/gamunu",
"followers_url": "https://api.github.com/users/gamunu/foll... | [] | closed | false | null | [] | null | 21 | 2024-04-21T12:55:04 | 2024-06-09T17:59:57 | 2024-06-09T17:59:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3796",
"html_url": "https://github.com/ollama/ollama/pull/3796",
"diff_url": "https://github.com/ollama/ollama/pull/3796.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3796.patch",
"merged_at": null
} | This is based on the original PR created by @felipeagc:main https://github.com/ollama/ollama/pull/2458.
It seems that work on that pull request has come to a halt. I would like to work on this in the next few days and accelerate progress. I have tested the build on Ubuntu LTS with an Arc A770 GPU.
I'm... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3796/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3796/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/42 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/42/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/42/comments | https://api.github.com/repos/ollama/ollama/issues/42/events | https://github.com/ollama/ollama/pull/42 | 1,792,018,838 | PR_kwDOJ0Z1Ps5U1d6p | 42 | free llama model | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2023-07-06T18:15:23 | 2023-07-06T18:16:25 | 2023-07-06T18:16:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/42",
"html_url": "https://github.com/ollama/ollama/pull/42",
"diff_url": "https://github.com/ollama/ollama/pull/42.diff",
"patch_url": "https://github.com/ollama/ollama/pull/42.patch",
"merged_at": "2023-07-06T18:16:22"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/42/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/42/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1413 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1413/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1413/comments | https://api.github.com/repos/ollama/ollama/issues/1413/events | https://github.com/ollama/ollama/issues/1413 | 2,029,887,591 | I_kwDOJ0Z1Ps54_aBn | 1,413 | OOM Error on Bad CUDA Driver | {
"login": "farhanhubble",
"id": 761785,
"node_id": "MDQ6VXNlcjc2MTc4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/761785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farhanhubble",
"html_url": "https://github.com/farhanhubble",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-12-07T04:44:03 | 2024-01-08T21:42:03 | 2024-01-08T21:42:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Ollama version**: 0.1.1
**Reproduction**:
- `nvidia-smi`
```
Failed to initialize NVML: Driver/library version mismatch
NVML library version: 535.129
```
- Run server
```
IP='0.0.0.0'
PORT='11434'
EXE='bin/ollama'
ARGS='serve'
ENV="OLLAMA_HOST=$IP:$PORT'"
CMD="$ENV $EXE $ARGS"
e... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1413/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5243 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5243/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5243/comments | https://api.github.com/repos/ollama/ollama/issues/5243/events | https://github.com/ollama/ollama/pull/5243 | 2,368,869,000 | PR_kwDOJ0Z1Ps5zS1e1 | 5,243 | Fix use_mmap for modelfiles | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-06-23T20:02:23 | 2024-07-03T20:59:46 | 2024-07-03T20:59:42 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5243",
"html_url": "https://github.com/ollama/ollama/pull/5243",
"diff_url": "https://github.com/ollama/ollama/pull/5243.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5243.patch",
"merged_at": "2024-07-03T20:59:42"
PR #5205 was incomplete and missed handling numeric JSON values. This switches to a pointer type so that an undefined value can be represented as nil (a per-request sketch follows this row).
Fixes #5198
```
% cat use_mmap.modelfile
FROM library/llama2
PARAMETER use_mmap false
% ollama create test -f ./use_mmap.modelfile
transferring model data
using existing layer sha... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5243/timeline | null | null | true |
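The same option the PR above fixes for Modelfiles can also be passed per request; a minimal sketch against a default local server, assuming the `test` model from the PR description exists:
```
curl http://localhost:11434/api/generate -d '{
  "model": "test",
  "prompt": "hello",
  "options": { "use_mmap": false }
}'
```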
https://api.github.com/repos/ollama/ollama/issues/7036 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7036/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7036/comments | https://api.github.com/repos/ollama/ollama/issues/7036/events | https://github.com/ollama/ollama/issues/7036 | 2,554,992,877 | I_kwDOJ0Z1Ps6YShjt | 7,036 | Error creating the manifest | {
"login": "seblessa",
"id": 93839108,
"node_id": "U_kgDOBZffBA",
"avatar_url": "https://avatars.githubusercontent.com/u/93839108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seblessa",
"html_url": "https://github.com/seblessa",
"followers_url": "https://api.github.com/users/seblessa/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 1 | 2024-09-29T15:36:19 | 2024-10-04T17:10:30 | 2024-10-04T17:10:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello! I'm trying to create a custom model in ollama from a gguf file.
I'm using the Modelfile from the example in the README:
```
~/example$ cat Modelfile
FROM ./llama.gguf
```
When using the create command, the output seems fine and the model is created.
````
~/example$ ... | {
"login": "seblessa",
"id": 93839108,
"node_id": "U_kgDOBZffBA",
"avatar_url": "https://avatars.githubusercontent.com/u/93839108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seblessa",
"html_url": "https://github.com/seblessa",
"followers_url": "https://api.github.com/users/seblessa/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7036/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4043 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4043/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4043/comments | https://api.github.com/repos/ollama/ollama/issues/4043/events | https://github.com/ollama/ollama/issues/4043 | 2,270,894,070 | I_kwDOJ0Z1Ps6HWxf2 | 4,043 | having error while running llama2 on ollama | {
"login": "prateemnaskar",
"id": 168468278,
"node_id": "U_kgDOCgqfNg",
"avatar_url": "https://avatars.githubusercontent.com/u/168468278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prateemnaskar",
"html_url": "https://github.com/prateemnaskar",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-04-30T08:53:02 | 2024-05-21T17:41:25 | 2024-05-21T17:41:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | After running the command "ollama run llama2" in command prompt (m using windows), it says :-
Error: llama runner process no longer running: 3221225785
how to resolve this issues? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4043/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2933/comments | https://api.github.com/repos/ollama/ollama/issues/2933/events | https://github.com/ollama/ollama/pull/2933 | 2,168,968,501 | PR_kwDOJ0Z1Ps5otBzn | 2,933 | Update main.py print summary only | {
"login": "jliu015",
"id": 149941742,
"node_id": "U_kgDOCO_t7g",
"avatar_url": "https://avatars.githubusercontent.com/u/149941742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jliu015",
"html_url": "https://github.com/jliu015",
"followers_url": "https://api.github.com/users/jliu015/foll... | [] | closed | false | null | [] | null | 1 | 2024-03-05T11:32:54 | 2024-11-21T09:26:51 | 2024-11-21T09:26:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2933",
"html_url": "https://github.com/ollama/ollama/pull/2933",
"diff_url": "https://github.com/ollama/ollama/pull/2933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2933.patch",
"merged_at": null
The original program printed both the input document and its summary. The input document is very long, so the summary hides at the end; it really cost me some time to pick the summary out by eye (a sketch of the change follows this row).
>>> type(result)
<class 'dict'>
>>> result.keys()
dict_keys(['input_documents', 'output_text'])
BTW, the depre... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2933/timeline | null | null | true |
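A sketch of the change the PR above proposes, inferred from the dict keys shown in its description; the chain and variable names are hypothetical:
```
# result carries both the input documents and the summary; print only the latter
result = chain.invoke(docs)    # {'input_documents': [...], 'output_text': '...'}
print(result["output_text"])   # the summary, without echoing the long input
```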
https://api.github.com/repos/ollama/ollama/issues/8685 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8685/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8685/comments | https://api.github.com/repos/ollama/ollama/issues/8685/events | https://github.com/ollama/ollama/issues/8685 | 2,819,830,645 | I_kwDOJ0Z1Ps6oEzN1 | 8,685 | Request to change the file location and model path, and also gui | {
"login": "Bostoneary",
"id": 96782219,
"node_id": "U_kgDOBcTHiw",
"avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bostoneary",
"html_url": "https://github.com/Bostoneary",
"followers_url": "https://api.github.com/users/Bosto... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2025-01-30T03:38:48 | 2025-01-30T03:56:41 | 2025-01-30T03:56:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This software is automatically installed to the default path on my C: drive, and all models are downloaded to a specific path on C: as well. However, there is limited space on my C: drive; can we change the software install location and the model download path (a sketch follows this row)? Also, is it possible to have a GUI for this software? Hope this can be better one... | {
"login": "Bostoneary",
"id": 96782219,
"node_id": "U_kgDOBcTHiw",
"avatar_url": "https://avatars.githubusercontent.com/u/96782219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bostoneary",
"html_url": "https://github.com/Bostoneary",
"followers_url": "https://api.github.com/users/Bosto... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8685/timeline | null | completed | false |
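The model-path half of the request above already has a documented knob, the `OLLAMA_MODELS` environment variable. A sketch for Windows, assuming a D: drive with free space; restart Ollama afterwards:
```
REM move the model store off the C: drive
setx OLLAMA_MODELS "D:\ollama\models"
```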
https://api.github.com/repos/ollama/ollama/issues/6082 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6082/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6082/comments | https://api.github.com/repos/ollama/ollama/issues/6082/events | https://github.com/ollama/ollama/issues/6082 | 2,438,927,742 | I_kwDOJ0Z1Ps6RXxV- | 6,082 | why wsarecv: An existing connection was forcibly closed by the remote host ollama windows preview | {
"login": "springsuu",
"id": 170060937,
"node_id": "U_kgDOCiLsiQ",
"avatar_url": "https://avatars.githubusercontent.com/u/170060937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/springsuu",
"html_url": "https://github.com/springsuu",
"followers_url": "https://api.github.com/users/spring... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-31T01:30:10 | 2024-08-01T22:22:28 | 2024-08-01T22:22:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3/manifests/latest": read tcp 192.168.2.23:51514->172.67.182.229:443: wsarecv: An existing connection was forcibly closed by the remote host
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Oll... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6082/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6233 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6233/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6233/comments | https://api.github.com/repos/ollama/ollama/issues/6233/events | https://github.com/ollama/ollama/issues/6233 | 2,453,537,316 | I_kwDOJ0Z1Ps6SPgIk | 6,233 | Strange! Each request consumes an additional 2 seconds when I used /api/embed | {
"login": "AlbertXu233",
"id": 49802174,
"node_id": "MDQ6VXNlcjQ5ODAyMTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/49802174?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlbertXu233",
"html_url": "https://github.com/AlbertXu233",
"followers_url": "https://api.github.com/... | [
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
... | closed | false | null | [] | null | 6 | 2024-08-07T13:48:11 | 2024-09-05T18:51:42 | 2024-09-05T18:51:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="809" alt="image" src="https://github.com/user-attachments/assets/d649cc9e-7b95-4146-aeef-4eb564be7047">
OS: win11
device: intel i5 1335U with only iGPU
embedding model: shaw/dmeta-embedding-zh-q4
I tested with a very simple input: ["你好"] (a keep_alive sketch follows this row)
### OS
Windows
### GPU
Intel
### CPU
I... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6233/timeline | null | completed | false |
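If the extra two seconds reported above come from the model being unloaded between calls, pinning it in memory is a quick test. A sketch using the same endpoint and model; `keep_alive: -1` keeps the model resident between requests:
```
curl http://localhost:11434/api/embed -d '{
  "model": "shaw/dmeta-embedding-zh-q4",
  "input": ["你好"],
  "keep_alive": -1
}'
```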
https://api.github.com/repos/ollama/ollama/issues/7188 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7188/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7188/comments | https://api.github.com/repos/ollama/ollama/issues/7188/events | https://github.com/ollama/ollama/issues/7188 | 2,583,662,665 | I_kwDOJ0Z1Ps6Z_5BJ | 7,188 | Bad juju creating a model (the llama.cpp generated file starts with "GGUF") | {
"login": "robbiemu",
"id": 248927,
"node_id": "MDQ6VXNlcjI0ODkyNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/248927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robbiemu",
"html_url": "https://github.com/robbiemu",
"followers_url": "https://api.github.com/users/robbiem... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2024-10-13T05:26:22 | 2024-10-15T17:57:03 | 2024-10-15T00:06:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have the [2b base model of Salamandra](https://huggingface.co/robbiemu/salamandra-2b) quantized at several precisions, but I am getting an error when creating it (a quick magic-byte check follows this row).
```
ollama create salamandra:2b_bf16 -f ./Modelfile
transferring model data 100%
Error: invalid file magic
```
I hesitate to ev... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7188/timeline | null | completed | false |
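A quick sanity check related to the error above: a valid GGUF file begins with the literal 4-byte magic `GGUF`, so inspecting the first bytes shows whether the quantization step produced a usable file (the filename is hypothetical):
```
head -c 4 salamandra-2b.gguf    # expected output: GGUF
```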
https://api.github.com/repos/ollama/ollama/issues/842 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/842/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/842/comments | https://api.github.com/repos/ollama/ollama/issues/842/events | https://github.com/ollama/ollama/pull/842 | 1,950,472,851 | PR_kwDOJ0Z1Ps5dLiTN | 842 | #790 improve readme | {
"login": "jerzydziewierz",
"id": 1606347,
"node_id": "MDQ6VXNlcjE2MDYzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1606347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerzydziewierz",
"html_url": "https://github.com/jerzydziewierz",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 3 | 2023-10-18T19:18:44 | 2023-11-29T21:30:02 | 2023-11-29T21:30:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/842",
"html_url": "https://github.com/ollama/ollama/pull/842",
"diff_url": "https://github.com/ollama/ollama/pull/842.diff",
"patch_url": "https://github.com/ollama/ollama/pull/842.patch",
"merged_at": null
} | As promised, an updated README that explains how to force lower memory usage.
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/842/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3499 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3499/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3499/comments | https://api.github.com/repos/ollama/ollama/issues/3499/events | https://github.com/ollama/ollama/issues/3499 | 2,227,241,505 | I_kwDOJ0Z1Ps6EwQIh | 3,499 | OLLAMA_INITIAL_MODEL for use with OLLAMA_KEEP_ALLIVE=-1 | {
"login": "BananaAcid",
"id": 1894723,
"node_id": "MDQ6VXNlcjE4OTQ3MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1894723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BananaAcid",
"html_url": "https://github.com/BananaAcid",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 4 | 2024-04-05T07:11:36 | 2024-05-15T00:34:47 | 2024-05-15T00:34:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
It would be nice to be able to load a model at startup via an env var like OLLAMA_INITIAL_MODEL, in conjunction with the keep_alive=-1 option, so that Ollama starts up ready to go on slow systems (such as a mining rig with RTX cards attached via USB2 risers); a preload sketch follows this row.
### How should we solve this?
_No... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3499/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3499/timeline | null | completed | false |
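The behaviour requested above can already be approximated at startup: a generate request with no prompt simply loads the model, and `keep_alive: -1` pins it. A minimal sketch for a boot script, assuming the default port:
```
# preload a model after the server starts and keep it resident indefinitely
curl http://localhost:11434/api/generate -d '{"model": "llama2", "keep_alive": -1}'
```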
https://api.github.com/repos/ollama/ollama/issues/4999 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4999/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4999/comments | https://api.github.com/repos/ollama/ollama/issues/4999/events | https://github.com/ollama/ollama/issues/4999 | 2,348,430,675 | I_kwDOJ0Z1Ps6L-jVT | 4,999 | Error: Head "http://127.0.0.1:11434/": EOF | {
"login": "HyperUpscale",
"id": 126105457,
"node_id": "U_kgDOB4Q3cQ",
"avatar_url": "https://avatars.githubusercontent.com/u/126105457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyperUpscale",
"html_url": "https://github.com/HyperUpscale",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-12T10:40:36 | 2024-06-12T12:19:26 | 2024-06-12T12:19:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
C:\Users\win>ollama list
Error: Head "http://127.0.0.1:11434/": EOF
C:\Users\win>ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.43
C:\Users\win>ollama serve
Error: listen tcp 127.0.0.1:11434: bind: An attempt was made to access a socket... | {
"login": "HyperUpscale",
"id": 126105457,
"node_id": "U_kgDOB4Q3cQ",
"avatar_url": "https://avatars.githubusercontent.com/u/126105457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyperUpscale",
"html_url": "https://github.com/HyperUpscale",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4999/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3246 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3246/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3246/comments | https://api.github.com/repos/ollama/ollama/issues/3246/events | https://github.com/ollama/ollama/issues/3246 | 2,194,941,966 | I_kwDOJ0Z1Ps6C1CgO | 3,246 | Error: invalid file magic when importing Safetensors models | {
"login": "amnweb",
"id": 16545063,
"node_id": "MDQ6VXNlcjE2NTQ1MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/16545063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amnweb",
"html_url": "https://github.com/amnweb",
"followers_url": "https://api.github.com/users/amnweb/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 11 | 2024-03-19T13:13:12 | 2024-06-14T07:20:55 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
> ollama create test -f Modelfile
transferring model data
creating model layer
Error: invalid file magic
This happens for all the **Safetensors** models I try to import.
Modelfile content
`FROM ./model.safetensors`
 ollama to, and I kept getting empty responses whenever I tried to run something. upon further investigation of the systemd service, it's exiting with status 2.
Here's the last few hundre... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1996/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2225 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2225/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2225/comments | https://api.github.com/repos/ollama/ollama/issues/2225/events | https://github.com/ollama/ollama/issues/2225 | 2,103,264,294 | I_kwDOJ0Z1Ps59XUQm | 2,225 | Ollama stops generating output and fails to run models after a few minutes | {
"login": "TheStarAlight",
"id": 105955974,
"node_id": "U_kgDOBlDChg",
"avatar_url": "https://avatars.githubusercontent.com/u/105955974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheStarAlight",
"html_url": "https://github.com/TheStarAlight",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 41 | 2024-01-27T06:22:10 | 2024-06-03T23:44:10 | 2024-04-15T19:09:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I'm running ollama on a Debian server and use oterm as the interface.
After some chats (fewer than 10 normal questions), ollama fails to respond any more, and running `ollama run mixtral` no longer succeeds (it keeps loading).
I noted that the same issue happened, as in #1863. Is there a solution at ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2225/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2225/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7320 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7320/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7320/comments | https://api.github.com/repos/ollama/ollama/issues/7320/events | https://github.com/ollama/ollama/issues/7320 | 2,605,969,661 | I_kwDOJ0Z1Ps6bU_D9 | 7,320 | 0.4.0 regression | {
"login": "skobkin",
"id": 967576,
"node_id": "MDQ6VXNlcjk2NzU3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/967576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skobkin",
"html_url": "https://github.com/skobkin",
"followers_url": "https://api.github.com/users/skobkin/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-10-22T16:44:36 | 2024-10-22T19:54:17 | 2024-10-22T19:54:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Just updated ollama to [`0.4.0-rc3-rocm`](https://hub.docker.com/layers/ollama/ollama/0.4.0-rc3/images/sha256-6b75f17d6160b28dec8d8d519ceec02dfdae20e1c2451db34f3a3351f5de373a?context=explore) to test new LLaMA 3.2 Vision capabilities.
But it isn't working and is returning 500 to OpenWebUI. It ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7320/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5183 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5183/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5183/comments | https://api.github.com/repos/ollama/ollama/issues/5183/events | https://github.com/ollama/ollama/issues/5183 | 2,364,585,215 | I_kwDOJ0Z1Ps6M8LT_ | 5,183 | `ollama show` has quotes around stop words | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-06-20T14:20:54 | 2024-06-23T02:09:25 | 2024-06-23T02:09:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
% ollama show llama3
Model
arch llama
parameters 8.0B
quantization Q4_0
context length 8192 ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5183/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5333 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5333/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5333/comments | https://api.github.com/repos/ollama/ollama/issues/5333/events | https://github.com/ollama/ollama/pull/5333 | 2,378,631,008 | PR_kwDOJ0Z1Ps5zzEuN | 5,333 | update readme for gemma 2 | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-06-27T16:43:43 | 2024-06-27T16:45:18 | 2024-06-27T16:45:16 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5333",
"html_url": "https://github.com/ollama/ollama/pull/5333",
"diff_url": "https://github.com/ollama/ollama/pull/5333.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5333.patch",
"merged_at": "2024-06-27T16:45:16"
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5333/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2534/comments | https://api.github.com/repos/ollama/ollama/issues/2534/events | https://github.com/ollama/ollama/issues/2534 | 2,137,822,432 | I_kwDOJ0Z1Ps5_bJTg | 2,534 | Packaging issues with vendored llama.cpp | {
"login": "viraptor",
"id": 188063,
"node_id": "MDQ6VXNlcjE4ODA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/188063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viraptor",
"html_url": "https://github.com/viraptor",
"followers_url": "https://api.github.com/users/virapto... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-02-16T03:51:12 | 2024-10-17T22:03:10 | 2024-10-17T22:03:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I'm trying to package the new version (after llama.cpp has been vendored) for nixpkgs and I'm running into issues. Essentially, ollama tries to be very clever and generic with the build, but this runs counter to what the systems that provide the packaged ollama and llama.cpp are trying to achieve.
Since we alrea... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2534/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2534/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/65 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/65/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/65/comments | https://api.github.com/repos/ollama/ollama/issues/65/events | https://github.com/ollama/ollama/pull/65 | 1,797,545,560 | PR_kwDOJ0Z1Ps5VILe1 | 65 | call llama.cpp directly from go | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2023-07-10T20:36:56 | 2023-07-11T21:02:07 | 2023-07-11T19:01:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/65",
"html_url": "https://github.com/ollama/ollama/pull/65",
"diff_url": "https://github.com/ollama/ollama/pull/65.diff",
"patch_url": "https://github.com/ollama/ollama/pull/65.patch",
"merged_at": "2023-07-11T19:01:03"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/65/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/65/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5641 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5641/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5641/comments | https://api.github.com/repos/ollama/ollama/issues/5641/events | https://github.com/ollama/ollama/issues/5641 | 2,404,358,129 | I_kwDOJ0Z1Ps6PT5fx | 5,641 | Ollama Puts out Gibberish After a While. | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-07-11T23:31:36 | 2024-10-24T02:50:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run the MMLU Pro benchmark on phi3 or deepseek-coder-v2 with [this script](https://github.com/chigkim/Ollama-MMLU-Pro/), which uses the OpenAI-compatible API, it runs for a while.
Then, all of a sudden, it starts to output:
deepseek-coder-v2:16b-lite-instruct-q8_0
`@@@@@@@@@@@@@@@@@@@@@@@@... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5641/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5641/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3592 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3592/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3592/comments | https://api.github.com/repos/ollama/ollama/issues/3592/events | https://github.com/ollama/ollama/issues/3592 | 2,237,568,024 | I_kwDOJ0Z1Ps6FXpQY | 3,592 | Long context like 32000 with command-r produces gibberish with random characters. | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-04-11T12:03:14 | 2024-04-19T15:41:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It responds in random characters. <5+a\j`=7Dc'_2^@&Til$g#*�wR0;��)3�ey�un��J���fd6�]{���-S����t�Z���:�x�"b|BI�jmĶ7��T'V?4_k^z0NU+=��i�
### What did you expect to see?
Response in English.
### Steps to reproduce
Initiate a chat with command-r via the API with num_ctx > 25000.
### Are there any rec... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3592/timeline | null | null | false |
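As context for the report above: `num_ctx` is passed through the `options` object of Ollama's native `/api/chat` endpoint. A minimal repro sketch, assuming a local Ollama server and a pulled `command-r` model (names and context size taken from the report):

```typescript
// Repro sketch for the long-context report above: request command-r with a
// large num_ctx through Ollama's native chat API and print the reply.
async function chatWithLargeContext(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "command-r",
      messages: [{ role: "user", content: "Summarize this document." }],
      options: { num_ctx: 32000 }, // context size from the report
      stream: false,
    }),
  });
  const data = await res.json();
  console.log(data.message.content); // gibberish, per the report above
}

chatWithLargeContext();
```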
https://api.github.com/repos/ollama/ollama/issues/788 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/788/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/788/comments | https://api.github.com/repos/ollama/ollama/issues/788/events | https://github.com/ollama/ollama/issues/788 | 1,942,855,896 | I_kwDOJ0Z1Ps5zzaDY | 788 | i got this issue from orca-mini 7b | {
"login": "Boluex",
"id": 90112749,
"node_id": "MDQ6VXNlcjkwMTEyNzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/90112749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Boluex",
"html_url": "https://github.com/Boluex",
"followers_url": "https://api.github.com/users/Boluex/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 36 | 2023-10-14T01:32:07 | 2024-07-26T15:20:45 | 2023-10-31T17:29:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am using a CPU-only system with 8 GB of RAM and no VRAM. I downloaded the orca-mini 7B model with Ollama, but got this error: `Error: llama runner process has terminated`. How can I fix this? Please help.
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/788/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7518 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7518/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7518/comments | https://api.github.com/repos/ollama/ollama/issues/7518/events | https://github.com/ollama/ollama/issues/7518 | 2,636,634,653 | I_kwDOJ0Z1Ps6dJ9od | 7,518 | Support for # of completions? (for loom obsidian plugin) | {
"login": "cognitivetech",
"id": 55156785,
"node_id": "MDQ6VXNlcjU1MTU2Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/55156785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cognitivetech",
"html_url": "https://github.com/cognitivetech",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-11-05T22:46:12 | 2024-11-05T22:46:12 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm trying to adapt the loom obsidian plugin to use ollama.
Now it seems to work fine, except I only ever get 1 completion, where settings.n is the number of completions I would like to generate.
https://github.com/cosmicoptima/loom/blob/master/main.ts
```javascript
async completeOpenAICompat(prompt: string... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7518/timeline | null | null | false |
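For context on the request above: Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions`, but as of this issue it returns a single choice regardless of `n`. A minimal client-side workaround sketch, assuming a local Ollama server and a pulled `llama3` model, is to fan the request out and collect one choice per call:

```typescript
// Workaround sketch: emulate OpenAI's `n` completions parameter by issuing
// `n` independent requests to Ollama's OpenAI-compatible endpoint.
// Assumptions: Ollama is running locally and `llama3` has been pulled.
async function completeN(prompt: string, n: number): Promise<string[]> {
  const requests = Array.from({ length: n }, async () => {
    const res = await fetch("http://localhost:11434/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3",
        messages: [{ role: "user", content: prompt }],
        temperature: 0.9, // some variance so the n completions differ
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content as string;
  });
  return Promise.all(requests);
}

// Usage: completeN("Write an opening line.", 3).then(console.log);
```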
https://api.github.com/repos/ollama/ollama/issues/8567 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8567/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8567/comments | https://api.github.com/repos/ollama/ollama/issues/8567/events | https://github.com/ollama/ollama/pull/8567 | 2,809,888,422 | PR_kwDOJ0Z1Ps6I7BW- | 8,567 | build: support Compute Capability 5.0, 5.2 and 5.3 for CUDA 12.x | {
"login": "prusnak",
"id": 42201,
"node_id": "MDQ6VXNlcjQyMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/42201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prusnak",
"html_url": "https://github.com/prusnak",
"followers_url": "https://api.github.com/users/prusnak/follower... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2025-01-24T16:50:29 | 2025-01-29T17:19:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8567",
"html_url": "https://github.com/ollama/ollama/pull/8567",
"diff_url": "https://github.com/ollama/ollama/pull/8567.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8567.patch",
"merged_at": null
} | CUDA 12.x still supports Compute Capability 5.0, 5.2 and 5.3, so let's build for these architectures as well.
I have a GPU with CC 5.2 and confirmed that before the change Ollama crashes; afterwards it works just fine.
source: https://stackoverflow.com/questions/28932864/which-compute-capability-is-supported-by-wh... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8567/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8561 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8561/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8561/comments | https://api.github.com/repos/ollama/ollama/issues/8561/events | https://github.com/ollama/ollama/issues/8561 | 2,808,999,555 | I_kwDOJ0Z1Ps6nbe6D | 8,561 | Use cases for using Ollama in Microsoft Word | {
"login": "GPTLocalhost",
"id": 72584872,
"node_id": "MDQ6VXNlcjcyNTg0ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/72584872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GPTLocalhost",
"html_url": "https://github.com/GPTLocalhost",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-01-24T09:46:42 | 2025-01-24T09:46:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If Microsoft Word users are a potential target audience for Ollama, what use cases would you expect? We recently released the following quick demo based on Ollama, and we are curious about what the next use case could be from this community's perspective. We’d greatly appreciate any advice.
* [Use Ollama in Microsoft... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8561/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2726 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2726/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2726/comments | https://api.github.com/repos/ollama/ollama/issues/2726/events | https://github.com/ollama/ollama/issues/2726 | 2,152,232,668 | I_kwDOJ0Z1Ps6ASHbc | 2,726 | Ollama 01.26 embeddings, alternative Models? | {
"login": "Daniel07n",
"id": 17878323,
"node_id": "MDQ6VXNlcjE3ODc4MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17878323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Daniel07n",
"html_url": "https://github.com/Daniel07n",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 10 | 2024-02-24T09:35:39 | 2024-04-02T17:21:43 | 2024-03-12T04:50:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, is there the possibility to load alternative embedding models other than BERT and Nomic? Like for the larger LLMs either via the list shown on Ollama.com or as a manual download from Hugginface? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2726/timeline | null | completed | false |
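As background for the question above: any embedding model pulled into Ollama is served through the same native `/api/embeddings` endpoint. A minimal sketch, assuming a local server and that `nomic-embed-text` has been pulled:

```typescript
// Sketch: request an embedding from Ollama's native embeddings endpoint.
// Assumption: the `nomic-embed-text` model has been pulled locally.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const data = await res.json();
  return data.embedding as number[];
}

embed("Hello, world").then((v) => console.log(v.length)); // vector dimension
```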
https://api.github.com/repos/ollama/ollama/issues/183 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/183/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/183/comments | https://api.github.com/repos/ollama/ollama/issues/183/events | https://github.com/ollama/ollama/issues/183 | 1,817,200,381 | I_kwDOJ0Z1Ps5sUEb9 | 183 | User should be able to find models that support commercial use or at least search by license type | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-07-23T16:52:30 | 2023-08-30T21:36:58 | 2023-08-30T21:36:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Some of the license types allow commercial use. Today the user needs to go to other platforms to see if a model works for them. They should be able to stay at the ollama command line to get the basic info like gpl vs apache vs whatever else | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/183/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2526 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2526/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2526/comments | https://api.github.com/repos/ollama/ollama/issues/2526/events | https://github.com/ollama/ollama/pull/2526 | 2,137,504,500 | PR_kwDOJ0Z1Ps5nB2N- | 2,526 | Harden the OLLAMA_HOST lookup for quotes | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-02-15T21:47:30 | 2024-02-15T22:13:42 | 2024-02-15T22:13:40 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2526",
"html_url": "https://github.com/ollama/ollama/pull/2526",
"diff_url": "https://github.com/ollama/ollama/pull/2526.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2526.patch",
"merged_at": "2024-02-15T22:13:40"
} | Fixes #2512 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2526/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1339 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1339/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1339/comments | https://api.github.com/repos/ollama/ollama/issues/1339/events | https://github.com/ollama/ollama/issues/1339 | 2,020,001,954 | I_kwDOJ0Z1Ps54Zsii | 1,339 | MacOS opens kernel tasks doesn't unload model | {
"login": "igorcosta",
"id": 1169752,
"node_id": "MDQ6VXNlcjExNjk3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1169752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorcosta",
"html_url": "https://github.com/igorcosta",
"followers_url": "https://api.github.com/users/ig... | [] | closed | false | null | [] | null | 11 | 2023-12-01T03:50:57 | 2024-08-06T07:35:43 | 2024-01-26T22:28:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | One of the things that makes me cringe is that, when swapping between models, it never releases the memory when I'm done using it. It just piles up, and I eventually have to restart my Mac.
Would memory optimisation be a target for the next release? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1339/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3776 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3776/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3776/comments | https://api.github.com/repos/ollama/ollama/issues/3776/events | https://github.com/ollama/ollama/issues/3776 | 2,254,515,521 | I_kwDOJ0Z1Ps6GYS1B | 3,776 | Manifest error, no such host found. | {
"login": "harshaelon",
"id": 128384441,
"node_id": "U_kgDOB6b9uQ",
"avatar_url": "https://avatars.githubusercontent.com/u/128384441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harshaelon",
"html_url": "https://github.com/harshaelon",
"followers_url": "https://api.github.com/users/har... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 1 | 2024-04-20T11:42:09 | 2024-05-02T00:22:32 | 2024-05-02T00:22:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I was running Ollama and trying to use it with llama2, but I was not able to run it or proceed any further. Here is the screenshot; any help would be highly appreciated.

.
### OS
Wind... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3776/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3776/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/658 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/658/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/658/comments | https://api.github.com/repos/ollama/ollama/issues/658/events | https://github.com/ollama/ollama/pull/658 | 1,920,230,913 | PR_kwDOJ0Z1Ps5blkmB | 658 | Add colab badge | {
"login": "bitsnaps",
"id": 1217741,
"node_id": "MDQ6VXNlcjEyMTc3NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bitsnaps",
"html_url": "https://github.com/bitsnaps",
"followers_url": "https://api.github.com/users/bitsn... | [] | closed | false | null | [] | null | 3 | 2023-09-30T11:57:03 | 2023-10-06T09:31:23 | 2023-10-01T05:39:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/658",
"html_url": "https://github.com/ollama/ollama/pull/658",
"diff_url": "https://github.com/ollama/ollama/pull/658.diff",
"patch_url": "https://github.com/ollama/ollama/pull/658.patch",
"merged_at": null
} | Update the README to add a working Colab notebook, tested using the free T4 GPU. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/658/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4070 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4070/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4070/comments | https://api.github.com/repos/ollama/ollama/issues/4070/events | https://github.com/ollama/ollama/issues/4070 | 2,272,867,624 | I_kwDOJ0Z1Ps6HeTUo | 4,070 | Ollama run model error | {
"login": "pandaymx",
"id": 82139672,
"node_id": "MDQ6VXNlcjgyMTM5Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/82139672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandaymx",
"html_url": "https://github.com/pandaymx",
"followers_url": "https://api.github.com/users/pan... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-05-01T03:33:49 | 2024-05-02T16:20:41 | 2024-05-02T16:20:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
## question
I change env because my gpu don't support. My gpu version is AMD RX 6750 gre
<img width="465" alt="PixPin_2024-05-01_11-25-24" src="https://github.com/ollama/ollama/assets/82139672/19627989-f37b-44e7-aeb6-47c02db8b0f3">
<img width="1015" alt="PixPin_2024-05-01_11-26-58" src="htt... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4070/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/119 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/119/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/119/comments | https://api.github.com/repos/ollama/ollama/issues/119/events | https://github.com/ollama/ollama/issues/119 | 1,811,268,907 | I_kwDOJ0Z1Ps5r9cUr | 119 | Where is the model file stored? | {
"login": "happy15",
"id": 983570,
"node_id": "MDQ6VXNlcjk4MzU3MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/983570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/happy15",
"html_url": "https://github.com/happy15",
"followers_url": "https://api.github.com/users/happy15/fo... | [] | closed | false | null | [] | null | 8 | 2023-07-19T06:43:23 | 2024-02-04T08:20:25 | 2023-07-19T06:45:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, first thanks for the awesome work.
Just wondering, where is the model file located? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/119/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/988 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/988/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/988/comments | https://api.github.com/repos/ollama/ollama/issues/988/events | https://github.com/ollama/ollama/pull/988 | 1,976,472,811 | PR_kwDOJ0Z1Ps5ejUMQ | 988 | Add `encode` and `decode` API endpoints | {
"login": "samdevbr",
"id": 34373264,
"node_id": "MDQ6VXNlcjM0MzczMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/34373264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samdevbr",
"html_url": "https://github.com/samdevbr",
"followers_url": "https://api.github.com/users/sam... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 5 | 2023-11-03T15:42:18 | 2023-11-16T16:03:34 | 2023-11-14T12:52:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/988",
"html_url": "https://github.com/ollama/ollama/pull/988",
"diff_url": "https://github.com/ollama/ollama/pull/988.diff",
"patch_url": "https://github.com/ollama/ollama/pull/988.patch",
"merged_at": null
} | While working on a POC project for the company I work at, I've come across the need to encode and decode prompts.
We are building a long-term memory POC, which requires token management; as of now, we cannot predict how long the token list of a prompt might be.
This PR creates the following endpoints:
- `... | {
"login": "samdevbr",
"id": 34373264,
"node_id": "MDQ6VXNlcjM0MzczMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/34373264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samdevbr",
"html_url": "https://github.com/samdevbr",
"followers_url": "https://api.github.com/users/sam... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/988/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/988/timeline | null | null | true |
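The PR above was closed without being merged, so the routes below do not exist in Ollama; this is only a sketch of the client-side shape the proposal implies, with hypothetical `/api/encode` and `/api/decode` endpoints:

```typescript
// Hypothetical sketch only: these /api/encode and /api/decode routes were
// proposed in the PR above but never merged, so they do NOT exist in Ollama.
async function countTokens(model: string, prompt: string): Promise<number> {
  const res = await fetch("http://localhost:11434/api/encode", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt }),
  });
  const data = await res.json();
  return (data.tokens as number[]).length; // token budget for memory management
}
```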
https://api.github.com/repos/ollama/ollama/issues/3730 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3730/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3730/comments | https://api.github.com/repos/ollama/ollama/issues/3730/events | https://github.com/ollama/ollama/issues/3730 | 2,250,331,970 | I_kwDOJ0Z1Ps6GIVdC | 3,730 | Upgrading to the latest version fails to start with an error - windows subprocess crash on 0.1.32 | {
"login": "hyanqing1",
"id": 26663452,
"node_id": "MDQ6VXNlcjI2NjYzNDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26663452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyanqing1",
"html_url": "https://github.com/hyanqing1",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-04-18T10:30:25 | 2024-05-21T18:22:11 | 2024-05-21T18:22:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
升级了最新版本0.1.32,启动报错,错误如下:
Error: llama runner process no longer running: 3221225785
后来又重装了0.1.31版本,正常启动。
我的是windows10系统
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
0.1.32 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3730/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3238 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3238/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3238/comments | https://api.github.com/repos/ollama/ollama/issues/3238/events | https://github.com/ollama/ollama/issues/3238 | 2,194,261,265 | I_kwDOJ0Z1Ps6CycUR | 3,238 | Add a google colab notebook link to the github for new users. | {
"login": "jquintanilla4",
"id": 32947277,
"node_id": "MDQ6VXNlcjMyOTQ3Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/32947277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jquintanilla4",
"html_url": "https://github.com/jquintanilla4",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 4 | 2024-03-19T07:47:01 | 2024-04-23T05:18:44 | 2024-03-19T09:11:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Every once in a while people will ask how to get Ollama running on Google Colab, either for doing dev work inside of Colab or for using it as a remote GPU. I think if the GitHub repo had a one-click button to a notebook, it would settle this evergreen question in the community.
### How shou... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3238/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/2674 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2674/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2674/comments | https://api.github.com/repos/ollama/ollama/issues/2674/events | https://github.com/ollama/ollama/pull/2674 | 2,148,836,170 | PR_kwDOJ0Z1Ps5nooA5 | 2,674 | Update Readme.md : Add Gemma to the table of supported example models | {
"login": "sethupavan12",
"id": 60856766,
"node_id": "MDQ6VXNlcjYwODU2NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/60856766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sethupavan12",
"html_url": "https://github.com/sethupavan12",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2024-02-22T11:20:16 | 2024-02-22T18:08:17 | 2024-02-22T18:08:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2674",
"html_url": "https://github.com/ollama/ollama/pull/2674",
"diff_url": "https://github.com/ollama/ollama/pull/2674.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2674.patch",
"merged_at": null
} | Minor
Adding Google Gemma to the list | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2674/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1901 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1901/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1901/comments | https://api.github.com/repos/ollama/ollama/issues/1901/events | https://github.com/ollama/ollama/issues/1901 | 2,074,610,060 | I_kwDOJ0Z1Ps57qAmM | 1,901 | "api/generate" stalls after some queries | {
"login": "oderwat",
"id": 719156,
"node_id": "MDQ6VXNlcjcxOTE1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/719156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oderwat",
"html_url": "https://github.com/oderwat",
"followers_url": "https://api.github.com/users/oderwat/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | closed | false | null | [] | null | 8 | 2024-01-10T15:24:39 | 2024-03-14T12:58:16 | 2024-03-13T23:44:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have a strange phenomenon and can't get rid of it without a workaround:
When I call "api/generate" with the same model regularly, every few seconds (5s-15s), the API suddenly stops responding after 15-20 calls (which seems to depend on the model size?).
This is reproducible with different models and with both: A... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1901/timeline | null | completed | false |
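A minimal sketch of the calling pattern described above — repeated `/api/generate` requests against the same model every few seconds — which may help reproduce the stall. `keep_alive` is included because keeping the model resident rules out reload churn as a variable; the model name is an assumption, so adjust it to whichever model exhibits the stall:

```typescript
// Repro sketch for the stall report above: call /api/generate on a timer.
// Assumption: a local Ollama server with `llama2` pulled.
async function generateOnce(i: number): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      prompt: `Request number ${i}: say hello.`,
      stream: false,
      keep_alive: "10m", // keep the model loaded between calls
    }),
  });
  const data = await res.json();
  console.log(i, data.response.slice(0, 40));
}

// Fire a request every 10 seconds; the report says the API stops
// responding after 15-20 calls.
let i = 0;
setInterval(() => generateOnce(++i), 10_000);
```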
https://api.github.com/repos/ollama/ollama/issues/1398 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1398/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1398/comments | https://api.github.com/repos/ollama/ollama/issues/1398/events | https://github.com/ollama/ollama/issues/1398 | 2,028,690,428 | I_kwDOJ0Z1Ps5461v8 | 1,398 | Bug: API - Chat docs examples are using `api/generate` in URL instead of `api/chat` | {
"login": "calderonsamuel",
"id": 19418298,
"node_id": "MDQ6VXNlcjE5NDE4Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/19418298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calderonsamuel",
"html_url": "https://github.com/calderonsamuel",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 4 | 2023-12-06T14:31:30 | 2023-12-06T22:22:17 | 2023-12-06T20:10:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/jmorganca/ollama/blob/32f62fbb8e0b1ecb4ec8369586562abce86c8e50/docs/api.md?plain=1#L317-L327
https://github.com/jmorganca/ollama/blob/32f62fbb8e0b1ecb4ec8369586562abce86c8e50/docs/api.md?plain=1#L366-L384 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1398/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1398/timeline | null | completed | false |
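For reference alongside the doc fix above: the chat examples should target `/api/chat` with a `messages` array rather than `/api/generate` with a `prompt`. A minimal sketch of a correct chat call, assuming a local server with `llama2` pulled:

```typescript
// Sketch of a correct chat request: note the /api/chat path and the
// messages array, which is what the linked docs examples should use.
async function chat(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2",
      messages: [{ role: "user", content: "Why is the sky blue?" }],
      stream: false,
    }),
  });
  const data = await res.json();
  console.log(data.message.content);
}

chat();
```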
https://api.github.com/repos/ollama/ollama/issues/3865 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3865/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3865/comments | https://api.github.com/repos/ollama/ollama/issues/3865/events | https://github.com/ollama/ollama/pull/3865 | 2,260,256,162 | PR_kwDOJ0Z1Ps5tjCrA | 3,865 | add OLLAMA_KEEP_ALIVE env variable to FAQ | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-24T03:58:24 | 2024-04-24T04:06:52 | 2024-04-24T04:06:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3865",
"html_url": "https://github.com/ollama/ollama/pull/3865",
"diff_url": "https://github.com/ollama/ollama/pull/3865.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3865.patch",
"merged_at": "2024-04-24T04:06:51"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3865/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5322 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5322/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5322/comments | https://api.github.com/repos/ollama/ollama/issues/5322/events | https://github.com/ollama/ollama/issues/5322 | 2,377,849,904 | I_kwDOJ0Z1Ps6Nuxww | 5,322 | Latest 0.1.47 pre-release seems to break every model | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2024-06-27T11:04:23 | 2024-06-27T11:48:40 | 2024-06-27T11:48:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried the pre-release, and every single model I tried either outputs random bits of code along with a response or just outputs random bits of code with no response.
It seems to break literally every single model I have.
Edit: nevermind, not sure what happened exactly, but I reinstalled the same version after downgrading a... | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5322/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4944 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4944/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4944/comments | https://api.github.com/repos/ollama/ollama/issues/4944/events | https://github.com/ollama/ollama/issues/4944 | 2,342,055,691 | I_kwDOJ0Z1Ps6LmO8L | 4,944 | Ollama reports incorrect version and does not show up in System tray | {
"login": "VirtualZardoz",
"id": 167669409,
"node_id": "U_kgDOCf5uoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/167669409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VirtualZardoz",
"html_url": "https://github.com/VirtualZardoz",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 14 | 2024-06-09T06:41:39 | 2024-06-18T16:09:44 | 2024-06-18T16:09:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ever since I upgraded Ollama to version 0.1.38, it continues reporting this as its version number:
```
[...]> ollama -v
ollama version is 0.1.38
```
This is despite the fact that I have updated it to every released version since 0.1.38; in fact, my current version should be reported as 0.1.42.
... | {
"login": "VirtualZardoz",
"id": 167669409,
"node_id": "U_kgDOCf5uoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/167669409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VirtualZardoz",
"html_url": "https://github.com/VirtualZardoz",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4944/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5277 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5277/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5277/comments | https://api.github.com/repos/ollama/ollama/issues/5277/events | https://github.com/ollama/ollama/issues/5277 | 2,373,096,132 | I_kwDOJ0Z1Ps6NcpLE | 5,277 | "How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http://localhost:11434' link, but replacing this link with the GPT-4 link in applications does not work. Please provid... | {
"login": "windkwbs",
"id": 129468439,
"node_id": "U_kgDOB7eIFw",
"avatar_url": "https://avatars.githubusercontent.com/u/129468439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/windkwbs",
"html_url": "https://github.com/windkwbs",
"followers_url": "https://api.github.com/users/windkwbs/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-06-25T16:16:53 | 2024-07-24T19:03:27 | 2024-07-24T19:03:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | "How to utilize the Ollama local model in Windows 10 to generate the same API link as OpenAI, enabling other programs to replace the GPT-4 link? Currently, entering 'ollama serve' in CMD generates the 'http://localhost:11434/' link, but replacing this link with the GPT-4 link in applications does not work. Please provi... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5277/timeline | null | completed | false |
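The answer to questions like the one above is Ollama's OpenAI-compatible API: point the application's base URL at `http://localhost:11434/v1` instead of OpenAI's endpoint. A minimal sketch using the official `openai` client library (assumptions: the `openai` npm package is installed and a model such as `llama3` has been pulled; the API key is a required placeholder, not a real credential):

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at the local Ollama server.
// The key is a placeholder: Ollama ignores it, but the client requires one.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama",
});

const completion = await client.chat.completions.create({
  model: "llama3", // use a locally pulled model name instead of "gpt-4"
  messages: [{ role: "user", content: "Hello from Windows 10!" }],
});

console.log(completion.choices[0].message.content);
```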
https://api.github.com/repos/ollama/ollama/issues/3899 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3899/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3899/comments | https://api.github.com/repos/ollama/ollama/issues/3899/events | https://github.com/ollama/ollama/pull/3899 | 2,262,481,897 | PR_kwDOJ0Z1Ps5tqp7r | 3,899 | show ggml modelinfo through the show api | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-25T01:54:59 | 2024-07-12T03:36:34 | 2024-07-12T03:36:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3899",
"html_url": "https://github.com/ollama/ollama/pull/3899",
"diff_url": "https://github.com/ollama/ollama/pull/3899.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3899.patch",
"merged_at": null
} | This change exposes the GGML KVs and tensor data to make it easier to introspect a model. | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3899/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3899/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6194 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6194/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6194/comments | https://api.github.com/repos/ollama/ollama/issues/6194/events | https://github.com/ollama/ollama/issues/6194 | 2,450,037,869 | I_kwDOJ0Z1Ps6SCJxt | 6,194 | Please add CodeShell to Ollama/library, as llama.cpp already supports it | {
"login": "vimBashMing",
"id": 148437161,
"node_id": "U_kgDOCNj4qQ",
"avatar_url": "https://avatars.githubusercontent.com/u/148437161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vimBashMing",
"html_url": "https://github.com/vimBashMing",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-08-06T06:17:52 | 2024-08-17T02:50:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
The CodeShell model: https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4
Since CodeShell is already supported by llama.cpp, please help add it to ollama/library. Thanks!
<img width="896" alt="image" src="https://github.com/user-attachments/assets/0b125b59-17f0-44a4-83e7-98a13f849543">
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6194/timeline | null | null | false |
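Until a model lands in the library, a GGUF build can usually be imported locally through a Modelfile. A hedged sketch (the GGUF file name below is hypothetical):

```python
# Hedged sketch: import a local CodeShell GGUF into Ollama via a Modelfile.
# "codeshell-chat.gguf" is a hypothetical name for a converted build.
import pathlib
import subprocess

pathlib.Path("Modelfile").write_text("FROM ./codeshell-chat.gguf\n")
subprocess.run(["ollama", "create", "codeshell", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "codeshell", "Hello"], check=True)
```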
https://api.github.com/repos/ollama/ollama/issues/4179 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4179/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4179/comments | https://api.github.com/repos/ollama/ollama/issues/4179/events | https://github.com/ollama/ollama/issues/4179 | 2,279,707,154 | I_kwDOJ0Z1Ps6H4ZIS | 4,179 | pull qwen:32b-chat-v1.5-q4_0 Error: unexpected end of JSON input | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-05T20:06:55 | 2024-05-06T18:33:54 | 2024-05-06T18:33:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama pull qwen:32b-chat-v1.5-q4_0` results in `Error: unexpected end of JSON input`.
However, `ollama pull qwen:32b` works (right now they point to the same hash).
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4179/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4387 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4387/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4387/comments | https://api.github.com/repos/ollama/ollama/issues/4387/events | https://github.com/ollama/ollama/pull/4387 | 2,291,562,241 | PR_kwDOJ0Z1Ps5vMQ3Z | 4,387 | Correct typos. | {
"login": "fangtaosong",
"id": 59201842,
"node_id": "MDQ6VXNlcjU5MjAxODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/59201842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fangtaosong",
"html_url": "https://github.com/fangtaosong",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-05-13T00:04:01 | 2024-05-13T01:21:11 | 2024-05-13T01:21:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4387",
"html_url": "https://github.com/ollama/ollama/pull/4387",
"diff_url": "https://github.com/ollama/ollama/pull/4387.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4387.patch",
"merged_at": "2024-05-13T01:21:11"
} | ASSSISTANT --> ASSISTANT | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4387/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/71 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/71/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/71/comments | https://api.github.com/repos/ollama/ollama/issues/71/events | https://github.com/ollama/ollama/pull/71 | 1,799,943,250 | PR_kwDOJ0Z1Ps5VQYsF | 71 | error checking new model | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-12T00:09:31 | 2023-07-12T16:20:40 | 2023-07-12T16:20:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/71",
"html_url": "https://github.com/ollama/ollama/pull/71",
"diff_url": "https://github.com/ollama/ollama/pull/71.diff",
"patch_url": "https://github.com/ollama/ollama/pull/71.patch",
"merged_at": "2023-07-12T16:20:33"
} | check nil to prevent later nil pointer dereferences | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/71/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/71/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4803 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4803/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4803/comments | https://api.github.com/repos/ollama/ollama/issues/4803/events | https://github.com/ollama/ollama/issues/4803 | 2,332,454,660 | I_kwDOJ0Z1Ps6LBm8E | 4,803 | Run chat api with Llama3 8B Model converted by llama.cpp had infinity response time | {
"login": "cuongnguyengit",
"id": 45245565,
"node_id": "MDQ6VXNlcjQ1MjQ1NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/45245565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cuongnguyengit",
"html_url": "https://github.com/cuongnguyengit",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-04T03:26:14 | 2024-06-05T20:45:36 | 2024-06-05T20:45:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi team,
I used your guide (https://github.com/ollama/ollama/blob/main/docs/import.md) to convert https://huggingface.co/hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode to a GGUF file.
All of the conversions were OK, but when I run the model with Ollama I get the following error:
llama_new_c... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4803/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1262 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1262/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1262/comments | https://api.github.com/repos/ollama/ollama/issues/1262/events | https://github.com/ollama/ollama/pull/1262 | 2,009,175,048 | PR_kwDOJ0Z1Ps5gR4sX | 1,262 | windows CUDA support | {
"login": "vinjn",
"id": 558657,
"node_id": "MDQ6VXNlcjU1ODY1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/558657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinjn",
"html_url": "https://github.com/vinjn",
"followers_url": "https://api.github.com/users/vinjn/followers"... | [] | closed | false | null | [] | null | 0 | 2023-11-24T06:26:31 | 2023-12-12T19:00:27 | 2023-11-24T22:16:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1262",
"html_url": "https://github.com/ollama/ollama/pull/1262",
"diff_url": "https://github.com/ollama/ollama/pull/1262.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1262.patch",
"merged_at": "2023-11-24T22:16:36"
} | Fix #403
- Support CUDA builds on Windows
- Import the "containerd/console" library to support colorful output in the Windows terminal

| {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1262/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3709 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3709/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3709/comments | https://api.github.com/repos/ollama/ollama/issues/3709/events | https://github.com/ollama/ollama/pull/3709 | 2,249,167,308 | PR_kwDOJ0Z1Ps5s-GEt | 3,709 | Adds support for customizing GPU build flags in llama.cpp | {
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/foll... | [] | closed | false | null | [] | null | 3 | 2024-04-17T20:03:02 | 2024-04-23T16:30:31 | 2024-04-23T16:28:34 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3709",
"html_url": "https://github.com/ollama/ollama/pull/3709",
"diff_url": "https://github.com/ollama/ollama/pull/3709.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3709.patch",
"merged_at": "2024-04-23T16:28:34"
} | Appends OLLAMA_CUSTOM_GPU_DEFS to CMAKE_DEFS. This overrides any previously set build flags and allows customizing GPU options when building from source. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3709/timeline | null | null | true |
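A hedged sketch of how the new variable might be used when building from source; the specific CMake definition and the build commands are assumptions based on a source checkout of that era:

```python
# Hedged sketch: pass an extra CMake definition to the llama.cpp build via
# OLLAMA_CUSTOM_GPU_DEFS, then run the generate/build steps.
import os
import subprocess

env = dict(os.environ)
# Assumption: restrict the CUDA build to compute capability 8.6 only.
env["OLLAMA_CUSTOM_GPU_DEFS"] = "-DCMAKE_CUDA_ARCHITECTURES=86"
subprocess.run(["go", "generate", "./..."], env=env, check=True)
subprocess.run(["go", "build", "."], env=env, check=True)
```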
https://api.github.com/repos/ollama/ollama/issues/5545 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5545/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5545/comments | https://api.github.com/repos/ollama/ollama/issues/5545/events | https://github.com/ollama/ollama/issues/5545 | 2,395,793,435 | I_kwDOJ0Z1Ps6OzOgb | 5,545 | OpenAI v1/completion throws an error when passing list of strings to stop parameter. | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.git... | null | 2 | 2024-07-08T14:23:48 | 2024-07-10T00:59:12 | 2024-07-09T21:01:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The new OpenAI v1/completions endpoint (not chat.completions) throws an error if you pass a list of strings to the stop parameter.
```python
from openai import OpenAI
client = OpenAI(base_url=base_url, api_key=api_key)
prompt = """User: Hello,
Assistant: Hi, how can I help you?
User: How's it going?
As... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5545/timeline | null | completed | false |
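A minimal repro sketch for the record above; base_url and the model name are assumptions:

```python
# Hedged repro sketch for the stop-parameter issue above: the legacy
# completions endpoint with stop given as a list of strings.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
resp = client.completions.create(
    model="llama3",  # assumption: any locally pulled model
    prompt="User: Hello,\nAssistant:",
    stop=["User:", "Assistant:"],  # the list form reportedly triggered the error
)
print(resp.choices[0].text)
```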
https://api.github.com/repos/ollama/ollama/issues/128 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/128/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/128/comments | https://api.github.com/repos/ollama/ollama/issues/128/events | https://github.com/ollama/ollama/pull/128 | 1,812,513,931 | PR_kwDOJ0Z1Ps5V7ERj | 128 | Update modelfile.md | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2023-07-19T18:38:21 | 2023-12-05T23:52:44 | 2023-07-19T20:40:39 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/128",
"html_url": "https://github.com/ollama/ollama/pull/128",
"diff_url": "https://github.com/ollama/ollama/pull/128.diff",
"patch_url": "https://github.com/ollama/ollama/pull/128.patch",
"merged_at": "2023-07-19T20:40:39"
} | null | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/128/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4034/comments | https://api.github.com/repos/ollama/ollama/issues/4034/events | https://github.com/ollama/ollama/issues/4034 | 2,269,952,319 | I_kwDOJ0Z1Ps6HTLk_ | 4,034 | Implement downloads via torrents | {
"login": "f321x",
"id": 51097237,
"node_id": "MDQ6VXNlcjUxMDk3MjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/51097237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f321x",
"html_url": "https://github.com/f321x",
"followers_url": "https://api.github.com/users/f321x/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6896227207,
"node_id": ... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 5 | 2024-04-29T20:51:10 | 2024-11-14T22:55:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Model downloads over a slow (10 Mbit) internet connection are really unreliable and crash roughly every 5-10 GB for me (EOF, max retries).
At the same time, huge torrents work very reliably.
If you could implement a call to an external torrent client for model downloading, or implement a torrent client, the download experienc...
"url": "https://api.github.com/repos/ollama/ollama/issues/4034/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4034/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2895 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2895/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2895/comments | https://api.github.com/repos/ollama/ollama/issues/2895/events | https://github.com/ollama/ollama/issues/2895 | 2,165,423,108 | I_kwDOJ0Z1Ps6BEbwE | 2,895 | May I add GBNF support? | {
"login": "josharian",
"id": 67496,
"node_id": "MDQ6VXNlcjY3NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/67496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josharian",
"html_url": "https://github.com/josharian",
"followers_url": "https://api.github.com/users/josharian/... | [] | closed | false | null | [] | null | 1 | 2024-03-03T15:45:31 | 2024-03-03T18:47:17 | 2024-03-03T18:47:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi! I see you're drowning in issues and PRs here. :)
Partly as a follow-up to #2623, I'd like to add support for arbitrary GBNF. I'm going to do this for myself regardless.
The question is: Should I polish it and document it and upstream it?
The trickiest API part is the command-line interface. The server can ... | {
"login": "josharian",
"id": 67496,
"node_id": "MDQ6VXNlcjY3NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/67496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josharian",
"html_url": "https://github.com/josharian",
"followers_url": "https://api.github.com/users/josharian/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2895/timeline | null | completed | false |
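For context on the format the record above refers to, a minimal grammar in llama.cpp's GBNF notation, held in a Python string. Ollama did not expose a grammar parameter at the time, so no request example is shown:

```python
# Illustrative only: a minimal llama.cpp-style GBNF grammar, the kind of
# constraint this issue proposed passing through Ollama's API.
grammar = r"""
root   ::= answer
answer ::= "yes" | "no"
"""
print(grammar)
```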
https://api.github.com/repos/ollama/ollama/issues/1094 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1094/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1094/comments | https://api.github.com/repos/ollama/ollama/issues/1094/events | https://github.com/ollama/ollama/issues/1094 | 1,989,128,993 | I_kwDOJ0Z1Ps52j7Mh | 1,094 | Ambiguous state in google colab | {
"login": "ArsBinarii",
"id": 6293391,
"node_id": "MDQ6VXNlcjYyOTMzOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6293391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArsBinarii",
"html_url": "https://github.com/ArsBinarii",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2023-11-11T21:44:03 | 2023-11-11T22:15:12 | 2023-11-11T22:15:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Google Colab T4.
Installed CUDA 12.3 via: https://developer.nvidia.com/cuda-downloads
Now nvidia-smi shows 12.0, but nvcc reports 12.3.
I run ollama via:
<pre>
import os
import threading
from pyngrok import ngrok
import subprocess
import time
def ollama():
os.environ['OLLAMA_HOST'] = '0.0.0.0:11434'
... | {
"login": "ArsBinarii",
"id": 6293391,
"node_id": "MDQ6VXNlcjYyOTMzOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6293391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArsBinarii",
"html_url": "https://github.com/ArsBinarii",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1094/timeline | null | completed | false |
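A complete hedged sketch of the pattern the truncated snippet above appears to follow (serving Ollama in Colab and tunnelling it with pyngrok); the wait time and the lack of an ngrok auth-token step are assumptions:

```python
# Hedged sketch: run `ollama serve` in the background and expose port
# 11434 through an ngrok tunnel, mirroring the truncated snippet above.
import os
import subprocess
import time

from pyngrok import ngrok

env = dict(os.environ)
env["OLLAMA_HOST"] = "0.0.0.0:11434"
server = subprocess.Popen(["ollama", "serve"], env=env)
time.sleep(5)  # crude wait for the server to come up

tunnel = ngrok.connect(11434)
print("public URL:", tunnel.public_url)
```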
https://api.github.com/repos/ollama/ollama/issues/1706 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1706/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1706/comments | https://api.github.com/repos/ollama/ollama/issues/1706/events | https://github.com/ollama/ollama/pull/1706 | 2,055,564,721 | PR_kwDOJ0Z1Ps5ivFm9 | 1,706 | Add Community Integration: Chatbox | {
"login": "Bin-Huang",
"id": 20723142,
"node_id": "MDQ6VXNlcjIwNzIzMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/20723142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bin-Huang",
"html_url": "https://github.com/Bin-Huang",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 3 | 2023-12-25T09:46:38 | 2024-02-23T12:17:28 | 2024-02-23T12:17:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1706",
"html_url": "https://github.com/ollama/ollama/pull/1706",
"diff_url": "https://github.com/ollama/ollama/pull/1706.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1706.patch",
"merged_at": "2024-02-23T12:17:28"
} | Thank you so much for developing Ollama; it has made running llama2 on my Mac incredibly simple. I've completely forgotten how I used to handle all the dependencies myself.
Recently, I've added support for Ollama's locally deployed models to my project [Chatbox](https://github.com/Bin-Huang/chatbox) (in the [latest ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1706/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1706/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4044 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4044/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4044/comments | https://api.github.com/repos/ollama/ollama/issues/4044/events | https://github.com/ollama/ollama/issues/4044 | 2,270,989,631 | I_kwDOJ0Z1Ps6HXI0_ | 4,044 | Problems with more GPUs using v0.1.33-rc5 | {
"login": "cBrainAI",
"id": 156695209,
"node_id": "U_kgDOCVb6qQ",
"avatar_url": "https://avatars.githubusercontent.com/u/156695209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cBrainAI",
"html_url": "https://github.com/cBrainAI",
"followers_url": "https://api.github.com/users/cBrainAI/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-04-30T09:36:56 | 2024-05-02T16:04:23 | 2024-05-02T16:04:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am testing the fantastic(!) new features OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS in v0.1.33-rc5.
I am running Ollama using Docker on a machine with two RTX 4090s.
Unfortunately, it seems that Ollama does not use both graphics cards with v0.1.33-rc5; it worked perfectly with ...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4044/timeline | null | completed | false |
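A hedged sketch of the kind of container setup the record above describes, using the Docker Python SDK; the variable values are illustrative, not a recommendation:

```python
# Hedged sketch: start the ollama container with all GPUs visible and the
# two scheduler variables from the record above set. Values are illustrative.
import docker

client = docker.from_env()
client.containers.run(
    "ollama/ollama",
    detach=True,
    environment={"OLLAMA_NUM_PARALLEL": "4", "OLLAMA_MAX_LOADED_MODELS": "2"},
    ports={"11434/tcp": 11434},
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
```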
https://api.github.com/repos/ollama/ollama/issues/2299 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2299/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2299/comments | https://api.github.com/repos/ollama/ollama/issues/2299/events | https://github.com/ollama/ollama/pull/2299 | 2,111,389,729 | PR_kwDOJ0Z1Ps5lo9BF | 2,299 | use `llm.ImageData` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-01T02:57:46 | 2024-02-01T03:11:11 | 2024-02-01T03:11:11 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2299",
"html_url": "https://github.com/ollama/ollama/pull/2299",
"diff_url": "https://github.com/ollama/ollama/pull/2299.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2299.patch",
"merged_at": "2024-02-01T03:11:11"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2299/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/695 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/695/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/695/comments | https://api.github.com/repos/ollama/ollama/issues/695/events | https://github.com/ollama/ollama/issues/695 | 1,925,546,597 | I_kwDOJ0Z1Ps5yxYJl | 695 | Can't resume download (pull) on restart server | {
"login": "KcZLog",
"id": 135950770,
"node_id": "U_kgDOCBpxsg",
"avatar_url": "https://avatars.githubusercontent.com/u/135950770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KcZLog",
"html_url": "https://github.com/KcZLog",
"followers_url": "https://api.github.com/users/KcZLog/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2023-10-04T07:08:44 | 2024-12-16T17:05:01 | 2024-01-16T22:21:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Auto pruning on server start was added in #491
But this causes unfinished/failed download progress to be lost when the server restarts.
Please change this to allow downloads to be resumed.
Suggestions:
1. Don't auto-prune; pruning on delete is probably enough?
2. If auto-pruning is wanted, use a separate directory for unfinished...
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/695/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/8669 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8669/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8669/comments | https://api.github.com/repos/ollama/ollama/issues/8669/events | https://github.com/ollama/ollama/issues/8669 | 2,818,980,252 | I_kwDOJ0Z1Ps6oBjmc | 8,669 | deepseek-r1:32b do not support tools? qwen2.5 base model should support. | {
"login": "HuChundong",
"id": 3194932,
"node_id": "MDQ6VXNlcjMxOTQ5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3194932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuChundong",
"html_url": "https://github.com/HuChundong",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-29T18:49:54 | 2025-01-29T20:29:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use AutoGen, deepseek-r1:32b raises an error: the model does not support tools.
### OS
WSL2
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8669/timeline | null | null | false |
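A hedged sketch of the kind of request that surfaces the error in the record above: a /api/chat call carrying a tools array, which fails for models whose chat template lacks tool support. The tool schema here is illustrative:

```python
# Hedged sketch: send a tools array to /api/chat; for a model without tool
# support the server is expected to reject it, which the except branch shows.
import json
import urllib.error
import urllib.request

payload = {
    "model": "deepseek-r1:32b",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool definition
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    print(urllib.request.urlopen(req).read().decode())
except urllib.error.HTTPError as e:  # expected: "... does not support tools"
    print(e.read().decode())
```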