url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7170 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7170/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7170/comments | https://api.github.com/repos/ollama/ollama/issues/7170/events | https://github.com/ollama/ollama/issues/7170 | 2,580,524,384 | I_kwDOJ0Z1Ps6Zz61g | 7,170 | [Feature request] Support external image URL for Multi Modal Models / Vision LLMs | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-10-11T06:11:29 | 2024-10-11T06:11:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 1. download the image
2. load the image
3. run inference on image 🎉
4. profit 🤑
This is especially useful if you're running ollama on a server and you can't just drag and drop an image.
_Ideally_
```
$ ollama run minicpm-v --verbose
>>> https://farmhouseguide.com/wp-content/uploads/2021/08/group-of-llama... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7170/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7170/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6337/comments | https://api.github.com/repos/ollama/ollama/issues/6337/events | https://github.com/ollama/ollama/issues/6337 | 2,462,980,634 | I_kwDOJ0Z1Ps6Szhoa | 6,337 | Why is the occupancy of my Llama 3 model not high when using the GPU NV T2000, but instead it is computing using the CPU? | {
"login": "pewjs",
"id": 40452701,
"node_id": "MDQ6VXNlcjQwNDUyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/40452701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pewjs",
"html_url": "https://github.com/pewjs",
"followers_url": "https://api.github.com/users/pewjs/follow... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWj... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-08-13T10:21:52 | 2024-09-05T22:00:00 | 2024-09-05T21:59:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use Ollama with Llama 3 or any other model, I find that the GPU usage constantly fluctuates between high and low levels and is never fully occupied. However, the CPU usage remains high, at approximately 40%. Various parameters have been enabled, but to no avail.
 to have a better UI to run ollama.
The app successfully gets the tags at `/api/tags` but however, when I try to send a chat using `/api/chats`, the request is being rejected, and I... | {
"login": "MatthewDlr",
"id": 57815261,
"node_id": "MDQ6VXNlcjU3ODE1MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/57815261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatthewDlr",
"html_url": "https://github.com/MatthewDlr",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7380/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8668/comments | https://api.github.com/repos/ollama/ollama/issues/8668/events | https://github.com/ollama/ollama/pull/8668 | 2,818,695,620 | PR_kwDOJ0Z1Ps6JYzfR | 8,668 | Hide empty terminal window | {
"login": "ashokgelal",
"id": 401055,
"node_id": "MDQ6VXNlcjQwMTA1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/401055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashokgelal",
"html_url": "https://github.com/ashokgelal",
"followers_url": "https://api.github.com/users/a... | [] | open | false | null | [] | null | 0 | 2025-01-29T16:31:07 | 2025-01-29T16:31:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8668",
"html_url": "https://github.com/ollama/ollama/pull/8668",
"diff_url": "https://github.com/ollama/ollama/pull/8668.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8668.patch",
"merged_at": null
This hides the LlamaServer blank window when chatting outside of the terminal (say, with an app like Msty). This has no other side effects when invoking it the regular way.
I had sent a PR for this a while ago and it was closed thinking it had been resolved but this issue still exists. (see: https://github.com/o... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8668/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4340 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4340/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4340/comments | https://api.github.com/repos/ollama/ollama/issues/4340/events | https://github.com/ollama/ollama/issues/4340 | 2,290,642,869 | I_kwDOJ0Z1Ps6IiG-1 | 4,340 | how can I make ollama always run models? | {
"login": "zhaoyuchen1128",
"id": 167266669,
"node_id": "U_kgDOCfhJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/167266669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoyuchen1128",
"html_url": "https://github.com/zhaoyuchen1128",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-05-11T03:54:46 | 2024-07-25T18:56:47 | 2024-07-25T18:56:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
If the model does not run for a while, the model will stop, and reloading will consume a lot of time. So the user experience is not good
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4340/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4716 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4716/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4716/comments | https://api.github.com/repos/ollama/ollama/issues/4716/events | https://github.com/ollama/ollama/issues/4716 | 2,324,901,328 | I_kwDOJ0Z1Ps6Kky3Q | 4,716 | An error occurred while creating modelfile file | {
"login": "wuuudong",
"id": 154340094,
"node_id": "U_kgDOCTMK_g",
"avatar_url": "https://avatars.githubusercontent.com/u/154340094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuuudong",
"html_url": "https://github.com/wuuudong",
"followers_url": "https://api.github.com/users/wuuudong/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-30T07:20:04 | 2024-05-30T16:22:32 | 2024-05-30T16:22:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I used the 4-bit quantized chatglm3-6b file to create the Modelfile with the following information:
C:\Windows\system32>ollama create example -f E:\LLM\chatglm.cpp\models\chatglm3.Modelfile
transferring model data
Error: unsupported content type: unknown
modelfile file Settings are as foll... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4716/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5888 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5888/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5888/comments | https://api.github.com/repos/ollama/ollama/issues/5888/events | https://github.com/ollama/ollama/pull/5888 | 2,426,064,565 | PR_kwDOJ0Z1Ps52Qw1t | 5,888 | Update gpu.md: Add RTX 3050 Ti and RTX 3050 Ti | {
"login": "bean5",
"id": 2052646,
"node_id": "MDQ6VXNlcjIwNTI2NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2052646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bean5",
"html_url": "https://github.com/bean5",
"followers_url": "https://api.github.com/users/bean5/follower... | [] | closed | false | null | [] | null | 3 | 2024-07-23T20:24:17 | 2024-09-05T22:08:41 | 2024-09-05T18:24:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5888",
"html_url": "https://github.com/ollama/ollama/pull/5888",
"diff_url": "https://github.com/ollama/ollama/pull/5888.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5888.patch",
"merged_at": "2024-09-05T18:24:26"
} | Seems strange that the laptop versions of 3050 and 3050 Ti would be supported but not the non-notebook, but this is what the page (https://developer.nvidia.com/cuda-gpus) says. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5888/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4391/comments | https://api.github.com/repos/ollama/ollama/issues/4391/events | https://github.com/ollama/ollama/issues/4391 | 2,292,005,499 | I_kwDOJ0Z1Ps6InTp7 | 4,391 | pre-built binary doesn't work on Jetson with JP6 GA system | {
"login": "TadayukiOkada",
"id": 51673480,
"node_id": "MDQ6VXNlcjUxNjczNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/51673480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TadayukiOkada",
"html_url": "https://github.com/TadayukiOkada",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-05-13T07:07:05 | 2024-05-31T22:01:27 | 2024-05-31T22:01:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I get this error if I run the pre-built binary on Jetson Orin with JP6 GA system installed:
`source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) CUDA error: CUBLAS_STATUS_EXECUTION_FAILED\n current device: 0, in funct... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4343 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4343/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4343/comments | https://api.github.com/repos/ollama/ollama/issues/4343/events | https://github.com/ollama/ollama/issues/4343 | 2,290,663,340 | I_kwDOJ0Z1Ps6IiL-s | 4,343 | windows10: V0.1.35 - The OpenAI API interface fails! but V0.1.34 is ok! | {
"login": "808cn",
"id": 13846472,
"node_id": "MDQ6VXNlcjEzODQ2NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13846472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/808cn",
"html_url": "https://github.com/808cn",
"followers_url": "https://api.github.com/users/808cn/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-05-11T04:47:04 | 2024-06-02T00:25:20 | 2024-06-02T00:25:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
V0.1.35: windows10,openai api fail!
-----------------------------------------------------
Version 0.1.35: The API interface of openai cannot be used.
It is OK to return to 0.1.34.
Now using version 0.1.34, the openai interface can be used normally.
------------------------------------... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4343/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6201 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6201/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6201/comments | https://api.github.com/repos/ollama/ollama/issues/6201/events | https://github.com/ollama/ollama/pull/6201 | 2,450,939,781 | PR_kwDOJ0Z1Ps53kyTB | 6,201 | feat: add support for running ollama on rocm in wsl | {
"login": "evshiron",
"id": 8800643,
"node_id": "MDQ6VXNlcjg4MDA2NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8800643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evshiron",
"html_url": "https://github.com/evshiron",
"followers_url": "https://api.github.com/users/evshi... | [] | open | false | null | [] | null | 8 | 2024-08-06T13:47:10 | 2025-01-15T22:24:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6201",
"html_url": "https://github.com/ollama/ollama/pull/6201",
"diff_url": "https://github.com/ollama/ollama/pull/6201.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6201.patch",
"merged_at": null
} | Allow running Ollama on ROCm in WSL by calling HIP functions instead of querying sysfs.
The `amd_hip_linux.go` was duplicated from `amd_hip_windows.go`; `windows.LoadLibrary` and `syscall.SyscallN` were replaced with CGO and `dlfcn.h` to avoid depending on the HIP runtime directly.
Finally, I add an alternative r... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6201/reactions",
"total_count": 16,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6201/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6254 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6254/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6254/comments | https://api.github.com/repos/ollama/ollama/issues/6254/events | https://github.com/ollama/ollama/issues/6254 | 2,455,012,420 | I_kwDOJ0Z1Ps6SVIRE | 6,254 | Lumina-mGPT support | {
"login": "Amazon90",
"id": 72290820,
"node_id": "MDQ6VXNlcjcyMjkwODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/72290820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amazon90",
"html_url": "https://github.com/Amazon90",
"followers_url": "https://api.github.com/users/Ama... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-08-08T06:51:04 | 2024-08-08T19:36:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [Lumina-mGPT](https://github.com/Alpha-VLLM/Lumina-mGPT) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6254/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1448 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1448/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1448/comments | https://api.github.com/repos/ollama/ollama/issues/1448/events | https://github.com/ollama/ollama/issues/1448 | 2,034,042,119 | I_kwDOJ0Z1Ps55PQUH | 1,448 | Pytorch model quantization, using ollama/quantize docker is not working. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [] | closed | false | null | [] | null | 3 | 2023-12-09T20:31:24 | 2024-02-21T11:24:44 | 2024-02-20T01:21:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | (base) alexo@GrayMatters:/opt/data/data/Salesforce/codegen25-7b-mono$ docker run --rm -v .:/model -v .:/workdir ollama/quantize -q q6_K ./
sh: 0: cannot open entrypoint.sh: No such file | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1448/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1448/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/943 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/943/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/943/comments | https://api.github.com/repos/ollama/ollama/issues/943/events | https://github.com/ollama/ollama/pull/943 | 1,966,739,516 | PR_kwDOJ0Z1Ps5eCTge | 943 | doc: categorised community integrations + added ollama-webui | {
"login": "tjbck",
"id": 25473318,
"node_id": "MDQ6VXNlcjI1NDczMzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25473318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjbck",
"html_url": "https://github.com/tjbck",
"followers_url": "https://api.github.com/users/tjbck/follow... | [] | closed | false | null | [] | null | 0 | 2023-10-28T21:04:26 | 2023-11-06T19:35:39 | 2023-11-06T19:35:39 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/943",
"html_url": "https://github.com/ollama/ollama/pull/943",
"diff_url": "https://github.com/ollama/ollama/pull/943.diff",
"patch_url": "https://github.com/ollama/ollama/pull/943.patch",
"merged_at": "2023-11-06T19:35:39"
} | Just found out there was a community integrations section in the README.md file.
I categorised the integrations into separate groups for better legibility and also added the [ollama-webui](https://github.com/ollama-webui/ollama-webui) project to the GUI list.
Thanks! | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/943/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7066/comments | https://api.github.com/repos/ollama/ollama/issues/7066/events | https://github.com/ollama/ollama/pull/7066 | 2,559,856,753 | PR_kwDOJ0Z1Ps59Sd76 | 7,066 | llama: Add CI to verify all vendored changes have patches | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-10-01T17:54:45 | 2024-10-01T18:16:15 | 2024-10-01T18:16:10 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7066",
"html_url": "https://github.com/ollama/ollama/pull/7066",
"diff_url": "https://github.com/ollama/ollama/pull/7066.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7066.patch",
"merged_at": "2024-10-01T18:16:10"
} | With the new vendoring model we want to make sure we don't accidentally merge changes in the vendored code without having those changes covered by a patch that applies cleanly on the current baseline. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7066/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1900 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1900/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1900/comments | https://api.github.com/repos/ollama/ollama/issues/1900/events | https://github.com/ollama/ollama/issues/1900 | 2,074,578,480 | I_kwDOJ0Z1Ps57p44w | 1,900 | set parameter stop in repl removes other stop words | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-01-10T15:09:03 | 2024-05-10T00:57:54 | 2024-05-10T00:57:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | if i am in the repl and I type `/set parameter stop <|system>` all other stop words are removed. I just wanted to add one. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1900/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5764 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5764/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5764/comments | https://api.github.com/repos/ollama/ollama/issues/5764/events | https://github.com/ollama/ollama/issues/5764 | 2,415,872,249 | I_kwDOJ0Z1Ps6P_0j5 | 5,764 | Error: llama runner process has terminated: exit status 0xc0000409 error loading model: unable to allocate backend buffer | {
"login": "mohibovais79",
"id": 89134017,
"node_id": "MDQ6VXNlcjg5MTM0MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/89134017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohibovais79",
"html_url": "https://github.com/mohibovais79",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-07-18T09:40:40 | 2024-08-08T18:00:46 | 2024-08-08T18:00:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
when i try to run this command ollama run gemma2 this error shows up.
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.2.5 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5764/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7196 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7196/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7196/comments | https://api.github.com/repos/ollama/ollama/issues/7196/events | https://github.com/ollama/ollama/issues/7196 | 2,585,095,422 | I_kwDOJ0Z1Ps6aFWz- | 7,196 | Model Push Successful but Ignored by Ollama Registry - Cannot Pull Model After Push | {
"login": "jimin0",
"id": 86674074,
"node_id": "MDQ6VXNlcjg2Njc0MDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/86674074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimin0",
"html_url": "https://github.com/jimin0",
"followers_url": "https://api.github.com/users/jimin0/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw... | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2024-10-14T07:40:17 | 2024-12-03T20:01:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After successfully pushing a model to the Ollama registry using `ollama push`, the model seems to be ignored by the Ollama service. I cannot pull the model from the registry, and the service reports that "**No models have been pushed**" when accessing the registry URL.
This issue persists eve... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7196/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4838 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4838/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4838/comments | https://api.github.com/repos/ollama/ollama/issues/4838/events | https://github.com/ollama/ollama/issues/4838 | 2,336,354,937 | I_kwDOJ0Z1Ps6LQfJ5 | 4,838 | /api/ps shows Start of CE 'modified_at' | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.git... | null | 0 | 2024-06-05T16:30:19 | 2024-06-05T18:19:53 | 2024-06-05T18:19:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Should not return the field
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4838/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6089 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6089/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6089/comments | https://api.github.com/repos/ollama/ollama/issues/6089/events | https://github.com/ollama/ollama/issues/6089 | 2,439,027,783 | I_kwDOJ0Z1Ps6RYJxH | 6,089 | Match behavior of text-generation webui and koboldcpp by accepting requests to v1/completions that don't specify the model. | {
"login": "balisujohn",
"id": 20377292,
"node_id": "MDQ6VXNlcjIwMzc3Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/20377292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balisujohn",
"html_url": "https://github.com/balisujohn",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-07-31T03:22:59 | 2024-07-31T17:44:39 | 2024-07-31T17:38:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This works:
````
url = "http://localhost:11434/v1/completions"
headers = {
'Content-Type': 'application/json'
}
data = {
'model':"moondream",
'prompt': "What is the cat holding?",
'max_tokens': 20,
'temperature': 1,
'top_p': 0.9,
'seed': 10
}
# Convert data to JSON form... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6089/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1103 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1103/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1103/comments | https://api.github.com/repos/ollama/ollama/issues/1103/events | https://github.com/ollama/ollama/issues/1103 | 1,989,574,177 | I_kwDOJ0Z1Ps52ln4h | 1,103 | Custom model repeats context in the response | {
"login": "sethmbhele",
"id": 4163455,
"node_id": "MDQ6VXNlcjQxNjM0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4163455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sethmbhele",
"html_url": "https://github.com/sethmbhele",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 3 | 2023-11-12T21:00:35 | 2023-11-19T16:48:28 | 2023-11-19T16:48:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello Friends
Firstly, thank you so much for this amazing project. I have been playing around with it and having quite a blast learning the ins and outs of Ollama. I would appreciate it if anyone could assist with a challenge I am currently facing:
I created a Modelfile and passed temperature and system message; created and ran cu... | {
"login": "sethmbhele",
"id": 4163455,
"node_id": "MDQ6VXNlcjQxNjM0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4163455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sethmbhele",
"html_url": "https://github.com/sethmbhele",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1103/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5996 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5996/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5996/comments | https://api.github.com/repos/ollama/ollama/issues/5996/events | https://github.com/ollama/ollama/pull/5996 | 2,432,964,797 | PR_kwDOJ0Z1Ps52npDv | 5,996 | Add charla project to Terminal section | {
"login": "yaph",
"id": 60051,
"node_id": "MDQ6VXNlcjYwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/60051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaph",
"html_url": "https://github.com/yaph",
"followers_url": "https://api.github.com/users/yaph/followers",
"follo... | [] | closed | false | null | [] | null | 2 | 2024-07-26T20:53:55 | 2024-09-09T21:07:13 | 2024-09-09T21:06:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5996",
"html_url": "https://github.com/ollama/ollama/pull/5996",
"diff_url": "https://github.com/ollama/ollama/pull/5996.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5996.patch",
"merged_at": null
} | Charla is a simple terminal-based chat application that works with local language models. I'd appreciate it if you would consider it as an example project. | {
"login": "yaph",
"id": 60051,
"node_id": "MDQ6VXNlcjYwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/60051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaph",
"html_url": "https://github.com/yaph",
"followers_url": "https://api.github.com/users/yaph/followers",
"follo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5996/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1794 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1794/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1794/comments | https://api.github.com/repos/ollama/ollama/issues/1794/events | https://github.com/ollama/ollama/issues/1794 | 2,066,598,674 | I_kwDOJ0Z1Ps57LcsS | 1,794 | "This model requires you to add a jpeg, png, or svg image" error on native windows build | {
"login": "prabirshrestha",
"id": 287744,
"node_id": "MDQ6VXNlcjI4Nzc0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/287744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabirshrestha",
"html_url": "https://github.com/prabirshrestha",
"followers_url": "https://api.github... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-01-05T01:46:18 | 2024-01-07T17:05:47 | 2024-01-07T17:05:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have compiled Ollama as a native Windows binary and have been able to load and run models.
When running the llava model, I get an error.
```bat
ollama run llava
```
```
>>> describe this image c:\download.jpeg
describe this image D:\code\download.jpeg
This model requires you to add a jpeg, png, or
svg ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1794/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2211/comments | https://api.github.com/repos/ollama/ollama/issues/2211/events | https://github.com/ollama/ollama/issues/2211 | 2,102,653,728 | I_kwDOJ0Z1Ps59U_Mg | 2,211 | Mistral v0.2 hangs after repeatedly writing same token | {
"login": "arch-user-france1",
"id": 72965843,
"node_id": "MDQ6VXNlcjcyOTY1ODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/72965843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arch-user-france1",
"html_url": "https://github.com/arch-user-france1",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-01-26T18:02:03 | 2024-03-12T22:50:45 | 2024-03-12T22:50:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | <img width="1446" alt="grafik" src="https://github.com/ollama/ollama/assets/72965843/daff8519-4262-46f1-b52d-d11b246355b4">
```bash
➜ ~ ollama ls
NAME ID SIZE MODIFIED
mistral:v0.2 61e88e884507 4.1 GB 2 days ago
```
Crashed, the ollama runner using 300 MB of... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2211/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5290 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5290/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5290/comments | https://api.github.com/repos/ollama/ollama/issues/5290/events | https://github.com/ollama/ollama/issues/5290 | 2,374,312,047 | I_kwDOJ0Z1Ps6NhSBv | 5,290 | ollama-go bindings | {
"login": "k0marov",
"id": 95040709,
"node_id": "U_kgDOBao0xQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95040709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k0marov",
"html_url": "https://github.com/k0marov",
"followers_url": "https://api.github.com/users/k0marov/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-06-26T05:19:40 | 2024-07-08T23:19:18 | 2024-07-08T23:19:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I'm interested in having a native Go client library for the Ollama REST API, like the Python and JS ones.
I can start myself, but want to ask: is someone already working on it?
If it's not taken, I'll be glad to make this contribution.
Thanks for this awesome system! | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5290/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1132/comments | https://api.github.com/repos/ollama/ollama/issues/1132/events | https://github.com/ollama/ollama/pull/1132 | 1,993,715,970 | PR_kwDOJ0Z1Ps5fdnos | 1,132 | replace go-humanize with format.HumanBytes | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-14T22:58:03 | 2023-11-15T17:46:23 | 2023-11-15T17:46:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1132",
"html_url": "https://github.com/ollama/ollama/pull/1132",
"diff_url": "https://github.com/ollama/ollama/pull/1132.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1132.patch",
"merged_at": "2023-11-15T17:46:22"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1132/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2833 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2833/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2833/comments | https://api.github.com/repos/ollama/ollama/issues/2833/events | https://github.com/ollama/ollama/issues/2833 | 2,161,303,468 | I_kwDOJ0Z1Ps6A0t-s | 2,833 | Running ollama on Hugging Face Spaces | {
"login": "jbdatascience",
"id": 33154192,
"node_id": "MDQ6VXNlcjMzMTU0MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/33154192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbdatascience",
"html_url": "https://github.com/jbdatascience",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 6 | 2024-02-29T13:44:33 | 2024-06-24T16:05:20 | 2024-05-17T22:59:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I want to run Ollama on Hugging Face Spaces, because I run a Streamlit app there that must make use of an LLM and an embedding model served by Ollama. How can I do that? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2833/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/3274 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3274/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3274/comments | https://api.github.com/repos/ollama/ollama/issues/3274/events | https://github.com/ollama/ollama/pull/3274 | 2,198,201,168 | PR_kwDOJ0Z1Ps5qQgQh | 3,274 | Community Integration: tlm - cli copilot with ollama | {
"login": "yusufcanb",
"id": 9295668,
"node_id": "MDQ6VXNlcjkyOTU2Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusufcanb",
"html_url": "https://github.com/yusufcanb",
"followers_url": "https://api.github.com/users/yu... | [] | closed | false | null | [] | null | 0 | 2024-03-20T17:58:25 | 2024-03-25T18:53:27 | 2024-03-25T18:53:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3274",
"html_url": "https://github.com/ollama/ollama/pull/3274",
"diff_url": "https://github.com/ollama/ollama/pull/3274.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3274.patch",
"merged_at": "2024-03-25T18:53:26"
} | At KubeCon 2024 in Paris, the Ollama staff advised me to create a PR to include [tlm](https://github.com/yusufcanb/tlm) in [README.md](https://github.com/ollama/ollama/blob/main/README.md). Thanks to everyone who expressed their excitement for what I've created. ❤
So, here is the PR to includ... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3274/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2097 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2097/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2097/comments | https://api.github.com/repos/ollama/ollama/issues/2097/events | https://github.com/ollama/ollama/issues/2097 | 2,090,902,481 | I_kwDOJ0Z1Ps58oKPR | 2,097 | Overwriting an existing model from a modelfile leaves old blob not deleted | {
"login": "hyjwei",
"id": 76876891,
"node_id": "MDQ6VXNlcjc2ODc2ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyjwei",
"html_url": "https://github.com/hyjwei",
"followers_url": "https://api.github.com/users/hyjwei/fo... | [] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2024-01-19T16:40:03 | 2024-01-22T17:37:50 | 2024-01-22T17:37:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Problem ###
When I import a GGUF model into Ollama, I create a Modelfile with a "FROM" line and then run `ollama create`, and a blob is created in the model directory.
Then, when I decide to import another GGUF model (with different quant parameters), I modify the "FROM" line and run `ollama create` again. A new blob is cre... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2097/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2097/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6047 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6047/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6047/comments | https://api.github.com/repos/ollama/ollama/issues/6047/events | https://github.com/ollama/ollama/issues/6047 | 2,435,325,229 | I_kwDOJ0Z1Ps6RKB0t | 6,047 | Ollama | {
"login": "wAyNecheRui",
"id": 176916787,
"node_id": "U_kgDOCouJMw",
"avatar_url": "https://avatars.githubusercontent.com/u/176916787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wAyNecheRui",
"html_url": "https://github.com/wAyNecheRui",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-29T12:46:45 | 2024-07-30T16:31:45 | 2024-07-30T16:31:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It is very slow while loading in the command prompt.
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
llama2 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6047/reactions",
"total_count": 1,
"+1": 0,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6047/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3912 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3912/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3912/comments | https://api.github.com/repos/ollama/ollama/issues/3912/events | https://github.com/ollama/ollama/issues/3912 | 2,263,842,810 | I_kwDOJ0Z1Ps6G73_6 | 3,912 | Server hang after ~400 long context requests mixtral or llama3 ollama 0.1.32 | {
"login": "kungfu-eric",
"id": 87145506,
"node_id": "MDQ6VXNlcjg3MTQ1NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/87145506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kungfu-eric",
"html_url": "https://github.com/kungfu-eric",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-25T15:04:05 | 2024-05-09T22:32:53 | 2024-05-09T22:32:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hangs after about 400 long-context requests on mixtral, and the same with llama3.
```
ollama --version
ollama version is 0.1.32
```
This is on AMD CPU, 2x NVIDIA A6000s, Ubuntu 18.04 in a docker container. Client is using the python ollama package. Workaround by restarting server manually and u... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3912/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3912/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1274 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1274/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1274/comments | https://api.github.com/repos/ollama/ollama/issues/1274/events | https://github.com/ollama/ollama/issues/1274 | 2,010,492,898 | I_kwDOJ0Z1Ps531a_i | 1,274 | "no such file or directory" when creating model during the "creating adapter layer" step | {
"login": "meow-d",
"id": 51119160,
"node_id": "MDQ6VXNlcjUxMTE5MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/51119160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meow-d",
"html_url": "https://github.com/meow-d",
"followers_url": "https://api.github.com/users/meow-d/fo... | [] | closed | false | null | [] | null | 9 | 2023-11-25T06:09:47 | 2024-01-18T23:50:30 | 2024-01-18T23:50:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | when i run `ollama create storywriter`, i get:
```
transferring model data
reading model metadata
creating template layer
creating system layer
creating adapter layer
Error: open /@sha256:439bdfbd08b0143c5f5f97154d76676a5348a5a00a2fac38fdc8d1c4498d67d3: no such file or directory
```
btw I'm running on Fedora... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1274/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3884 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3884/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3884/comments | https://api.github.com/repos/ollama/ollama/issues/3884/events | https://github.com/ollama/ollama/pull/3884 | 2,261,741,869 | PR_kwDOJ0Z1Ps5toHaF | 3,884 | docs: add Hollama to Web & Desktop integrations | {
"login": "fmaclen",
"id": 1434675,
"node_id": "MDQ6VXNlcjE0MzQ2NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1434675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmaclen",
"html_url": "https://github.com/fmaclen",
"followers_url": "https://api.github.com/users/fmaclen/... | [] | closed | false | null | [] | null | 0 | 2024-04-24T16:46:34 | 2024-05-07T20:17:36 | 2024-05-07T20:17:36 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3884",
"html_url": "https://github.com/ollama/ollama/pull/3884",
"diff_url": "https://github.com/ollama/ollama/pull/3884.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3884.patch",
"merged_at": "2024-05-07T20:17:36"
} | **Hollama** is a minimal web-UI for talking to Ollama servers.
https://hollama.fernando.is
**Repository:**
https://github.com/fmaclen/hollama
**Current features:**
- Large prompt fields
- Streams completions
- Copy completions as raw text
- Markdown parsing w/syntax highlighting
- Saves sessions/context in... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3884/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5451 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5451/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5451/comments | https://api.github.com/repos/ollama/ollama/issues/5451/events | https://github.com/ollama/ollama/issues/5451 | 2,387,413,448 | I_kwDOJ0Z1Ps6OTQnI | 5,451 | Speech-To-Text Transcription | {
"login": "HerroHK",
"id": 170845944,
"node_id": "U_kgDOCi7m-A",
"avatar_url": "https://avatars.githubusercontent.com/u/170845944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HerroHK",
"html_url": "https://github.com/HerroHK",
"followers_url": "https://api.github.com/users/HerroHK/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-07-03T00:56:45 | 2024-07-03T16:33:20 | 2024-07-03T16:33:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Issue: our company has audio recordings that are confidential in nature. We have setup a linux server (Ubuntu) running Ollama with both Open-WebUI and AnythingLLM as interface. However, it seems both are not able to transcribe long (up to 8 hours) audio recordings, and we only get back snippets. It is also unclear wher... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5451/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1391/comments | https://api.github.com/repos/ollama/ollama/issues/1391/events | https://github.com/ollama/ollama/issues/1391 | 2,026,828,139 | I_kwDOJ0Z1Ps54zvFr | 1,391 | Totally stumped :-( | {
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/... | [] | closed | false | null | [] | null | 9 | 2023-12-05T18:03:32 | 2023-12-07T08:02:55 | 2023-12-06T16:45:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have this in the config (and yes, it is below and above the respective sections, as i learnt the hard way, LOL)
```
Environment="OLLAMA_HOST=mysubdomain.domain.com:11434"
Environment="OLLAMA_ORIGINS='my.ip.in.v4'"
```
Actual values were used above, server was also rebooted (as restarting the service had no e... | {
"login": "itscvenk",
"id": 117738376,
"node_id": "U_kgDOBwSLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/117738376?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itscvenk",
"html_url": "https://github.com/itscvenk",
"followers_url": "https://api.github.com/users/itscvenk/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2276 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2276/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2276/comments | https://api.github.com/repos/ollama/ollama/issues/2276/events | https://github.com/ollama/ollama/issues/2276 | 2,108,156,325 | I_kwDOJ0Z1Ps59p-ml | 2,276 | Unhandled Runtime Error | {
"login": "hamperia4",
"id": 98347762,
"node_id": "U_kgDOBdyq8g",
"avatar_url": "https://avatars.githubusercontent.com/u/98347762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamperia4",
"html_url": "https://github.com/hamperia4",
"followers_url": "https://api.github.com/users/hamperia... | [] | closed | false | null | [] | null | 1 | 2024-01-30T15:33:36 | 2024-02-20T04:08:17 | 2024-02-20T04:08:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Although SUPABASE_URL and SUPABASE_ANON_KEY are correct after running nvm getting below error locally:
<img width="963" alt="Screenshot 2024-01-30 at 5 33 15 PM" src="https://github.com/ollama/ollama/assets/98347762/b3e7370c-934e-4db2-ada7-1062d129a201">
Any ideas? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2276/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4627 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4627/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4627/comments | https://api.github.com/repos/ollama/ollama/issues/4627/events | https://github.com/ollama/ollama/pull/4627 | 2,316,628,597 | PR_kwDOJ0Z1Ps5wiCOz | 4,627 | Add OLLAMA_MAX_DOWNLOAD_PARTS env to support config parallel download parts | {
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 3 | 2024-05-25T03:03:55 | 2024-12-29T19:28:50 | 2024-12-29T19:28:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4627",
"html_url": "https://github.com/ollama/ollama/pull/4627",
"diff_url": "https://github.com/ollama/ollama/pull/4627.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4627.patch",
"merged_at": null
} | Add a environment `OLLAMA_MAX_DOWNLOAD_PARTS` to support config maximum download parts in parallel.
This PR closes #4595 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4627/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4627/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1561 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1561/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1561/comments | https://api.github.com/repos/ollama/ollama/issues/1561/events | https://github.com/ollama/ollama/issues/1561 | 2,044,668,218 | I_kwDOJ0Z1Ps553yk6 | 1,561 | GPU not being used and 'out of memory' - 'no CUDA-capable device is detected' errors while running on Docker Compose | {
"login": "seth100",
"id": 4366877,
"node_id": "MDQ6VXNlcjQzNjY4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4366877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seth100",
"html_url": "https://github.com/seth100",
"followers_url": "https://api.github.com/users/seth100/... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2023-12-16T08:47:02 | 2024-02-01T23:18:24 | 2024-02-01T23:18:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm using the following docker compose file:
```yml
ollama:
image: ollama/ollama:latest
container_name: ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
volumes:
- ./ollama... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1561/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1561/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4848 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4848/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4848/comments | https://api.github.com/repos/ollama/ollama/issues/4848/events | https://github.com/ollama/ollama/pull/4848 | 2,337,573,704 | PR_kwDOJ0Z1Ps5xpQKl | 4,848 | Add qollama to list of Web & Desktop integrations | {
"login": "farleyrunkel",
"id": 162782461,
"node_id": "U_kgDOCbPc_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/162782461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farleyrunkel",
"html_url": "https://github.com/farleyrunkel",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-06-06T07:45:57 | 2024-11-28T10:30:42 | 2024-11-21T09:39:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4848",
"html_url": "https://github.com/ollama/ollama/pull/4848",
"diff_url": "https://github.com/ollama/ollama/pull/4848.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4848.patch",
"merged_at": null
} | QOllama is a Qt-based client for [ollama](https://github.com/ollama/ollama), providing a user-friendly interface for interacting with the model and managing chat history. It supports cross-platform functionality, ensuring a seamless experience on Windows, macOS, and Linux.
Goto QOllama: [https://github.com/farleyrun... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4848/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/948 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/948/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/948/comments | https://api.github.com/repos/ollama/ollama/issues/948/events | https://github.com/ollama/ollama/pull/948 | 1,968,699,514 | PR_kwDOJ0Z1Ps5eI21V | 948 | Fix conversion command for gptneox | {
"login": "dloss",
"id": 744603,
"node_id": "MDQ6VXNlcjc0NDYwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/744603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dloss",
"html_url": "https://github.com/dloss",
"followers_url": "https://api.github.com/users/dloss/followers"... | [] | closed | false | null | [] | null | 0 | 2023-10-30T15:52:56 | 2023-10-30T18:34:29 | 2023-10-30T18:34:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/948",
"html_url": "https://github.com/ollama/ollama/pull/948",
"diff_url": "https://github.com/ollama/ollama/pull/948.diff",
"patch_url": "https://github.com/ollama/ollama/pull/948.patch",
"merged_at": "2023-10-30T18:34:29"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/948/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3326 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3326/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3326/comments | https://api.github.com/repos/ollama/ollama/issues/3326/events | https://github.com/ollama/ollama/issues/3326 | 2,204,445,373 | I_kwDOJ0Z1Ps6DZSq9 | 3,326 | Sha256 code mismatch pulling a model | {
"login": "ipsmile",
"id": 28075439,
"node_id": "MDQ6VXNlcjI4MDc1NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/28075439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ipsmile",
"html_url": "https://github.com/ipsmile",
"followers_url": "https://api.github.com/users/ipsmil... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-03-24T16:39:05 | 2024-03-27T22:43:05 | 2024-03-27T22:43:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Received the following messages while executing "ollama pull wizard-vicuna"
Error: digest mismatch, file must be downloaded again: want sha256:1ede1e83f21c3c72f7b1ce304920a3d8f6eaf8304cfda8fd82864287033175dc, got sha256:5130a22afc1df70a9babbe0d8843a6a65fd6647cc8d4836a476896fc61f0e3aa
### Wha... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3326/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/7447 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7447/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7447/comments | https://api.github.com/repos/ollama/ollama/issues/7447/events | https://github.com/ollama/ollama/issues/7447 | 2,626,601,072 | I_kwDOJ0Z1Ps6cjsBw | 7,447 | Feature Request: count tokens before calling '/v1/chat/completions' | {
"login": "GPTLocalhost",
"id": 72584872,
"node_id": "MDQ6VXNlcjcyNTg0ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/72584872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GPTLocalhost",
"html_url": "https://github.com/GPTLocalhost",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-10-31T11:17:15 | 2024-12-02T14:49:51 | 2024-12-02T14:49:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Recently, we integrated Microsoft Word with Ollama through a local Word Add-in. You can view a demo [here](https://gptlocalhost.com/demo/). We're planning to add a feature to count tokens before calling '/v1/chat/completions,' allowing users to see the remaining tokens available for inference. Our question is: Is it po... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7447/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/5849 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5849/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5849/comments | https://api.github.com/repos/ollama/ollama/issues/5849/events | https://github.com/ollama/ollama/issues/5849 | 2,422,642,175 | I_kwDOJ0Z1Ps6QZpX_ | 5,849 | How to force the use of two GPUs to run a model? | {
"login": "mizzlefeng",
"id": 54129071,
"node_id": "MDQ6VXNlcjU0MTI5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/54129071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mizzlefeng",
"html_url": "https://github.com/mizzlefeng",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-07-22T11:32:17 | 2024-07-22T22:22:56 | 2024-07-22T22:22:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have reviewed many issues, including [#4198](https://github.com/ollama/ollama/issues/4198), [#4517](https://github.com/ollama/ollama/pull/4517) and so on.
I found that the explanation given is that if the graphics memory of a single GPU is sufficient to run the current model, then it will not use more GPUs. But what... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5849/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4331 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4331/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4331/comments | https://api.github.com/repos/ollama/ollama/issues/4331/events | https://github.com/ollama/ollama/pull/4331 | 2,290,532,999 | PR_kwDOJ0Z1Ps5vJDWe | 4,331 | Fix envconfig unit test | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-05-10T23:50:11 | 2024-05-11T16:16:28 | 2024-05-11T16:16:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4331",
"html_url": "https://github.com/ollama/ollama/pull/4331",
"diff_url": "https://github.com/ollama/ollama/pull/4331.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4331.patch",
"merged_at": "2024-05-11T16:16:28"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4331/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2898 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2898/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2898/comments | https://api.github.com/repos/ollama/ollama/issues/2898/events | https://github.com/ollama/ollama/issues/2898 | 2,165,486,408 | I_kwDOJ0Z1Ps6BErNI | 2,898 | v0.1.28 RC: CUDA error: out of memory | {
"login": "ovaisq",
"id": 9484502,
"node_id": "MDQ6VXNlcjk0ODQ1MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9484502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ovaisq",
"html_url": "https://github.com/ovaisq",
"followers_url": "https://api.github.com/users/ovaisq/foll... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-03-03T18:36:35 | 2024-03-12T01:33:38 | 2024-03-12T01:33:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama v0.1.28 RC
Ryzen 7 1700 - 48GB RAM - 500GB SSD
GeForce GTX 1070ti 8GB VRAM - Driver v551.61
Windows 11 Pro
My Python code (running on a Debian 12 instance - making remote calls over local network) is looping through deepseek-llm, llama2, gemma LLMs doing this:
client = AsyncClient(host='OLLA... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2898/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7424 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7424/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7424/comments | https://api.github.com/repos/ollama/ollama/issues/7424/events | https://github.com/ollama/ollama/pull/7424 | 2,624,694,826 | PR_kwDOJ0Z1Ps6AamBJ | 7,424 | boost embed endpoint | {
"login": "liuy",
"id": 1192888,
"node_id": "MDQ6VXNlcjExOTI4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1192888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuy",
"html_url": "https://github.com/liuy",
"followers_url": "https://api.github.com/users/liuy/followers",
... | [] | open | false | null | [] | null | 5 | 2024-10-30T16:43:31 | 2025-01-02T18:49:23 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7424",
"html_url": "https://github.com/ollama/ollama/pull/7424",
"diff_url": "https://github.com/ollama/ollama/pull/7424.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7424.patch",
"merged_at": null
} | just get token numbers in the runner instead of route.
Even on following simplest request, I got nearly 20x boost.
curl http://localhost:11434/api/embed -d '{
"model": "all-minilm",
"input": ["Why is the sky blue?", "Why is the grass green?"]
}'
new approach: "total_duration":14239148
old approach: "to... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7424/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7424/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6211/comments | https://api.github.com/repos/ollama/ollama/issues/6211/events | https://github.com/ollama/ollama/issues/6211 | 2,451,805,566 | I_kwDOJ0Z1Ps6SI5V- | 6,211 | Error: max retries exceeded | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-08-06T22:42:52 | 2025-01-30T04:39:10 | 2024-08-11T23:09:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am in a place with a slow ADSL connection that works for loading pages and checking emails. However, I can't pull LLM models because it's regularly interrupted by an 'Error: max retries exceeded after about 2 or 3 minutes. If I use my phone with Share Connection, it works well in 5G. I don't... | {
"login": "igorschlum",
"id": 2884312,
"node_id": "MDQ6VXNlcjI4ODQzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igorschlum",
"html_url": "https://github.com/igorschlum",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6211/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5253 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5253/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5253/comments | https://api.github.com/repos/ollama/ollama/issues/5253/events | https://github.com/ollama/ollama/issues/5253 | 2,369,830,728 | I_kwDOJ0Z1Ps6NQL9I | 5,253 | Add queue position indicator | {
"login": "uzumakinaruto19",
"id": 99479748,
"node_id": "U_kgDOBe3wxA",
"avatar_url": "https://avatars.githubusercontent.com/u/99479748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uzumakinaruto19",
"html_url": "https://github.com/uzumakinaruto19",
"followers_url": "https://api.github.... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-06-24T10:15:29 | 2024-11-06T01:17:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, when running resource-intensive models on Ollama, especially on less powerful hardware, it's not clear how long processing might take or if there's a queue of tasks.
Feature request:
1. Implement a way to show the user's position in the processing queue (if any). This is my main concern.
2. Add an option... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5253/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5253/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6665 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6665/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6665/comments | https://api.github.com/repos/ollama/ollama/issues/6665/events | https://github.com/ollama/ollama/pull/6665 | 2,509,085,392 | PR_kwDOJ0Z1Ps56mGN8 | 6,665 | Fix "presence_penalty_penalty" typo, add test. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2024-09-06T00:04:35 | 2024-09-06T17:07:31 | 2024-09-06T08:16:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6665",
"html_url": "https://github.com/ollama/ollama/pull/6665",
"diff_url": "https://github.com/ollama/ollama/pull/6665.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6665.patch",
"merged_at": "2024-09-06T08:16:28"
} | Fixes: https://github.com/ollama/ollama/issues/6640 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6665/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2273 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2273/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2273/comments | https://api.github.com/repos/ollama/ollama/issues/2273/events | https://github.com/ollama/ollama/issues/2273 | 2,107,498,277 | I_kwDOJ0Z1Ps59nd8l | 2,273 | Line breaks are stripped when pasting to the prompt when running under WezTerm | {
"login": "eproxus",
"id": 112878,
"node_id": "MDQ6VXNlcjExMjg3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/112878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eproxus",
"html_url": "https://github.com/eproxus",
"followers_url": "https://api.github.com/users/eproxus/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 1 | 2024-01-30T10:35:41 | 2024-03-11T22:29:22 | 2024-03-11T22:29:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | (Not sure if this is an Ollama / WezTerm issue, but opening it here first)
When pasting multi-line text into the prompt while running Ollama under the WezTerm terminal on macOS, line breaks (newlines) are stripped. This does not happen with Terminal.app. It also doesn't happen in e.g. Vim, so it is something specific ... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2273/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5274 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5274/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5274/comments | https://api.github.com/repos/ollama/ollama/issues/5274/events | https://github.com/ollama/ollama/issues/5274 | 2,372,982,772 | I_kwDOJ0Z1Ps6NcNf0 | 5,274 | API works with non-functional params, no error messages | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | null | [] | null | 0 | 2024-06-25T15:26:02 | 2024-11-06T01:16:03 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The API should only accept the parameters `"model"`, `"messages"`, and `"options"`, but no error messages are displayed if non-functional params are also passed, like `"seed"` or `"temperature"` in this case:
```python
def query_model(prompt, model="llama3", url="http://localhost:11434/... | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5274/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/5135 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5135/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5135/comments | https://api.github.com/repos/ollama/ollama/issues/5135/events | https://github.com/ollama/ollama/issues/5135 | 2,361,411,481 | I_kwDOJ0Z1Ps6MwEeZ | 5,135 | HOW CAN I CHANGE THE PORT OLLAMA SERVE USES | {
"login": "Udacv",
"id": 126667614,
"node_id": "U_kgDOB4zLXg",
"avatar_url": "https://avatars.githubusercontent.com/u/126667614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Udacv",
"html_url": "https://github.com/Udacv",
"followers_url": "https://api.github.com/users/Udacv/followers",
... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-19T06:10:16 | 2024-06-19T14:57:32 | 2024-06-19T14:57:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My port 11434 is occupied. I wonder how I can change it?
I've tried `OLLAMA_HOST=127.0.0.1:11435 ollama serve`, but my cmd cannot understand it.
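For reference, the inline `VAR=value command` prefix is a POSIX-shell idiom that Windows cmd does not support; cmd sets environment variables with `set` before running the command. A minimal sketch, assuming a standard Windows install:

```shell
:: Windows cmd: set the variable first, then start the server
set OLLAMA_HOST=127.0.0.1:11435
ollama serve
```

In PowerShell the equivalent would be `$env:OLLAMA_HOST = "127.0.0.1:11435"` followed by `ollama serve`.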
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5135/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5725 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5725/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5725/comments | https://api.github.com/repos/ollama/ollama/issues/5725/events | https://github.com/ollama/ollama/issues/5725 | 2,411,484,240 | I_kwDOJ0Z1Ps6PvFRQ | 5,725 | Mistral Codestral Mamba 7B | {
"login": "lestan",
"id": 1471736,
"node_id": "MDQ6VXNlcjE0NzE3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1471736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lestan",
"html_url": "https://github.com/lestan",
"followers_url": "https://api.github.com/users/lestan/foll... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 16 | 2024-07-16T15:32:47 | 2024-11-07T16:34:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://mistral.ai/news/codestral-mamba/
The latest model from Mistral utilizes the Mamba architecture (vs. Transformers) and targets code generation with strong performance on the leaderboards. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5725/reactions",
"total_count": 73,
"+1": 68,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5725/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/818 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/818/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/818/comments | https://api.github.com/repos/ollama/ollama/issues/818/events | https://github.com/ollama/ollama/pull/818 | 1,947,368,046 | PR_kwDOJ0Z1Ps5dA7jP | 818 | Fix a typo | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-17T12:58:34 | 2023-10-17T13:00:16 | 2023-10-17T13:00:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/818",
"html_url": "https://github.com/ollama/ollama/pull/818",
"diff_url": "https://github.com/ollama/ollama/pull/818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/818.patch",
"merged_at": "2023-10-17T13:00:16"
} | The key in the JSON response is `embedding`, not `embeddings`:
```sh
curl -X POST http://localhost:11434/api/embeddings -d '{
"model": "codeup:latest",
"prompt": "Here is an article about llamas..."
}'
```
```json
{"embedding":[-1.3911274671554565,0.045920971781015396,1.0808414220809937,0.0582450591027736... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/818/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7456 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7456/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7456/comments | https://api.github.com/repos/ollama/ollama/issues/7456/events | https://github.com/ollama/ollama/pull/7456 | 2,627,815,269 | PR_kwDOJ0Z1Ps6AklaG | 7,456 | update llama3.2 vision memory estimation | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-10-31T20:58:59 | 2024-11-04T17:48:45 | 2024-11-04T17:48:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7456",
"html_url": "https://github.com/ollama/ollama/pull/7456",
"diff_url": "https://github.com/ollama/ollama/pull/7456.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7456.patch",
"merged_at": "2024-11-04T17:48:43"
} | adjust estimations for mllama which has conditional graph components and a different cache shape | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7456/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5087 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5087/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5087/comments | https://api.github.com/repos/ollama/ollama/issues/5087/events | https://github.com/ollama/ollama/issues/5087 | 2,355,977,854 | I_kwDOJ0Z1Ps6MbV5- | 5,087 | Qwen2 "GGGG" issue is back in version 0.1.44 | {
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-06-16T21:03:11 | 2024-08-06T12:46:33 | 2024-07-07T14:26:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Qwen2 70B outputs a series of Gs or else random garbage. However, Qwen2 7B, which fits on a single card, works fine. It seems that when Ollama needs to spread a model across 2 GPU cards, it doesn't work.
For example:
```
ollama@TH-AI2:~$ ollama run qwen2:72b
>>> Tell me a sotry about a bird and... | {
"login": "Speedway1",
"id": 100301611,
"node_id": "U_kgDOBfp7Kw",
"avatar_url": "https://avatars.githubusercontent.com/u/100301611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Speedway1",
"html_url": "https://github.com/Speedway1",
"followers_url": "https://api.github.com/users/Speedw... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5087/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7560 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7560/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7560/comments | https://api.github.com/repos/ollama/ollama/issues/7560/events | https://github.com/ollama/ollama/pull/7560 | 2,641,647,928 | PR_kwDOJ0Z1Ps6BN4Ns | 7,560 | Be explicit for gpu library link dir | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-11-07T17:01:22 | 2024-11-08T23:35:14 | 2024-11-07T17:20:40 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7560",
"html_url": "https://github.com/ollama/ollama/pull/7560",
"diff_url": "https://github.com/ollama/ollama/pull/7560.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7560.patch",
"merged_at": "2024-11-07T17:20:40"
On Linux, nvcc isn't automatically linking against the same CUDA version.
Fixes #7546 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7560/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5311/comments | https://api.github.com/repos/ollama/ollama/issues/5311/events | https://github.com/ollama/ollama/pull/5311 | 2,376,313,867 | PR_kwDOJ0Z1Ps5zr_sI | 5,311 | Update OpenAI Compatibility Docs with /v1/completions | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-06-26T21:31:03 | 2024-08-02T20:16:25 | 2024-08-02T20:16:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5311",
"html_url": "https://github.com/ollama/ollama/pull/5311",
"diff_url": "https://github.com/ollama/ollama/pull/5311.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5311.patch",
"merged_at": "2024-08-02T20:16:23"
} | Referencing #5209 | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5311/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4444 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4444/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4444/comments | https://api.github.com/repos/ollama/ollama/issues/4444/events | https://github.com/ollama/ollama/issues/4444 | 2,296,788,944 | I_kwDOJ0Z1Ps6I5jfQ | 4,444 | Add tab completions for fish shell | {
"login": "coder543",
"id": 726063,
"node_id": "MDQ6VXNlcjcyNjA2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/726063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coder543",
"html_url": "https://github.com/coder543",
"followers_url": "https://api.github.com/users/coder54... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-05-15T03:31:52 | 2024-05-15T03:31:52 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is a little something I worked up (with some help :robot:) to make my life easier as a `fish` user:
`~/.config/fish/completions/ollama.fish`
```fish
function __ollama_list
set -l query (string join ' ' $argv)
ollama list $query | awk 'NR > 1 { gsub(/:latest$/, "", $1); print $1 }'
end
# Comple... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4444/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4444/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8653/comments | https://api.github.com/repos/ollama/ollama/issues/8653/events | https://github.com/ollama/ollama/issues/8653 | 2,817,878,044 | I_kwDOJ0Z1Ps6n9Wgc | 8,653 | Latest pre-built Ollama binaries (cuda 12.x) do not come with "oob" support for 5.x architecture | {
"login": "RKouchoo",
"id": 19159026,
"node_id": "MDQ6VXNlcjE5MTU5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/19159026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RKouchoo",
"html_url": "https://github.com/RKouchoo",
"followers_url": "https://api.github.com/users/RKo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 1 | 2025-01-29T11:00:37 | 2025-01-29T23:55:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### "oob" support for 5.x architecture is missing on prebuilt binaries
Hello,
I ended up needing some more power, so I threw a spare Quadro M5000 into my AI rig, only to find it was not being utilised at all. I did the usual checks and the card has compute capability 5.2 (confirmed compatible in the support matrix [here... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8653/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6668/comments | https://api.github.com/repos/ollama/ollama/issues/6668/events | https://github.com/ollama/ollama/issues/6668 | 2,509,520,109 | I_kwDOJ0Z1Ps6VlDzt | 6,668 | Every installed model disappeared | {
"login": "yilmaz08",
"id": 84680978,
"node_id": "MDQ6VXNlcjg0NjgwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilmaz08",
"html_url": "https://github.com/yilmaz08",
"followers_url": "https://api.github.com/users/yil... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-09-06T05:00:45 | 2024-09-10T20:15:37 | 2024-09-07T07:10:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After turning on my PC today, I realized that I was not able to use any Ollama models. The Ollama daemon is running, but `ollama ls` doesn't show anything. I tried reinstalling llama3.1:8b and it works.
Somehow every installed model disappeared and I need to reinstall all of them. (It is not ... | {
"login": "yilmaz08",
"id": 84680978,
"node_id": "MDQ6VXNlcjg0NjgwOTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/84680978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilmaz08",
"html_url": "https://github.com/yilmaz08",
"followers_url": "https://api.github.com/users/yil... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8018 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8018/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8018/comments | https://api.github.com/repos/ollama/ollama/issues/8018/events | https://github.com/ollama/ollama/pull/8018 | 2,728,401,283 | PR_kwDOJ0Z1Ps6EnMnB | 8,018 | api: change /delete endpoint to use POST method | {
"login": "nguu0123",
"id": 80659317,
"node_id": "MDQ6VXNlcjgwNjU5MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/80659317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nguu0123",
"html_url": "https://github.com/nguu0123",
"followers_url": "https://api.github.com/users/ngu... | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 4 | 2024-12-09T22:16:54 | 2024-12-12T19:33:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8018",
"html_url": "https://github.com/ollama/ollama/pull/8018",
"diff_url": "https://github.com/ollama/ollama/pull/8018.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8018.patch",
"merged_at": null
} | PR for #7985 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8018/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2287 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2287/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2287/comments | https://api.github.com/repos/ollama/ollama/issues/2287/events | https://github.com/ollama/ollama/issues/2287 | 2,109,983,610 | I_kwDOJ0Z1Ps59w8t6 | 2,287 | List of embedding models supported by Ollama | {
"login": "bm777",
"id": 29865600,
"node_id": "MDQ6VXNlcjI5ODY1NjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/29865600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bm777",
"html_url": "https://github.com/bm777",
"followers_url": "https://api.github.com/users/bm777/follow... | [] | closed | false | null | [] | null | 5 | 2024-01-31T12:24:55 | 2024-02-20T04:06:51 | 2024-02-20T04:06:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | What can we do to get the list of models that Ollama supports for embeddings?
For example, if I want to serve a BERT model from the SBERT hg repo, how can I do it? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2287/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2287/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4306 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4306/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4306/comments | https://api.github.com/repos/ollama/ollama/issues/4306/events | https://github.com/ollama/ollama/pull/4306 | 2,288,688,636 | PR_kwDOJ0Z1Ps5vCyvV | 4,306 | fix(routes): skip bad manifests | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-10T00:45:27 | 2024-05-10T15:58:16 | 2024-05-10T15:58:16 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4306",
"html_url": "https://github.com/ollama/ollama/pull/4306",
"diff_url": "https://github.com/ollama/ollama/pull/4306.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4306.patch",
"merged_at": "2024-05-10T15:58:16"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4306/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1367 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1367/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1367/comments | https://api.github.com/repos/ollama/ollama/issues/1367/events | https://github.com/ollama/ollama/issues/1367 | 2,022,638,274 | I_kwDOJ0Z1Ps54jwLC | 1,367 | Starling-lm default prompt template is incorrect | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2023-12-03T17:40:02 | 2024-03-12T21:29:41 | 2024-03-12T21:29:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried the experiment with
`repeat this word forever "poem poem poem poem"`
which has been known to cause ChatGPT to spit out its training data.
On Alfred it said "poem poem poem poem <end_reponse" (no ending angle bracket
on DeepSeek-Coder it said
```python
while True:
print("poem poem poem poem")
`... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1367/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6088 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6088/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6088/comments | https://api.github.com/repos/ollama/ollama/issues/6088/events | https://github.com/ollama/ollama/issues/6088 | 2,439,026,838 | I_kwDOJ0Z1Ps6RYJiW | 6,088 | Running the sqlcoder-34b-alpha model downloaded from Huggingface in Ollama fails with: error loading model: vocab size mismatch | {
"login": "Crazyisme",
"id": 15233702,
"node_id": "MDQ6VXNlcjE1MjMzNzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15233702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crazyisme",
"html_url": "https://github.com/Crazyisme",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-07-31T03:21:50 | 2024-07-31T03:21:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Command executed: `ollama run sqlcoder-34b-Q4_K_M`
Error message:
llama_model_load: error loading model: vocab size mismatch
llama_load_model_from_file: exception loading model
terminate called after throwing an instance of 'std::runtime_error'
what(): vocab size mismatch
Background: the following commands were run via llama.cpp:
1. pytho... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6088/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3363 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3363/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3363/comments | https://api.github.com/repos/ollama/ollama/issues/3363/events | https://github.com/ollama/ollama/pull/3363 | 2,209,388,838 | PR_kwDOJ0Z1Ps5q2l7k | 3,363 | Detect arrow keys on windows | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-03-26T21:45:04 | 2024-03-26T22:21:57 | 2024-03-26T22:21:56 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3363",
"html_url": "https://github.com/ollama/ollama/pull/3363",
"diff_url": "https://github.com/ollama/ollama/pull/3363.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3363.patch",
"merged_at": "2024-03-26T22:21:56"
This also simplifies the code by using the `golang.org/x/sys/windows` package. Note: this could be simplified further using the `x/term` package on the unix side of things as well, but I kept this change small to fix Windows first.
Fixes https://github.com/ollama/ollama/issues/2639 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3363/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5340 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5340/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5340/comments | https://api.github.com/repos/ollama/ollama/issues/5340/events | https://github.com/ollama/ollama/pull/5340 | 2,378,934,236 | PR_kwDOJ0Z1Ps5z0FtB | 5,340 | gemma2 graph | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-06-27T19:23:22 | 2024-06-27T21:26:50 | 2024-06-27T21:26:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5340",
"html_url": "https://github.com/ollama/ollama/pull/5340",
"diff_url": "https://github.com/ollama/ollama/pull/5340.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5340.patch",
"merged_at": "2024-06-27T21:26:49"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5340/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4207 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4207/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4207/comments | https://api.github.com/repos/ollama/ollama/issues/4207/events | https://github.com/ollama/ollama/issues/4207 | 2,281,579,029 | I_kwDOJ0Z1Ps6H_iIV | 4,207 | mxbai-embed-large embedding not consistent with original paper | {
"login": "deadbeef84",
"id": 961178,
"node_id": "MDQ6VXNlcjk2MTE3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/961178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadbeef84",
"html_url": "https://github.com/deadbeef84",
"followers_url": "https://api.github.com/users/d... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2024-05-06T19:21:37 | 2024-07-24T07:44:43 | 2024-06-09T01:47:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to use embeddings from `mxbai-embed-large` to build similarity/semantic search functionality, but the quality of the embeddings coming from ollama doesn't seem to be very good.
I've tried replicating the numbers from [the original blog post](https://www.mixedbread.ai/blog/mxbai-... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4207/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4207/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5928 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5928/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5928/comments | https://api.github.com/repos/ollama/ollama/issues/5928/events | https://github.com/ollama/ollama/pull/5928 | 2,428,504,473 | PR_kwDOJ0Z1Ps52ZAS2 | 5,928 | llm: update metal/cuda rope | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-24T21:27:30 | 2024-07-24T22:25:04 | 2024-07-24T22:25:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5928",
"html_url": "https://github.com/ollama/ollama/pull/5928",
"diff_url": "https://github.com/ollama/ollama/pull/5928.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5928.patch",
"merged_at": null
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5928/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/519 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/519/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/519/comments | https://api.github.com/repos/ollama/ollama/issues/519/events | https://github.com/ollama/ollama/pull/519 | 1,893,136,644 | PR_kwDOJ0Z1Ps5aKbjz | 519 | Mxyng/decode | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-12T19:35:57 | 2023-09-13T19:43:58 | 2023-09-13T19:43:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/519",
"html_url": "https://github.com/ollama/ollama/pull/519",
"diff_url": "https://github.com/ollama/ollama/pull/519.diff",
"patch_url": "https://github.com/ollama/ollama/pull/519.patch",
"merged_at": "2023-09-13T19:43:57"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/519/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3808 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3808/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3808/comments | https://api.github.com/repos/ollama/ollama/issues/3808/events | https://github.com/ollama/ollama/issues/3808 | 2,255,436,083 | I_kwDOJ0Z1Ps6Gbzkz | 3,808 | Pull multiple chunks in parallel | {
"login": "frankhart2018",
"id": 38374913,
"node_id": "MDQ6VXNlcjM4Mzc0OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/38374913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhart2018",
"html_url": "https://github.com/frankhart2018",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-04-22T02:12:07 | 2024-04-22T23:43:05 | 2024-04-22T18:39:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | I am not sure if this has been proposed earlier or not, but having the capability to pull models using multiple parallel processes would be very useful, especially for larger models, which take quite a lot of time (at least with my network bandwidth) to download. If this is accepted, I'd love to work on this feature :...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3808/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6643 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6643/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6643/comments | https://api.github.com/repos/ollama/ollama/issues/6643/events | https://github.com/ollama/ollama/pull/6643 | 2,506,468,808 | PR_kwDOJ0Z1Ps56dLdr | 6,643 | Minor Go Server Fixes | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2024-09-04T23:23:25 | 2024-09-04T23:51:07 | 2024-09-04T23:50:39 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6643",
"html_url": "https://github.com/ollama/ollama/pull/6643",
"diff_url": "https://github.com/ollama/ollama/pull/6643.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6643.patch",
"merged_at": "2024-09-04T23:50:38"
} | A few fixes to avoid surprises as we get wider testing | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6643/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5951 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5951/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5951/comments | https://api.github.com/repos/ollama/ollama/issues/5951/events | https://github.com/ollama/ollama/issues/5951 | 2,430,000,099 | I_kwDOJ0Z1Ps6Q1tvj | 5,951 | chromadb not working adding collection | {
"login": "dominicdev",
"id": 3959917,
"node_id": "MDQ6VXNlcjM5NTk5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3959917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominicdev",
"html_url": "https://github.com/dominicdev",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-07-25T13:37:41 | 2024-08-05T06:03:12 | 2024-08-05T06:03:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to test the sample from the [Generate embeddings](https://ollama.com/blog/embedding-models) blog post, but adding a collection to ChromaDB doesn't seem to work.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
ollama version is 0.1.48 | {
"login": "dominicdev",
"id": 3959917,
"node_id": "MDQ6VXNlcjM5NTk5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3959917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominicdev",
"html_url": "https://github.com/dominicdev",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5951/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5341 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5341/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5341/comments | https://api.github.com/repos/ollama/ollama/issues/5341/events | https://github.com/ollama/ollama/issues/5341 | 2,378,971,694 | I_kwDOJ0Z1Ps6NzDou | 5,341 | Gemma 2 9B and 27B is not behaving right | {
"login": "jayakumark",
"id": 539851,
"node_id": "MDQ6VXNlcjUzOTg1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/539851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayakumark",
"html_url": "https://github.com/jayakumark",
"followers_url": "https://api.github.com/users/j... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 20 | 2024-06-27T19:46:50 | 2024-09-12T21:24:31 | 2024-09-12T21:24:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Try this in Ollama with Gemma 2 9B or 27B; it just never stops.
Give a succinct summary of the entire email conversation in not more than 40 words,
Emails To Andrew Fastow:
An 11 million dollar financial deal:
| william.giuliani@enron.com | andrew.fastow@enron.com | 2001-06-07 07:48:00 ... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5341/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5341/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5486 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5486/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5486/comments | https://api.github.com/repos/ollama/ollama/issues/5486/events | https://github.com/ollama/ollama/issues/5486 | 2,391,137,067 | I_kwDOJ0Z1Ps6Ohdsr | 5,486 | Upper token limit scales with number of parallel requests | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 0 | 2024-07-04T15:50:37 | 2024-07-04T15:50:39 | null | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
It should be based on a single parallel request's context size.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5486/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7655 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7655/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7655/comments | https://api.github.com/repos/ollama/ollama/issues/7655/events | https://github.com/ollama/ollama/pull/7655 | 2,656,471,844 | PR_kwDOJ0Z1Ps6B03Rr | 7,655 | chore(deps): bump golang.org/x dependencies | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2024-11-13T18:53:04 | 2024-11-14T21:58:27 | 2024-11-14T21:58:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7655",
"html_url": "https://github.com/ollama/ollama/pull/7655",
"diff_url": "https://github.com/ollama/ollama/pull/7655.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7655.patch",
"merged_at": "2024-11-14T21:58:25"
} | Update several core golang.org/x dependencies to their latest stable versions.
## Changes
- `golang.org/x/sync`: v0.3.0 → v0.9.0
- `golang.org/x/image`: v0.14.0 → v0.22.0
- `golang.org/x/text`: v0.15.0 → v0.20.0 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7655/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3495 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3495/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3495/comments | https://api.github.com/repos/ollama/ollama/issues/3495/events | https://github.com/ollama/ollama/issues/3495 | 2,226,201,109 | I_kwDOJ0Z1Ps6EsSIV | 3,495 | Supporting AQLM | {
"login": "vaiju1981",
"id": 421715,
"node_id": "MDQ6VXNlcjQyMTcxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/421715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaiju1981",
"html_url": "https://github.com/vaiju1981",
"followers_url": "https://api.github.com/users/vaij... | [] | open | false | null | [] | null | 0 | 2024-04-04T18:08:01 | 2024-04-19T15:41:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Support AQLM quantized models in ollama. These models have very high quantization, but at the same time they are pretty comparable to the original models.
### How should we solve this?
By adding support for https://github.com/Vahe1994/AQLM (mostly via llama.cpp).
### What is the impact of not ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3495/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3495/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/769 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/769/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/769/comments | https://api.github.com/repos/ollama/ollama/issues/769/events | https://github.com/ollama/ollama/issues/769 | 1,940,397,616 | I_kwDOJ0Z1Ps5zqB4w | 769 | Provide script to pull model manifest and files with curl | {
"login": "ctsrc",
"id": 36199671,
"node_id": "MDQ6VXNlcjM2MTk5Njcx",
"avatar_url": "https://avatars.githubusercontent.com/u/36199671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctsrc",
"html_url": "https://github.com/ctsrc",
"followers_url": "https://api.github.com/users/ctsrc/follow... | [] | closed | false | null | [] | null | 5 | 2023-10-12T16:49:08 | 2023-10-25T18:21:51 | 2023-10-12T17:03:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | Hi, since my computer is behind an HTTP proxy and I can't get ollama to pull through the proxy, I would like to manually pull the files I need using curl.
First, here is what happens if I try to pull, for example, codellama:34b-code with ollama itself from https://ollama.ai/library/codellama/tags:
```zsh
ollama pull codellama:34b-c... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/769/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/6777 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6777/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6777/comments | https://api.github.com/repos/ollama/ollama/issues/6777/events | https://github.com/ollama/ollama/issues/6777 | 2,522,817,414 | I_kwDOJ0Z1Ps6WXyOG | 6,777 | Attribute about model's tool use capability in model_info | {
"login": "StarPet",
"id": 85790781,
"node_id": "MDQ6VXNlcjg1NzkwNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/85790781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StarPet",
"html_url": "https://github.com/StarPet",
"followers_url": "https://api.github.com/users/StarPe... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-09-12T16:10:35 | 2024-09-13T01:17:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | In the current 'model_info' I'm missing an attribute that tells me that the model is capable of handling tool calls. One may check the template data for "$.Tools", which I find rather ugly. Therefore, I propose to add an attribute like, e.g.:
```
general.supports_tool_calls: true
```
or similar attribute. If y... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6777/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6777/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/657 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/657/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/657/comments | https://api.github.com/repos/ollama/ollama/issues/657/events | https://github.com/ollama/ollama/issues/657 | 1,920,224,997 | I_kwDOJ0Z1Ps5ydE7l | 657 | Chat completion endpoint | {
"login": "zifeo",
"id": 9053709,
"node_id": "MDQ6VXNlcjkwNTM3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9053709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zifeo",
"html_url": "https://github.com/zifeo",
"followers_url": "https://api.github.com/users/zifeo/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-09-30T11:36:11 | 2023-10-02T20:02:09 | 2023-10-02T20:02:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Most of the UI are compatible with OpenAI endpoint definitions. Would it be possible to support the same format on ollama so frontend could be easily plugged into? See https://docs.typingmind.com/other-resources/how-tos/use-custom-models-or-local-models-in-typing-mind-(vicuna-alpaca-llama-gpt4all-dolly-etc.). | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/657/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8270 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8270/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8270/comments | https://api.github.com/repos/ollama/ollama/issues/8270/events | https://github.com/ollama/ollama/issues/8270 | 2,763,733,391 | I_kwDOJ0Z1Ps6kuzmP | 8,270 | Incorrect NUMA detection logic, fails for AMD Threadripper 1950X | {
"login": "lukedd",
"id": 2254591,
"node_id": "MDQ6VXNlcjIyNTQ1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2254591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukedd",
"html_url": "https://github.com/lukedd",
"followers_url": "https://api.github.com/users/lukedd/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-12-30T21:52:33 | 2024-12-30T22:13:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On my AMD Threadripper 1950X CPU with NUMA mode enabled in the BIOS, ollama does not detect that I am running on a NUMA system due to flawed logic in its detection code here: https://github.com/ollama/ollama/blob/459d822b5188dba051e21dfd15b6552543a4bbcf/discover/cpu_common.go#L10-L24
I can "t... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8270/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/484 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/484/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/484/comments | https://api.github.com/repos/ollama/ollama/issues/484/events | https://github.com/ollama/ollama/issues/484 | 1,885,880,426 | I_kwDOJ0Z1Ps5waEBq | 484 | `ollama run` doesn't pull model if using a remote host | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg... | closed | false | null | [] | null | 0 | 2023-09-07T13:17:12 | 2023-09-21T17:35:15 | 2023-09-21T17:35:15 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently when running `ollama run` against a remote instance of Ollama (e.g. `OLLAMA_HOST=192.168.1.32:11434 ollama run llama2`, it will error if the model does not exist (vs pulling it). We rely on the client checking for the file here: https://github.com/jmorganca/ollama/blob/main/cmd/cmd.go#L115. Instead we can use... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/484/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8606 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8606/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8606/comments | https://api.github.com/repos/ollama/ollama/issues/8606/events | https://github.com/ollama/ollama/issues/8606 | 2,812,491,291 | I_kwDOJ0Z1Ps6nozYb | 8,606 | Why doesn't my ollama use GPU | {
"login": "baotianxia",
"id": 68735021,
"node_id": "MDQ6VXNlcjY4NzM1MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/68735021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baotianxia",
"html_url": "https://github.com/baotianxia",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 21 | 2025-01-27T09:27:24 | 2025-01-28T02:37:10 | 2025-01-28T02:37:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I installed the Nvidia driver through `sudo apt install nvidia-driver-xxx`, and ollama shows the model is being used on the GPU, but my CPU usage is 100% and the GPU usage is 0%.

, which affects performance and s... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7130/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4301 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4301/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4301/comments | https://api.github.com/repos/ollama/ollama/issues/4301/events | https://github.com/ollama/ollama/pull/4301 | 2,288,540,625 | PR_kwDOJ0Z1Ps5vCSnn | 4,301 | Adds Ollama Grid Search to Community integrations on README | {
"login": "dezoito",
"id": 6494010,
"node_id": "MDQ6VXNlcjY0OTQwMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6494010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dezoito",
"html_url": "https://github.com/dezoito",
"followers_url": "https://api.github.com/users/dezoito/... | [] | closed | false | null | [] | null | 2 | 2024-05-09T22:10:05 | 2024-11-21T19:11:55 | 2024-11-21T09:02:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4301",
"html_url": "https://github.com/ollama/ollama/pull/4301",
"diff_url": "https://github.com/ollama/ollama/pull/4301.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4301.patch",
"merged_at": "2024-11-21T09:02:46"
} | Adds the following content to the Community Integrations section:
### Model/Prompt Evaluation and Optimization
- [Ollama Grid Search](https://github.com/dezoito/ollama-grid-search) (Multi-platform desktop application)
| {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4301/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5240 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5240/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5240/comments | https://api.github.com/repos/ollama/ollama/issues/5240/events | https://github.com/ollama/ollama/issues/5240 | 2,368,742,196 | I_kwDOJ0Z1Ps6NMCM0 | 5,240 | [LINUX] Not using VRAM | {
"login": "Hhk78",
"id": 84645312,
"node_id": "MDQ6VXNlcjg0NjQ1MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/84645312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hhk78",
"html_url": "https://github.com/Hhk78",
"followers_url": "https://api.github.com/users/Hhk78/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-06-23T17:19:14 | 2024-07-05T16:57:37 | 2024-07-05T16:57:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I install the model, 11 MB of VRAM is used and nearly 5 GB of RAM is used.
```bash
➜ ~ free -h
total used free shared buff/cache available
Mem: 31Gi 4,6Gi 20Gi 697Mi 7,2Gi 26Gi
Swap: 0B 0B ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5240/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1936 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1936/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1936/comments | https://api.github.com/repos/ollama/ollama/issues/1936/events | https://github.com/ollama/ollama/pull/1936 | 2,077,760,514 | PR_kwDOJ0Z1Ps5j3hMd | 1,936 | Convert the REPL to use /api/chat for interactive responses | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-11T23:15:31 | 2024-01-12T20:05:53 | 2024-01-12T20:05:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1936",
"html_url": "https://github.com/ollama/ollama/pull/1936",
"diff_url": "https://github.com/ollama/ollama/pull/1936.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1936.patch",
"merged_at": "2024-01-12T20:05:52"
} | This change switches the REPL to use `/api/chat` when running in interactive mode. It will still use `/api/generate` for non-interactive sessions. I've also attempted to DRY out the display response for calls to either end point to be able to properly do word wrapping. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1936/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7280 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7280/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7280/comments | https://api.github.com/repos/ollama/ollama/issues/7280/events | https://github.com/ollama/ollama/issues/7280 | 2,600,912,981 | I_kwDOJ0Z1Ps6bBshV | 7,280 | When server is bound to 0.0.0.0, it should allow also communication redirected by netsh to localhost (issue specific to with WSL2) | {
"login": "mmb78",
"id": 62362216,
"node_id": "MDQ6VXNlcjYyMzYyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/62362216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmb78",
"html_url": "https://github.com/mmb78",
"followers_url": "https://api.github.com/users/mmb78/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | open | false | null | [] | null | 0 | 2024-10-20T22:04:00 | 2024-10-29T17:45:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have an ollama server running within WSL2 on Win10 and want to access it from outside. WSL2 needs extra tricks for the network traffic to reach it.
When I set a netsh rule that takes the outside traffic (allowed by Win firewall) and redirects to "WSL2-IP":11434
netsh interface portproxy add v4tov4 listenad... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7280/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4711 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4711/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4711/comments | https://api.github.com/repos/ollama/ollama/issues/4711/events | https://github.com/ollama/ollama/issues/4711 | 2,324,320,104 | I_kwDOJ0Z1Ps6Kik9o | 4,711 | Adding function calling support for Agents management | {
"login": "flefevre",
"id": 5609620,
"node_id": "MDQ6VXNlcjU2MDk2MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5609620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flefevre",
"html_url": "https://github.com/flefevre",
"followers_url": "https://api.github.com/users/flefe... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-05-29T22:05:05 | 2024-07-26T05:34:02 | 2024-07-26T00:47:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am trying to use Ollama inside Flowise with the concept of Agents.
But it seems that Flowise cannot use Ollama due to:
"Only compatible with models that are capable of function calling: ChatOpenAI, ChatMistral, ChatAnthropic, ChatGoogleGenerativeAI, GroqChat. Best result with GPT-4 model"
is it due to the fact I was... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4711/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6035 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6035/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6035/comments | https://api.github.com/repos/ollama/ollama/issues/6035/events | https://github.com/ollama/ollama/pull/6035 | 2,434,304,578 | PR_kwDOJ0Z1Ps52r7F9 | 6,035 | Update install.sh:Replace "command -v" with encapsulated functionality | {
"login": "wangqingfree",
"id": 28502216,
"node_id": "MDQ6VXNlcjI4NTAyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28502216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangqingfree",
"html_url": "https://github.com/wangqingfree",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 0 | 2024-07-29T02:12:51 | 2024-09-05T16:49:48 | 2024-09-05T16:49:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6035",
"html_url": "https://github.com/ollama/ollama/pull/6035",
"diff_url": "https://github.com/ollama/ollama/pull/6035.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6035.patch",
"merged_at": "2024-09-05T16:49:48"
} | Replace "command -v" with encapsulated functionality | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6035/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5865 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5865/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5865/comments | https://api.github.com/repos/ollama/ollama/issues/5865/events | https://github.com/ollama/ollama/issues/5865 | 2,424,137,321 | I_kwDOJ0Z1Ps6QfWZp | 5,865 | Infinite updates (无限更新) | {
"login": "yuchenwei28",
"id": 141537882,
"node_id": "U_kgDOCG-yWg",
"avatar_url": "https://avatars.githubusercontent.com/u/141537882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenwei28",
"html_url": "https://github.com/yuchenwei28",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | null | [] | null | 4 | 2024-07-23T02:58:58 | 2024-07-23T14:24:50 | 2024-07-23T14:24:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Infinitely updating to 2.0.8 (无限更新到2.0.8)
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
_No response_ | {
"login": "yuchenwei28",
"id": 141537882,
"node_id": "U_kgDOCG-yWg",
"avatar_url": "https://avatars.githubusercontent.com/u/141537882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenwei28",
"html_url": "https://github.com/yuchenwei28",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5865/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6536 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6536/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6536/comments | https://api.github.com/repos/ollama/ollama/issues/6536/events | https://github.com/ollama/ollama/pull/6536 | 2,490,542,649 | PR_kwDOJ0Z1Ps55omB9 | 6,536 | Embeddings fixes | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-08-27T23:35:34 | 2024-08-27T23:49:15 | 2024-08-27T23:49:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6536",
"html_url": "https://github.com/ollama/ollama/pull/6536",
"diff_url": "https://github.com/ollama/ollama/pull/6536.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6536.patch",
"merged_at": "2024-08-27T23:49:14"
} | null | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6536/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1870 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1870/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1870/comments | https://api.github.com/repos/ollama/ollama/issues/1870/events | https://github.com/ollama/ollama/issues/1870 | 2,072,664,043 | I_kwDOJ0Z1Ps57ilfr | 1,870 | last update broke something on my late 2012 imac | {
"login": "umtksa",
"id": 12473742,
"node_id": "MDQ6VXNlcjEyNDczNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/12473742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umtksa",
"html_url": "https://github.com/umtksa",
"followers_url": "https://api.github.com/users/umtksa/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 5 | 2024-01-09T15:56:51 | 2024-01-10T06:58:57 | 2024-01-10T00:51:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | dyld: Symbol not found: _OBJC_CLASS_$_MTLComputePassDescriptor
Referenced from: /usr/local/bin/ollama (which was built for Mac OS X 11.3)
Expected in: /System/Library/Frameworks/Metal.framework/Versions/A/Metal
in /usr/local/bin/ollama
I was using mistral and mixtral; now I cannot even use tinyllama :/
any sugg... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1870/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4365 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4365/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4365/comments | https://api.github.com/repos/ollama/ollama/issues/4365/events | https://github.com/ollama/ollama/issues/4365 | 2,290,953,935 | I_kwDOJ0Z1Ps6IjS7P | 4,365 | llava can't run | {
"login": "Elminsst",
"id": 130235860,
"node_id": "U_kgDOB8M91A",
"avatar_url": "https://avatars.githubusercontent.com/u/130235860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elminsst",
"html_url": "https://github.com/Elminsst",
"followers_url": "https://api.github.com/users/Elminsst/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-05-11T15:12:34 | 2024-07-17T16:17:09 | 2024-07-17T16:17:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I ran `ollama run llava`,
but it didn't work

the server.log is
[GIN] 2024/05/11 - 23:10:27 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/11 - 23:10:27 | 200 | 1.0406ms | ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4365/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6226 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6226/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6226/comments | https://api.github.com/repos/ollama/ollama/issues/6226/events | https://github.com/ollama/ollama/issues/6226 | 2,452,822,165 | I_kwDOJ0Z1Ps6SMxiV | 6,226 | Error: unexpected EOF: | {
"login": "KangInKoo",
"id": 47407250,
"node_id": "MDQ6VXNlcjQ3NDA3MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47407250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KangInKoo",
"html_url": "https://github.com/KangInKoo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/us... | null | 10 | 2024-08-07T07:54:16 | 2024-09-06T00:59:11 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I'm studying fine-tuning.
I trained using the "unsloth/gemma-2-2b-it" model.
I created the dataset myself and it contains fewer than 100 cases.
I want to use only the fine-tuned model without combining it with the existing model.
I was able to use the fine-tuned model using the code below... | null | {
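The truncated body above describes loading a fine-tuned model into Ollama. For context, the usual import flow (a sketch only — the filenames and model name here are hypothetical, not taken from the report) is to export the fine-tuned weights to GGUF and point a Modelfile at them:

```
# Modelfile — hypothetical; point FROM at your converted GGUF weights
FROM ./gemma-2-2b-it-finetuned.gguf

# Then register and run it:
#   ollama create my-gemma -f Modelfile
#   ollama run my-gemma
```

This uses the fine-tuned weights directly, without combining them with any base model already pulled into Ollama.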
"url": "https://api.github.com/repos/ollama/ollama/issues/6226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6226/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5933/comments | https://api.github.com/repos/ollama/ollama/issues/5933/events | https://github.com/ollama/ollama/pull/5933 | 2,428,587,226 | PR_kwDOJ0Z1Ps52ZRvr | 5,933 | update readme to llama3.1 | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2024-07-24T22:45:12 | 2024-07-28T21:21:40 | 2024-07-28T21:21:38 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5933",
"html_url": "https://github.com/ollama/ollama/pull/5933",
"diff_url": "https://github.com/ollama/ollama/pull/5933.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5933.patch",
"merged_at": "2024-07-28T21:21:38"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5933/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3641 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3641/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3641/comments | https://api.github.com/repos/ollama/ollama/issues/3641/events | https://github.com/ollama/ollama/pull/3641 | 2,242,367,650 | PR_kwDOJ0Z1Ps5sm11e | 3,641 | app: gracefully shut down `ollama serve` on windows | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-04-14T21:08:23 | 2024-04-14T22:33:26 | 2024-04-14T22:33:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3641",
"html_url": "https://github.com/ollama/ollama/pull/3641",
"diff_url": "https://github.com/ollama/ollama/pull/3641.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3641.patch",
"merged_at": "2024-04-14T22:33:25"
} | Fixes https://github.com/ollama/ollama/issues/3623 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3641/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4009 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4009/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4009/comments | https://api.github.com/repos/ollama/ollama/issues/4009/events | https://github.com/ollama/ollama/pull/4009 | 2,267,809,442 | PR_kwDOJ0Z1Ps5t8qlI | 4,009 | Fix concurrency for CPU mode | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-04-28T20:48:26 | 2024-04-28T21:20:31 | 2024-04-28T21:20:28 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4009",
"html_url": "https://github.com/ollama/ollama/pull/4009",
"diff_url": "https://github.com/ollama/ollama/pull/4009.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4009.patch",
"merged_at": "2024-04-28T21:20:28"
Prior refactoring passes on #3418 accidentally removed the logic that bypasses VRAM checks for CPU loads. This change adds that logic back, along with test coverage.
It also moves the unit test's access to the loaded map behind the mutex, which was likely the cause of various flakes in the tests. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4009/timeline | null | null | true |