| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4362 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4362/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4362/comments | https://api.github.com/repos/ollama/ollama/issues/4362/events | https://github.com/ollama/ollama/pull/4362 | 2,290,928,451 | PR_kwDOJ0Z1Ps5vKS8y | 4,362 | fix `ollama create`'s usage string | {
"login": "todashuta",
"id": 1555633,
"node_id": "MDQ6VXNlcjE1NTU2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1555633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/todashuta",
"html_url": "https://github.com/todashuta",
"followers_url": "https://api.github.com/users/to... | [] | closed | false | null | [] | null | 0 | 2024-05-11T14:04:53 | 2024-05-12T02:39:20 | 2024-05-11T21:47:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4362",
"html_url": "https://github.com/ollama/ollama/pull/4362",
"diff_url": "https://github.com/ollama/ollama/pull/4362.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4362.patch",
"merged_at": "2024-05-11T21:47:49"
} | Since `StringP()` automatically adds the initial value, the initial value description for Modelfile was duplicated.
I have fixed this by removing the redundant default value description from the usage.
Before:
```
$ ./ollama create -h
Create a model from a Modelfile
Usage:
ollama create MODEL [flags]
Fl... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4362/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8670 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8670/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8670/comments | https://api.github.com/repos/ollama/ollama/issues/8670/events | https://github.com/ollama/ollama/issues/8670 | 2,819,110,416 | I_kwDOJ0Z1Ps6oCDYQ | 8,670 | Ollama official website API for fetching the models and its information | {
"login": "ALAWIII",
"id": 60029291,
"node_id": "MDQ6VXNlcjYwMDI5Mjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/60029291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALAWIII",
"html_url": "https://github.com/ALAWIII",
"followers_url": "https://api.github.com/users/ALAWII... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | open | false | null | [] | null | 0 | 2025-01-29T19:40:32 | 2025-01-29T22:31:42 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | We need to build an API for the ollama website where all the models and their description details are stored, so that we can automatically fetch that data and embed it in our various apps! | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8670/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8670/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2258 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2258/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2258/comments | https://api.github.com/repos/ollama/ollama/issues/2258/events | https://github.com/ollama/ollama/pull/2258 | 2,106,167,737 | PR_kwDOJ0Z1Ps5lXLjp | 2,258 | docs: keep_alive | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 3 | 2024-01-29T18:31:25 | 2024-02-06T16:00:06 | 2024-02-06T16:00:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2258",
"html_url": "https://github.com/ollama/ollama/pull/2258",
"diff_url": "https://github.com/ollama/ollama/pull/2258.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2258.patch",
"merged_at": "2024-02-06T16:00:05"
} | Document the `keep_alive` parameter which keeps the model loaded into memory | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2258/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1170 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1170/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1170/comments | https://api.github.com/repos/ollama/ollama/issues/1170/events | https://github.com/ollama/ollama/issues/1170 | 1,998,558,099 | I_kwDOJ0Z1Ps53H5OT | 1,170 | Allow LLMs to Query a Database Directly | {
"login": "FaizelK",
"id": 24388421,
"node_id": "MDQ6VXNlcjI0Mzg4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/24388421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FaizelK",
"html_url": "https://github.com/FaizelK",
"followers_url": "https://api.github.com/users/Faizel... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 19 | 2023-11-17T08:41:58 | 2024-09-01T22:48:23 | 2024-05-09T23:03:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have installed ollama and can run prompts, example: `ollama run llama2 "why is the sky blue?"`
Is there any way to connect to a MySQL database and start asking about database data? Example:
`###### database file######
database.cnf
host="localhost"
user="admin"
password="admin"
database="mDatabase"
ollama ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1170/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6940 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6940/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6940/comments | https://api.github.com/repos/ollama/ollama/issues/6940/events | https://github.com/ollama/ollama/pull/6940 | 2,546,315,995 | PR_kwDOJ0Z1Ps58kXZO | 6,940 | CI: Fix win arm version defect | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-09-24T20:21:17 | 2024-09-24T22:18:13 | 2024-09-24T22:18:11 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6940",
"html_url": "https://github.com/ollama/ollama/pull/6940",
"diff_url": "https://github.com/ollama/ollama/pull/6940.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6940.patch",
"merged_at": "2024-09-24T22:18:10"
} | Build 0.3.12-rc5 reports a pre-release string on win-arm due to the version not being set properly in CI.
Write-Host in PowerShell writes directly to the console and will not be picked
up by a pipe. Echo, or Write-Output, will.
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6940/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1833 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1833/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1833/comments | https://api.github.com/repos/ollama/ollama/issues/1833/events | https://github.com/ollama/ollama/pull/1833 | 2,068,998,839 | PR_kwDOJ0Z1Ps5jZjC8 | 1,833 | Dont use `-Wall` in static build | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-01-07T05:47:53 | 2024-01-07T15:39:20 | 2024-01-07T15:39:19 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1833",
"html_url": "https://github.com/ollama/ollama/pull/1833",
"diff_url": "https://github.com/ollama/ollama/pull/1833.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1833.patch",
"merged_at": "2024-01-07T15:39:19"
} | Fixes this warning:
```
% go build .
# github.com/jmorganca/ollama/llm
cgo-gcc-prolog:153:33: warning: unused variable '_cgo_a' [-Wunused-variable]
cgo-gcc-prolog:165:33: warning: unused variable '_cgo_a' [-Wunused-variable]
``` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1833/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/104 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/104/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/104/comments | https://api.github.com/repos/ollama/ollama/issues/104/events | https://github.com/ollama/ollama/pull/104 | 1,810,752,804 | PR_kwDOJ0Z1Ps5V1CBF | 104 | use readline | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-18T21:32:24 | 2023-07-19T20:36:28 | 2023-07-19T20:36:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/104",
"html_url": "https://github.com/ollama/ollama/pull/104",
"diff_url": "https://github.com/ollama/ollama/pull/104.diff",
"patch_url": "https://github.com/ollama/ollama/pull/104.patch",
"merged_at": "2023-07-19T20:36:24"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/104/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2341 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2341/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2341/comments | https://api.github.com/repos/ollama/ollama/issues/2341/events | https://github.com/ollama/ollama/pull/2341 | 2,116,779,679 | PR_kwDOJ0Z1Ps5l7ja6 | 2,341 | Revamp the windows tray code | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-02-04T01:00:37 | 2024-02-04T18:45:06 | 2024-02-04T18:45:02 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2341",
"html_url": "https://github.com/ollama/ollama/pull/2341",
"diff_url": "https://github.com/ollama/ollama/pull/2341.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2341.patch",
"merged_at": "2024-02-04T18:45:02"
} | To get more control over our windows app this pulls the win32 logic into our Go code instead of using an upstream library.
Still gobs of debug logging that I'll clean up soon, but it's now functional. The upgrade flow doesn't work yet of course. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2341/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6915 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6915/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6915/comments | https://api.github.com/repos/ollama/ollama/issues/6915/events | https://github.com/ollama/ollama/issues/6915 | 2,542,299,940 | I_kwDOJ0Z1Ps6XiGsk | 6,915 | qwen2.5 can't stop answering | {
"login": "xutiange",
"id": 16460665,
"node_id": "MDQ6VXNlcjE2NDYwNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/16460665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xutiange",
"html_url": "https://github.com/xutiange",
"followers_url": "https://api.github.com/users/xut... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-09-23T11:04:36 | 2024-10-16T08:21:49 | 2024-10-04T08:19:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I use qwen2.5, the model sometimes keeps responding and cannot be stopped.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.3.11 | {
"login": "xutiange",
"id": 16460665,
"node_id": "MDQ6VXNlcjE2NDYwNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/16460665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xutiange",
"html_url": "https://github.com/xutiange",
"followers_url": "https://api.github.com/users/xut... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6915/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5394 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5394/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5394/comments | https://api.github.com/repos/ollama/ollama/issues/5394/events | https://github.com/ollama/ollama/issues/5394 | 2,382,430,437 | I_kwDOJ0Z1Ps6OAQDl | 5,394 | Ollama loads gemma2 27b with --ctx-size 16384 | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-30T20:28:35 | 2024-08-16T00:38:18 | 2024-08-16T00:38:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Isn't gemma2's context size 8192?
When I access gemma2:27b-instruct-q8_0 with the OpenAI API, it loads the model with `--ctx-size 16384` according to the log.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.48 | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5394/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7389 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7389/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7389/comments | https://api.github.com/repos/ollama/ollama/issues/7389/events | https://github.com/ollama/ollama/pull/7389 | 2,616,963,532 | PR_kwDOJ0Z1Ps6ABt-B | 7,389 | chore: update llama.h | {
"login": "eltociear",
"id": 22633385,
"node_id": "MDQ6VXNlcjIyNjMzMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltociear",
"html_url": "https://github.com/eltociear",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-10-27T23:49:23 | 2024-10-28T23:17:52 | 2024-10-28T23:17:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7389",
"html_url": "https://github.com/ollama/ollama/pull/7389",
"diff_url": "https://github.com/ollama/ollama/pull/7389.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7389.patch",
"merged_at": null
} | indicies -> indices | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7389/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7371 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7371/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7371/comments | https://api.github.com/repos/ollama/ollama/issues/7371/events | https://github.com/ollama/ollama/issues/7371 | 2,615,459,449 | I_kwDOJ0Z1Ps6b5L55 | 7,371 | Loading qwen2.5-1.5b-instruct-fp16.gguf runs very slowly | {
"login": "czhcc",
"id": 4754730,
"node_id": "MDQ6VXNlcjQ3NTQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czhcc",
"html_url": "https://github.com/czhcc",
"followers_url": "https://api.github.com/users/czhcc/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-10-26T02:14:25 | 2024-10-30T05:38:15 | 2024-10-30T05:38:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
However, the quantized qwen2.5-1.5b downloaded from the repository is still fast.
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | {
"login": "czhcc",
"id": 4754730,
"node_id": "MDQ6VXNlcjQ3NTQ3MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4754730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czhcc",
"html_url": "https://github.com/czhcc",
"followers_url": "https://api.github.com/users/czhcc/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7371/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2783 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2783/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2783/comments | https://api.github.com/repos/ollama/ollama/issues/2783/events | https://github.com/ollama/ollama/issues/2783 | 2,156,801,774 | I_kwDOJ0Z1Ps6Aji7u | 2,783 | Connection Error with OllamaFunctions in Langchain | {
"login": "quartermaine",
"id": 24212117,
"node_id": "MDQ6VXNlcjI0MjEyMTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/24212117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quartermaine",
"html_url": "https://github.com/quartermaine",
"followers_url": "https://api.github.c... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjg... | closed | false | null | [] | null | 6 | 2024-02-27T14:54:24 | 2025-01-12T00:52:08 | 2025-01-12T00:52:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Description
I am attempting to replicate the [Langchain tutorial](https://python.langchain.com/docs/integrations/chat/ollama_functions) in order to use OllamaFunctions for web extraction, as also demonstrated [here](https://python.langchain.com/docs/use_cases/web_scraping#scraping-with-extraction) in a Google Cola... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2783/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1979 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1979/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1979/comments | https://api.github.com/repos/ollama/ollama/issues/1979/events | https://github.com/ollama/ollama/issues/1979 | 2,080,466,913 | I_kwDOJ0Z1Ps58AWfh | 1,979 | Unable to get Ollama to utilize GPU on Jetson Orin Nano 8Gb | {
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/foll... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 81 | 2024-01-13T20:37:34 | 2024-04-20T02:47:34 | 2024-03-25T19:51:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I've reviewed the great tutorial made by @bnodnarb here:
https://github.com/jmorganca/ollama/blob/main/docs/tutorials/nvidia-jetson.md
The Orin Nano is running Ubuntu 20.04 with Jetpack 5.1.2 (r35.4.1 L4T). The container is also running L4T version 35.4.1. Jetpack 5.1.2 comes with CUDA 11.4 installed with compatibi... | {
"login": "remy415",
"id": 105550370,
"node_id": "U_kgDOBkqSIg",
"avatar_url": "https://avatars.githubusercontent.com/u/105550370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remy415",
"html_url": "https://github.com/remy415",
"followers_url": "https://api.github.com/users/remy415/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1979/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1979/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/688 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/688/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/688/comments | https://api.github.com/repos/ollama/ollama/issues/688/events | https://github.com/ollama/ollama/issues/688 | 1,923,366,960 | I_kwDOJ0Z1Ps5ypEAw | 688 | Unable to create account with a secure password | {
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 6 | 2023-10-03T05:55:38 | 2023-10-12T00:35:30 | 2023-10-12T00:35:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was just about to create an account with the following passwords:
»,àî´æ=`"((Ý#±«"ü×%'yWðÍ&îPqØTX;¯þ;¿×æX˵¾ÖÛDþí,Á_+*ĬÊ<µ¾¡f'»êÎÖp¢e_P°óZk@XñÊ7ÒÊÖ©mðÂÝs5jÛCCýZ-C¹ÎÖúÃ'ô½¡7§îW(ÂcT_*Jo©h9>9Ãèh[Í
pw_E}]kz#uEnn`Lr@[FF{jfS+~M*rd/52iWxja%jobADqcWX\oaZ[;=bPM].5Kc(gJH-_+okZbeQ'wQ_nVVQV-C{r3/7}+%#:{,->y.K,'A-M/fR9gw%*... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/688/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3054 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3054/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3054/comments | https://api.github.com/repos/ollama/ollama/issues/3054/events | https://github.com/ollama/ollama/issues/3054 | 2,179,435,900 | I_kwDOJ0Z1Ps6B5418 | 3,054 | Immense amount of disk reads when paging with `mmap` | {
"login": "hedleyroos",
"id": 316314,
"node_id": "MDQ6VXNlcjMxNjMxNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/316314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hedleyroos",
"html_url": "https://github.com/hedleyroos",
"followers_url": "https://api.github.com/users/h... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-03-11T15:29:17 | 2024-08-02T08:22:41 | 2024-07-24T23:10:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Querying the `/generate` API with the llama2 models works as expected. I append `keep_alive=0` to the query string to keep the model in RAM, and from `iotop` I can see it immediately loads the model in RAM (I am in CPU only mode). The loading also seems to take place in a single sub-process or thread - unsure which sin... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3054/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/553 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/553/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/553/comments | https://api.github.com/repos/ollama/ollama/issues/553/events | https://github.com/ollama/ollama/pull/553 | 1,902,219,201 | PR_kwDOJ0Z1Ps5ao06Y | 553 | add word wrapping for lines which are longer than the terminal width | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-19T05:15:36 | 2023-09-26T23:24:35 | 2023-09-22T20:36:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/553",
"html_url": "https://github.com/ollama/ollama/pull/553",
"diff_url": "https://github.com/ollama/ollama/pull/553.diff",
"patch_url": "https://github.com/ollama/ollama/pull/553.patch",
"merged_at": "2023-09-22T20:36:08"
} | This change makes it so the REPL will properly wrap a line on a word boundary. The way it works is that it walks through each character of each token returned by the server, and then keeps a buffer of the last word. If the maximum boundary length is exceeded, it will backtrack using ANSI escape codes to the length of t... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/553/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/974 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/974/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/974/comments | https://api.github.com/repos/ollama/ollama/issues/974/events | https://github.com/ollama/ollama/pull/974 | 1,974,951,437 | PR_kwDOJ0Z1Ps5eeH0F | 974 | remove modelfile context deprecated in v0.0.7 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-11-02T20:02:47 | 2023-11-03T00:52:57 | 2023-11-03T00:52:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/974",
"html_url": "https://github.com/ollama/ollama/pull/974",
"diff_url": "https://github.com/ollama/ollama/pull/974.diff",
"patch_url": "https://github.com/ollama/ollama/pull/974.patch",
"merged_at": "2023-11-03T00:52:56"
} | This modelfile variable was deprecated in ollama v0.0.7, which was a very early stage of the project. It was also not documented anywhere, and no longer used in our library images. It should be ok to remove this now. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/974/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5817 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5817/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5817/comments | https://api.github.com/repos/ollama/ollama/issues/5817/events | https://github.com/ollama/ollama/pull/5817 | 2,421,077,019 | PR_kwDOJ0Z1Ps51_y_P | 5,817 | llm: consider `head_dim` in llama arch | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-20T22:11:50 | 2024-07-21T01:48:14 | 2024-07-21T01:48:12 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5817",
"html_url": "https://github.com/ollama/ollama/pull/5817",
"diff_url": "https://github.com/ollama/ollama/pull/5817.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5817.patch",
"merged_at": "2024-07-21T01:48:12"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5817/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7788 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7788/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7788/comments | https://api.github.com/repos/ollama/ollama/issues/7788/events | https://github.com/ollama/ollama/issues/7788 | 2,681,946,191 | I_kwDOJ0Z1Ps6f20BP | 7,788 | Ollama 0.4.3 ignores HTTPS_PROXY | {
"login": "0xmeyer",
"id": 125983009,
"node_id": "U_kgDOB4JZIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125983009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0xmeyer",
"html_url": "https://github.com/0xmeyer",
"followers_url": "https://api.github.com/users/0xmeyer/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-11-22T06:13:07 | 2024-11-25T23:08:35 | 2024-11-25T23:08:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Since Ollama `0.4.3`, the environment variable `HTTPS_PROXY` is ignored. My old deployment ran Ollama `0.4.1` without any problems.
Should be reproducible:
```bash
$ export HTTPS_PROXY=http://<PROXY>:3128
$ ollama -v
Warning: could not connect to a running Ollama instance
Warning: ... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7788/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7788/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3221 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3221/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3221/comments | https://api.github.com/repos/ollama/ollama/issues/3221/events | https://github.com/ollama/ollama/issues/3221 | 2,191,832,413 | I_kwDOJ0Z1Ps6CpLVd | 3,221 | How to catch errors using ollama compatibility with OpenAI API | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 1 | 2024-03-18T10:36:23 | 2024-03-19T17:28:38 | 2024-03-19T17:28:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, I'm trying this code:
```python
def llama_openaiv2(prompt,
              add_inst=True,  # True by default; set it to False if you use a base model
              model="llama2",
              temperature=0.0,  # OpenAI's default is 1.0 or 0.7 depending on the model; OpenAI accepts 0.0 to 2.0, llama2 0.0 to 1.0
... | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3221/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5260 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5260/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5260/comments | https://api.github.com/repos/ollama/ollama/issues/5260/events | https://github.com/ollama/ollama/issues/5260 | 2,371,200,516 | I_kwDOJ0Z1Ps6NVaYE | 5,260 | Code autopilot | {
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2024-06-24T22:14:12 | 2024-07-02T01:25:17 | 2024-07-02T01:25:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/marketplace/code-autopilot-ai-coder | {
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5260/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6064 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6064/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6064/comments | https://api.github.com/repos/ollama/ollama/issues/6064/events | https://github.com/ollama/ollama/pull/6064 | 2,436,506,166 | PR_kwDOJ0Z1Ps52zi4K | 6,064 | convert: update llama conversion for llama3.1 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-07-29T22:33:17 | 2024-08-21T19:57:11 | 2024-08-21T19:57:09 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6064",
"html_url": "https://github.com/ollama/ollama/pull/6064",
"diff_url": "https://github.com/ollama/ollama/pull/6064.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6064.patch",
"merged_at": "2024-08-21T19:57:09"
} | llama3.1 contains a new tensor for RoPE scaling factors. Derive this new tensor from llama3.1 configs if they exist. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6064/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/6064/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1472 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1472/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1472/comments | https://api.github.com/repos/ollama/ollama/issues/1472/events | https://github.com/ollama/ollama/issues/1472 | 2,036,434,627 | I_kwDOJ0Z1Ps55YYbD | 1,472 | Support for fully airgapped environment | {
"login": "yyefet",
"id": 11426837,
"node_id": "MDQ6VXNlcjExNDI2ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/11426837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyefet",
"html_url": "https://github.com/yyefet",
"followers_url": "https://api.github.com/users/yyefet/fo... | [] | closed | false | null | [] | null | 2 | 2023-12-11T20:04:01 | 2023-12-11T21:40:09 | 2023-12-11T21:37:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Feature request to support fully air-gapped environments, ensuring no calls/requests leave the server externally if the user desires.
Proposing an --airgap or --no-external flag to disable all telemetry, pulls from public repos, and phoning home of any sort. | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1472/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5973 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5973/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5973/comments | https://api.github.com/repos/ollama/ollama/issues/5973/events | https://github.com/ollama/ollama/issues/5973 | 2,431,507,573 | I_kwDOJ0Z1Ps6Q7dx1 | 5,973 | Error: template: :28:7: executing "" at <.ToolCalls>: can't evaluate field ToolCalls in type *api.Message | {
"login": "dashan996",
"id": 164734277,
"node_id": "U_kgDOCdGlRQ",
"avatar_url": "https://avatars.githubusercontent.com/u/164734277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dashan996",
"html_url": "https://github.com/dashan996",
"followers_url": "https://api.github.com/users/dashan... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2024-07-26T06:18:18 | 2024-07-26T21:24:37 | 2024-07-26T21:24:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After downloading llama3.1:70b and running it, I hit this error. I have tried other models and they all work well. I deleted the model and downloaded it again, but the error still appears.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.2.2 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5973/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5973/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8223 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8223/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8223/comments | https://api.github.com/repos/ollama/ollama/issues/8223/events | https://github.com/ollama/ollama/issues/8223 | 2,756,961,848 | I_kwDOJ0Z1Ps6kU-Y4 | 8,223 | Swagger UI implementation for basic testing on the ollama API | {
"login": "jordi-vancuijlenborg-vinci",
"id": 45209125,
"node_id": "MDQ6VXNlcjQ1MjA5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/45209125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordi-vancuijlenborg-vinci",
"html_url": "https://github.com/jordi-vancuijlenborg-vinci"... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-24T01:02:52 | 2024-12-24T19:22:27 | 2024-12-24T19:22:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8223/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3664 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3664/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3664/comments | https://api.github.com/repos/ollama/ollama/issues/3664/events | https://github.com/ollama/ollama/pull/3664 | 2,244,867,811 | PR_kwDOJ0Z1Ps5svaO6 | 3,664 | fix padding to only return padding | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-16T00:32:11 | 2024-04-17T22:57:41 | 2024-04-17T22:57:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3664",
"html_url": "https://github.com/ollama/ollama/pull/3664",
"diff_url": "https://github.com/ollama/ollama/pull/3664.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3664.patch",
"merged_at": "2024-04-17T22:57:40"
} | follow up to #3663 to simplify padding() | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3664/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3108 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3108/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3108/comments | https://api.github.com/repos/ollama/ollama/issues/3108/events | https://github.com/ollama/ollama/issues/3108 | 2,184,218,049 | I_kwDOJ0Z1Ps6CMIXB | 3,108 | Usability improvement for ollama rm | {
"login": "aosan",
"id": 8534160,
"node_id": "MDQ6VXNlcjg1MzQxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8534160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aosan",
"html_url": "https://github.com/aosan",
"followers_url": "https://api.github.com/users/aosan/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-03-13T14:54:10 | 2024-03-23T19:51:28 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In the current implementation, `ollama rm` removes a model without prompting for confirmation. Please consider adding a confirmation prompt, with the customary N/y option, to avoid removing models by mistake, especially since `run` and `rm` look similar. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3108/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8019 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8019/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8019/comments | https://api.github.com/repos/ollama/ollama/issues/8019/events | https://github.com/ollama/ollama/pull/8019 | 2,728,479,652 | PR_kwDOJ0Z1Ps6EnePW | 8,019 | Delete redundant code that never happens. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 0 | 2024-12-09T23:06:47 | 2024-12-11T00:07:18 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8019",
"html_url": "https://github.com/ollama/ollama/pull/8019",
"diff_url": "https://github.com/ollama/ollama/pull/8019.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8019.patch",
"merged_at": null
} | This code will never happen.
It's dead code and just wastes CPU instructions. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8019/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/248 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/248/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/248/comments | https://api.github.com/repos/ollama/ollama/issues/248/events | https://github.com/ollama/ollama/issues/248 | 1,830,013,491 | I_kwDOJ0Z1Ps5tE8oz | 248 | Modelfile only packages in one license | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-07-31T20:55:54 | 2023-08-02T00:18:16 | 2023-08-02T00:18:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When two licenses are specified, one gets removed from the packaged modelfile
```
case "license", "template", "system", "prompt":
fn(api.ProgressResponse{Status: fmt.Sprintf("creating model %s layer", c.Name)})
// remove the prompt layer if one exists
mediaType := fmt.Sprintf("application/vnd.ollama.ima... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/248/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1519 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1519/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1519/comments | https://api.github.com/repos/ollama/ollama/issues/1519/events | https://github.com/ollama/ollama/issues/1519 | 2,041,485,230 | I_kwDOJ0Z1Ps55rpeu | 1,519 | LLM Model Cache files | {
"login": "PrasannaVnewtglobal",
"id": 145771576,
"node_id": "U_kgDOCLBMOA",
"avatar_url": "https://avatars.githubusercontent.com/u/145771576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrasannaVnewtglobal",
"html_url": "https://github.com/PrasannaVnewtglobal",
"followers_url": "https... | [] | closed | false | null | [] | null | 5 | 2023-12-14T11:17:35 | 2024-05-10T04:36:24 | 2024-05-10T00:55:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama stores the LLM models in the modelfile "List". When I run a model in the first SSH session it gives good results and stores some cache, but when I open a new session it does not utilize the previous response cache. Where is the cache file for the LLM model? I couldn't find the cache file. w... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1519/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/359 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/359/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/359/comments | https://api.github.com/repos/ollama/ollama/issues/359/events | https://github.com/ollama/ollama/issues/359 | 1,853,012,052 | I_kwDOJ0Z1Ps5ucrhU | 359 | Where are the Modelfiles? | {
"login": "khromov",
"id": 1207507,
"node_id": "MDQ6VXNlcjEyMDc1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1207507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khromov",
"html_url": "https://github.com/khromov",
"followers_url": "https://api.github.com/users/khromov/... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 3 | 2023-08-16T11:07:07 | 2023-08-22T01:02:58 | 2023-08-22T01:02:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | At some point the model files seem to have been located in the repo, such as this result, which shows up on Google, but now they are gone. Where can we find them? https://github.com/jmorganca/ollama/blob/main/library/modelfiles/llama2 | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/359/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2241 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2241/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2241/comments | https://api.github.com/repos/ollama/ollama/issues/2241/events | https://github.com/ollama/ollama/pull/2241 | 2,104,343,768 | PR_kwDOJ0Z1Ps5lQ8gH | 2,241 | Do not repeat system prompt for chat templating | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-01-28T20:53:47 | 2024-01-28T22:15:57 | 2024-01-28T22:15:57 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2241",
"html_url": "https://github.com/ollama/ollama/pull/2241",
"diff_url": "https://github.com/ollama/ollama/pull/2241.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2241.patch",
"merged_at": "2024-01-28T22:15:57"
} | Before:
```
<|im_start|>system
You are a happy dog<|im_end|>
<|im_start|>assistant
hi im a friendly assistant<|im_end|>
<|im_start|>system
You are a happy dog<|im_end|>
<|im_start|>user
who are you?<|im_end|>
```
After:
```
<|im_start|>system
You are a happy dog<|im_end|>
<|im_start|>assistant
hi ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2241/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7949/comments | https://api.github.com/repos/ollama/ollama/issues/7949/events | https://github.com/ollama/ollama/issues/7949 | 2,719,919,741 | I_kwDOJ0Z1Ps6iHq59 | 7,949 | panic: failed to decode batch: could not find a kv cache slot goroutine 22 [running]: main.(*Server).run(0xc0000c2120, {0x556536b63ba0, 0xc00008a0a0}) github.com/ollama/ollama/llama/runner/runner.go:344 +0x23e created by main.main in goroutine 1 github.com/ollama/ollama/llama/runner/runner.go:978 +0xcc7 | {
"login": "watch-Ultra",
"id": 177522180,
"node_id": "U_kgDOCpTGBA",
"avatar_url": "https://avatars.githubusercontent.com/u/177522180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/watch-Ultra",
"html_url": "https://github.com/watch-Ultra",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 9 | 2024-12-05T09:55:40 | 2025-01-04T05:00:33 | 2024-12-17T22:01:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7949/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7601 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7601/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7601/comments | https://api.github.com/repos/ollama/ollama/issues/7601/events | https://github.com/ollama/ollama/issues/7601 | 2,647,474,412 | I_kwDOJ0Z1Ps6dzUDs | 7,601 | Updating OpenCoder model information in library | {
"login": "elsatch",
"id": 653433,
"node_id": "MDQ6VXNlcjY1MzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsatch",
"html_url": "https://github.com/elsatch",
"followers_url": "https://api.github.com/users/elsatch/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-11-10T17:46:11 | 2024-11-10T21:21:06 | 2024-11-10T21:21:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been trying to use the new OpenCoder model with the Continue extension and discovered that it doesn't work. Asking on Continue's Discord, someone pointed me to this thread on HF:
https://huggingface.co/infly/OpenCoder-8B-Instruct/discussions/2#67304d50de572a8535e3d20b
In that thread you can f... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7601/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1238 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1238/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1238/comments | https://api.github.com/repos/ollama/ollama/issues/1238/events | https://github.com/ollama/ollama/issues/1238 | 2,006,254,412 | I_kwDOJ0Z1Ps53lQNM | 1,238 | Feature request: Chat logs auto-save by default | {
"login": "bitcoinmeetups",
"id": 2834754,
"node_id": "MDQ6VXNlcjI4MzQ3NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2834754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bitcoinmeetups",
"html_url": "https://github.com/bitcoinmeetups",
"followers_url": "https://api.gith... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-11-22T12:38:26 | 2024-03-12T20:16:24 | 2024-03-12T20:16:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I'm Mr. Cascade, a very friendly guy.
I would like to make the following feature request:
Chats automatically being saved by default.
This is also the expected behaviour. Similar software like terminal gpt has it enabled by default.
I'm on a server where I can't scroll up on Tmux so while testing I ... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1238/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1991 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1991/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1991/comments | https://api.github.com/repos/ollama/ollama/issues/1991/events | https://github.com/ollama/ollama/issues/1991 | 2,080,854,566 | I_kwDOJ0Z1Ps58B1Im | 1,991 | Error: Post "http://127.0.0.1:11434/api/generate": EOF | {
"login": "joesalvati68",
"id": 59943835,
"node_id": "MDQ6VXNlcjU5OTQzODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/59943835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joesalvati68",
"html_url": "https://github.com/joesalvati68",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 21 | 2024-01-14T18:54:14 | 2024-02-11T10:12:26 | 2024-01-28T20:00:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | (base) user@userAlienware:~$ ollama run vicuna
Error: Post "http://127.0.0.1:11434/api/generate": EOF
(base) user@userAlienware:~$
I keep getting this after initial install and I can't figure out why. Any ideas? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1991/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3270 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3270/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3270/comments | https://api.github.com/repos/ollama/ollama/issues/3270/events | https://github.com/ollama/ollama/pull/3270 | 2,197,461,029 | PR_kwDOJ0Z1Ps5qN8aS | 3,270 | refactor(cmd): distribute commands into root.go, create.go, run.go, etc. | {
"login": "igophper",
"id": 34326532,
"node_id": "MDQ6VXNlcjM0MzI2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34326532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igophper",
"html_url": "https://github.com/igophper",
"followers_url": "https://api.github.com/users/igo... | [] | closed | false | null | [] | null | 1 | 2024-03-20T12:51:09 | 2024-04-07T09:57:30 | 2024-04-07T06:37:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3270",
"html_url": "https://github.com/ollama/ollama/pull/3270",
"diff_url": "https://github.com/ollama/ollama/pull/3270.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3270.patch",
"merged_at": null
} | Previously, all commands were located in a single cmd.go file. This refactor improves the organization and readability of the code by distributing commands into their respective files such as root.go, create.go, run.go, etc. | {
"login": "igophper",
"id": 34326532,
"node_id": "MDQ6VXNlcjM0MzI2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34326532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igophper",
"html_url": "https://github.com/igophper",
"followers_url": "https://api.github.com/users/igo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3270/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3270/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2646 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2646/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2646/comments | https://api.github.com/repos/ollama/ollama/issues/2646/events | https://github.com/ollama/ollama/issues/2646 | 2,147,447,221 | I_kwDOJ0Z1Ps5__3G1 | 2,646 | Defect: EOF on running with Gemma:7b | {
"login": "kvchitrapu",
"id": 44282098,
"node_id": "MDQ6VXNlcjQ0MjgyMDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44282098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kvchitrapu",
"html_url": "https://github.com/kvchitrapu",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-02-21T18:39:05 | 2024-02-21T18:49:38 | 2024-02-21T18:49:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | OS: Mac M1 Pro
```
$ ollama run gemma:7b
pulling manifest
pulling 2c5f288be750... 100% ▕████████████████████████████████████████████▏ 4.8 GB
pulling 097a36493f71... 100% ▕████████████████████████████████████████████▏ 8.4 KB
pulling 109037bec39c... 100% ▕████████... | {
"login": "kvchitrapu",
"id": 44282098,
"node_id": "MDQ6VXNlcjQ0MjgyMDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44282098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kvchitrapu",
"html_url": "https://github.com/kvchitrapu",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2646/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2498 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2498/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2498/comments | https://api.github.com/repos/ollama/ollama/issues/2498/events | https://github.com/ollama/ollama/pull/2498 | 2,134,944,637 | PR_kwDOJ0Z1Ps5m5FV_ | 2,498 | Windows Preview | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-02-14T18:35:49 | 2024-02-14T18:40:05 | 2024-02-14T18:40:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2498",
"html_url": "https://github.com/ollama/ollama/pull/2498",
"diff_url": "https://github.com/ollama/ollama/pull/2498.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2498.patch",
"merged_at": null
} | Copy of #2481 using branch | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2498/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/820 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/820/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/820/comments | https://api.github.com/repos/ollama/ollama/issues/820/events | https://github.com/ollama/ollama/issues/820 | 1,947,728,944 | I_kwDOJ0Z1Ps50F_ww | 820 | interactive mode with prompt as argument | {
"login": "jonas-w",
"id": 32615971,
"node_id": "MDQ6VXNlcjMyNjE1OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/32615971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonas-w",
"html_url": "https://github.com/jonas-w",
"followers_url": "https://api.github.com/users/jonas-... | [] | closed | false | null | [] | null | 2 | 2023-10-17T15:41:54 | 2023-10-17T21:54:36 | 2023-10-17T20:34:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When passing a prompt as an argument, e.g. `ollama run $MODEL "Hello World!"`, ollama exits after that prompt and doesn't wait for further input, as it does with a plain `ollama run $MODEL`.
Would it be possible to add an `-i`/`--interactive` flag for use when the prompt is passed directly? | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/820/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2488 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2488/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2488/comments | https://api.github.com/repos/ollama/ollama/issues/2488/events | https://github.com/ollama/ollama/issues/2488 | 2,133,781,415 | I_kwDOJ0Z1Ps5_Luun | 2,488 | How can fine tune with ollama? | {
"login": "KDH-Korea",
"id": 126445656,
"node_id": "U_kgDOB4loWA",
"avatar_url": "https://avatars.githubusercontent.com/u/126445656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KDH-Korea",
"html_url": "https://github.com/KDH-Korea",
"followers_url": "https://api.github.com/users/KDH-Ko... | [] | closed | false | null | [] | null | 3 | 2024-02-14T08:12:19 | 2024-02-20T22:52:07 | 2024-02-20T22:52:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I want to fine-tune the Mistral model imported using Ollama, but there is no information available, and it's even more challenging to find information in Korea where not many people are familiar with Ollama. I would appreciate it if you could provide information on how to fine-tune the model using Ollama. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2488/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/ollama/ollama/issues/2488/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2002 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2002/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2002/comments | https://api.github.com/repos/ollama/ollama/issues/2002/events | https://github.com/ollama/ollama/issues/2002 | 2,082,071,134 | I_kwDOJ0Z1Ps58GeJe | 2,002 | how to enable amd gpu for ollama ? | {
"login": "hemangjoshi37a",
"id": 12392345,
"node_id": "MDQ6VXNlcjEyMzkyMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemangjoshi37a",
"html_url": "https://github.com/hemangjoshi37a",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 2 | 2024-01-15T13:47:33 | 2024-01-15T14:04:51 | 2024-01-15T13:56:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | how to enable amd gpu for ollama ? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2002/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7084 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7084/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7084/comments | https://api.github.com/repos/ollama/ollama/issues/7084/events | https://github.com/ollama/ollama/pull/7084 | 2,562,818,040 | PR_kwDOJ0Z1Ps59b7DO | 7,084 | Adding: OrionChat: A Web Interface for Seamless AI Conversation | {
"login": "EliasPereirah",
"id": 16616409,
"node_id": "MDQ6VXNlcjE2NjE2NDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16616409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EliasPereirah",
"html_url": "https://github.com/EliasPereirah",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 1 | 2024-10-02T23:32:12 | 2024-11-21T19:23:42 | 2024-11-21T19:23:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7084",
"html_url": "https://github.com/ollama/ollama/pull/7084",
"diff_url": "https://github.com/ollama/ollama/pull/7084.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7084.patch",
"merged_at": "2024-11-21T19:23:42"
} | OrionChat is a free web-based chat interface that simplifies interactions with multiple AI model providers. It provides a unified platform for chatting and exploring multiple large language models (LLMs), including:
- **Ollama**
- OpenAI
- Google Gemini
- Claude (Anthropic)
- Groq Inc.
- Cerebras | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7084/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7050 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7050/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7050/comments | https://api.github.com/repos/ollama/ollama/issues/7050/events | https://github.com/ollama/ollama/pull/7050 | 2,557,223,671 | PR_kwDOJ0Z1Ps59Jgfb | 7,050 | Stop model before deletion if loaded (fixed #6957) | {
"login": "alexmavr",
"id": 680441,
"node_id": "MDQ6VXNlcjY4MDQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmavr",
"html_url": "https://github.com/alexmavr",
"followers_url": "https://api.github.com/users/alexmav... | [] | closed | false | null | [] | null | 0 | 2024-09-30T17:09:35 | 2024-10-01T22:46:21 | 2024-10-01T22:45:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7050",
"html_url": "https://github.com/ollama/ollama/pull/7050",
"diff_url": "https://github.com/ollama/ollama/pull/7050.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7050.patch",
"merged_at": "2024-10-01T22:45:43"
} | This PR adds the same logic as `StopHandler` before model removal. If the model is not loaded, no error is raised.
I also added a few tests on the server DeleteHandler, and fixed an error formatting bug where `ollama rm` would return a "file not found" error for the manifest file instead of a proper error message if... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7050/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4017 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4017/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4017/comments | https://api.github.com/repos/ollama/ollama/issues/4017/events | https://github.com/ollama/ollama/issues/4017 | 2,267,998,441 | I_kwDOJ0Z1Ps6HLujp | 4,017 | Hardware configuration: 64-core CPU, 1TB memory, use of llama3:8b is slow. why? | {
"login": "zhaohuaxi-Shi",
"id": 58802558,
"node_id": "MDQ6VXNlcjU4ODAyNTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/58802558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaohuaxi-Shi",
"html_url": "https://github.com/zhaohuaxi-Shi",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2024-04-29T02:20:58 | 2024-04-29T09:20:11 | 2024-04-29T09:20:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Using api/generate to pre-load the model is useless. | {
"login": "zhaohuaxi-Shi",
"id": 58802558,
"node_id": "MDQ6VXNlcjU4ODAyNTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/58802558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaohuaxi-Shi",
"html_url": "https://github.com/zhaohuaxi-Shi",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4017/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5632 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5632/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5632/comments | https://api.github.com/repos/ollama/ollama/issues/5632/events | https://github.com/ollama/ollama/pull/5632 | 2,403,447,460 | PR_kwDOJ0Z1Ps51HER6 | 5,632 | cmd: better version info when client/server not equal | {
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/follower... | [] | closed | false | null | [] | null | 1 | 2024-07-11T15:12:44 | 2024-08-07T03:40:26 | 2024-08-07T03:40:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5632",
"html_url": "https://github.com/ollama/ollama/pull/5632",
"diff_url": "https://github.com/ollama/ollama/pull/5632.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5632.patch",
"merged_at": null
} | This PR updates the version info shown when the client version != server version
## example
1. **client version == server version**. Local build with the command `GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=v0.2.1-rc3\"'" go build .`
```shell
$ ./ollama --version
ollama version is v0.2.1-rc3
```
... | {
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5632/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/29 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/29/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/29/comments | https://api.github.com/repos/ollama/ollama/issues/29/events | https://github.com/ollama/ollama/pull/29 | 1,783,023,896 | PR_kwDOJ0Z1Ps5UW3Ts | 29 | Pull model name | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-06-30T18:58:00 | 2023-07-01T00:00:22 | 2023-06-30T18:58:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/29",
"html_url": "https://github.com/ollama/ollama/pull/29",
"diff_url": "https://github.com/ollama/ollama/pull/29.diff",
"patch_url": "https://github.com/ollama/ollama/pull/29.patch",
"merged_at": "2023-06-30T18:58:58"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/29/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/29/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/252 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/252/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/252/comments | https://api.github.com/repos/ollama/ollama/issues/252/events | https://github.com/ollama/ollama/pull/252 | 1,831,619,913 | PR_kwDOJ0Z1Ps5W7YnX | 252 | Add model update to README.md | {
"login": "drhino",
"id": 2538708,
"node_id": "MDQ6VXNlcjI1Mzg3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2538708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drhino",
"html_url": "https://github.com/drhino",
"followers_url": "https://api.github.com/users/drhino/foll... | [] | closed | false | null | [] | null | 1 | 2023-08-01T16:14:59 | 2023-08-01T19:06:34 | 2023-08-01T19:06:33 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/252",
"html_url": "https://github.com/ollama/ollama/pull/252",
"diff_url": "https://github.com/ollama/ollama/pull/252.diff",
"patch_url": "https://github.com/ollama/ollama/pull/252.patch",
"merged_at": "2023-08-01T19:06:33"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/252/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6032/comments | https://api.github.com/repos/ollama/ollama/issues/6032/events | https://github.com/ollama/ollama/pull/6032 | 2,434,130,980 | PR_kwDOJ0Z1Ps52rVt1 | 6,032 | Update to `llama3.1` elsewhere in repo | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-28T21:58:33 | 2024-07-29T02:56:04 | 2024-07-29T02:56:02 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6032",
"html_url": "https://github.com/ollama/ollama/pull/6032",
"diff_url": "https://github.com/ollama/ollama/pull/6032.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6032.patch",
"merged_at": "2024-07-29T02:56:02"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6032/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1115 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1115/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1115/comments | https://api.github.com/repos/ollama/ollama/issues/1115/events | https://github.com/ollama/ollama/pull/1115 | 1,991,455,896 | PR_kwDOJ0Z1Ps5fV7hd | 1,115 | Add ollama.nvim to list of terminal links | {
"login": "huynle",
"id": 2416122,
"node_id": "MDQ6VXNlcjI0MTYxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2416122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huynle",
"html_url": "https://github.com/huynle",
"followers_url": "https://api.github.com/users/huynle/foll... | [] | closed | false | null | [] | null | 0 | 2023-11-13T20:46:12 | 2023-11-13T22:00:18 | 2023-11-13T22:00:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1115",
"html_url": "https://github.com/ollama/ollama/pull/1115",
"diff_url": "https://github.com/ollama/ollama/pull/1115.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1115.patch",
"merged_at": "2023-11-13T22:00:18"
} | `ollama.nvim` is a good plugin, uses the ollama API directly! | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1115/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1115/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3297 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3297/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3297/comments | https://api.github.com/repos/ollama/ollama/issues/3297/events | https://github.com/ollama/ollama/issues/3297 | 2,202,945,064 | I_kwDOJ0Z1Ps6DTkYo | 3,297 | Do not allow upper-case letters in the model path | {
"login": "d3cker",
"id": 2236710,
"node_id": "MDQ6VXNlcjIyMzY3MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2236710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d3cker",
"html_url": "https://github.com/d3cker",
"followers_url": "https://api.github.com/users/d3cker/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA... | closed | false | null | [] | null | 3 | 2024-03-22T17:09:26 | 2024-04-21T00:58:59 | 2024-04-21T00:58:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello community.
I'm using a locally hosted Docker repository on a Synology NAS. Unfortunately I have an issue with one particular model: *wizardcoder-python-13b-v1.0.Q6_K.gguf*
```
$ sha256sum wizardcoder-python-13b-v1.0.Q6_K.gguf
a20f795d17d64e487b6b3446227ba2931bbcb3bc7bb7ebd652b9663efb1f0... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3297/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3297/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1303 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1303/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1303/comments | https://api.github.com/repos/ollama/ollama/issues/1303/events | https://github.com/ollama/ollama/issues/1303 | 2,014,658,584 | I_kwDOJ0Z1Ps54FUAY | 1,303 | Memory required to run differs from expectation | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 1 | 2023-11-28T15:06:37 | 2024-01-08T21:42:03 | 2024-01-08T21:42:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | After discussing internally, it was suggested that as long as we have enough total memory across ram and vram, the model should load. Layers are loaded into main memory then offloaded into vram. So I tried with different memory sizes and number of attached T4 cards with 16-ish GB vram each.
When there is 16 GB RAM ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1303/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5344 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5344/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5344/comments | https://api.github.com/repos/ollama/ollama/issues/5344/events | https://github.com/ollama/ollama/issues/5344 | 2,379,148,503 | I_kwDOJ0Z1Ps6NzuzX | 5,344 | "Mock" model | {
"login": "mweel1",
"id": 77025147,
"node_id": "MDQ6VXNlcjc3MDI1MTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/77025147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mweel1",
"html_url": "https://github.com/mweel1",
"followers_url": "https://api.github.com/users/mweel1/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-06-27T21:55:10 | 2024-07-08T23:10:49 | 2024-07-08T23:10:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
We are developing an application on Ollama, and the performance would be OK for a user; however, when we are developing software, the lag time to generate can be very slow.
Would it be possible to build a "development" model that didn't have a lot of parameters but at least let you build a product at a reasonable p...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5344/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4315 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4315/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4315/comments | https://api.github.com/repos/ollama/ollama/issues/4315/events | https://github.com/ollama/ollama/issues/4315 | 2,290,018,062 | I_kwDOJ0Z1Ps6IfucO | 4,315 | Llama3 model continually prompts itself in an infinite loop. | {
"login": "billwestrup",
"id": 168590261,
"node_id": "U_kgDOCgx7tQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168590261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/billwestrup",
"html_url": "https://github.com/billwestrup",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-10T16:14:53 | 2024-05-10T18:49:13 | 2024-05-10T16:27:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run llama3 and then prompt the model with "hello" I get the following output, which loops continuously unless I stop it with ctrl-c: (see below)
llama run llama3
>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?assistant
I'... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4315/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4162 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4162/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4162/comments | https://api.github.com/repos/ollama/ollama/issues/4162/events | https://github.com/ollama/ollama/pull/4162 | 2,279,323,788 | PR_kwDOJ0Z1Ps5ujrJU | 4,162 | Allocate a large enough kv cache for all parallel requests | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-05T04:43:06 | 2024-05-05T22:59:33 | 2024-05-05T22:59:32 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4162",
"html_url": "https://github.com/ollama/ollama/pull/4162",
"diff_url": "https://github.com/ollama/ollama/pull/4162.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4162.patch",
"merged_at": "2024-05-05T22:59:32"
} | This fixes `opts.NumCtx` being assigned correctly | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4162/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5959 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5959/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5959/comments | https://api.github.com/repos/ollama/ollama/issues/5959/events | https://github.com/ollama/ollama/issues/5959 | 2,430,606,819 | I_kwDOJ0Z1Ps6Q4B3j | 5,959 | Ollama is running but can't acces it from OpenWebUI | {
"login": "ns-bcr",
"id": 134287870,
"node_id": "U_kgDOCAER_g",
"avatar_url": "https://avatars.githubusercontent.com/u/134287870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ns-bcr",
"html_url": "https://github.com/ns-bcr",
"followers_url": "https://api.github.com/users/ns-bcr/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 18 | 2024-07-25T17:19:22 | 2024-10-22T09:52:24 | 2024-07-26T10:21:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hey,
So, I am having a problem: I have Ollama running on Ubuntu Server 24.04 LTS.
It works properly locally, but from my computer I can't access it. From the server, I can run `ollama run llama2` and it works.
I can also try to make this command from the server : `curl localhost:11434` and it wil... | {
"login": "ns-bcr",
"id": 134287870,
"node_id": "U_kgDOCAER_g",
"avatar_url": "https://avatars.githubusercontent.com/u/134287870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ns-bcr",
"html_url": "https://github.com/ns-bcr",
"followers_url": "https://api.github.com/users/ns-bcr/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5959/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5959/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8581 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8581/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8581/comments | https://api.github.com/repos/ollama/ollama/issues/8581/events | https://github.com/ollama/ollama/issues/8581 | 2,811,029,937 | I_kwDOJ0Z1Ps6njOmx | 8,581 | Model Location | {
"login": "JohnnyLeuthard",
"id": 14182453,
"node_id": "MDQ6VXNlcjE0MTgyNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/14182453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnnyLeuthard",
"html_url": "https://github.com/JohnnyLeuthard",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 7 | 2025-01-25T15:08:39 | 2025-01-28T21:33:49 | 2025-01-28T21:33:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Why is it so difficult and unreliable to move the models folder,. I have a Mac mini and it worked for about a day. Nothing has changed and I have even gone through and reinstalled and set it all up again, rebooted, verified permissions. heck I even went as far as grad everyone read/write just to eliminate that. Yet STI... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8581/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5398 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5398/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5398/comments | https://api.github.com/repos/ollama/ollama/issues/5398/events | https://github.com/ollama/ollama/issues/5398 | 2,382,938,437 | I_kwDOJ0Z1Ps6OCMFF | 5,398 | OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS not having an effect on Ubuntu 22.04 LTS | {
"login": "mrmiket64",
"id": 99057519,
"node_id": "U_kgDOBed_bw",
"avatar_url": "https://avatars.githubusercontent.com/u/99057519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrmiket64",
"html_url": "https://github.com/mrmiket64",
"followers_url": "https://api.github.com/users/mrmiket6... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-01T06:17:07 | 2024-07-02T20:30:57 | 2024-07-02T20:30:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Short description: I have set "OLLAMA_NUM_PARALLEL=4" and "OLLAMA_MAX_LOADED_MODELS=2" but I cannot load two models at a time on Ollama 0.1.48
Note 1: The variables were having an effect and working as expected in an older Ollama version, I think it was v0.1.34.
Note 2: I asked a friend to... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5398/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/178 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/178/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/178/comments | https://api.github.com/repos/ollama/ollama/issues/178/events | https://github.com/ollama/ollama/pull/178 | 1,816,813,133 | PR_kwDOJ0Z1Ps5WJmLs | 178 | use gin-contrib/cors middleware | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-22T16:04:28 | 2023-07-22T16:40:15 | 2023-07-22T16:40:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/178",
"html_url": "https://github.com/ollama/ollama/pull/178",
"diff_url": "https://github.com/ollama/ollama/pull/178.diff",
"patch_url": "https://github.com/ollama/ollama/pull/178.patch",
"merged_at": "2023-07-22T16:40:01"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/178/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8474 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8474/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8474/comments | https://api.github.com/repos/ollama/ollama/issues/8474/events | https://github.com/ollama/ollama/issues/8474 | 2,796,572,419 | I_kwDOJ0Z1Ps6msE8D | 8,474 | Model running on 100% GPU runs on CPU | {
"login": "RGFTheCoder",
"id": 24970643,
"node_id": "MDQ6VXNlcjI0OTcwNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/24970643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RGFTheCoder",
"html_url": "https://github.com/RGFTheCoder",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2025-01-18T01:41:16 | 2025-01-19T15:00:25 | 2025-01-18T04:03:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've been using `hf.co/QuantFactory/Qwen2.5-14B-Instruct-GGUF:Q6_K` for a while and recently noticed a slowdown on a recent update to 0.5.4. This isn't fixed on 0.5.7. Ollama reports that the model is running on gpu 100%, but my usage shows that my cpu runs at 50% util, and my gpu barely gets 5%... | {
"login": "RGFTheCoder",
"id": 24970643,
"node_id": "MDQ6VXNlcjI0OTcwNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/24970643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RGFTheCoder",
"html_url": "https://github.com/RGFTheCoder",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8474/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5167 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5167/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5167/comments | https://api.github.com/repos/ollama/ollama/issues/5167/events | https://github.com/ollama/ollama/issues/5167 | 2,364,109,368 | I_kwDOJ0Z1Ps6M6XI4 | 5,167 | Unable to set "encoding_format" and "dimensions" parameters for the "mxbai-embed-large" | {
"login": "netandreus",
"id": 313477,
"node_id": "MDQ6VXNlcjMxMzQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/313477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netandreus",
"html_url": "https://github.com/netandreus",
"followers_url": "https://api.github.com/users/n... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-06-20T10:36:31 | 2024-06-20T10:36:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is great that Ollama has an [mxbai-embed-large](https://ollama.com/library/mxbai-embed-large:latest/blobs/b837481ff855) embedding model. I am trying to use this model with "ubinary" encoding_format and 512 dimensions like this (according to [this blog post](https://www.mixedbread.ai/blog/... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5167/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3273 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3273/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3273/comments | https://api.github.com/repos/ollama/ollama/issues/3273/events | https://github.com/ollama/ollama/pull/3273 | 2,198,148,232 | PR_kwDOJ0Z1Ps5qQUH6 | 3,273 | Add unicode support for windows model paths | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-03-20T17:40:46 | 2024-04-16T21:00:13 | 2024-04-16T21:00:13 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3273",
"html_url": "https://github.com/ollama/ollama/pull/3273",
"diff_url": "https://github.com/ollama/ollama/pull/3273.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3273.patch",
"merged_at": null
} | This should fix various model load and library load errors reported by non-english users.
I've verified happy-path on en-us but need to set up a repro for non-english and/or unicode characters before we should merge this. Once confirmed, I'll tie this to those issues to close them on merge. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3273/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5122/comments | https://api.github.com/repos/ollama/ollama/issues/5122/events | https://github.com/ollama/ollama/pull/5122 | 2,360,657,997 | PR_kwDOJ0Z1Ps5y3vXk | 5,122 | types/model: remove Digest | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-06-18T20:31:51 | 2024-06-19T03:28:12 | 2024-06-19T03:28:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5122",
"html_url": "https://github.com/ollama/ollama/pull/5122",
"diff_url": "https://github.com/ollama/ollama/pull/5122.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5122.patch",
"merged_at": "2024-06-19T03:28:11"
} | The Digest type in its current form is awkward to work with and presents challenges with regard to how it serializes via String using the '-' prefix.
We currently only use this in ollama.com, so we'll move our specific needs around digest parsing and validation there. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5122/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8609 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8609/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8609/comments | https://api.github.com/repos/ollama/ollama/issues/8609/events | https://github.com/ollama/ollama/issues/8609 | 2,813,352,158 | I_kwDOJ0Z1Ps6nsFje | 8,609 | Mistyrious things with local installation. | {
"login": "britus",
"id": 2138234,
"node_id": "MDQ6VXNlcjIxMzgyMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2138234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/britus",
"html_url": "https://github.com/britus",
"followers_url": "https://api.github.com/users/britus/foll... | [] | open | false | null | [] | null | 1 | 2025-01-27T15:37:12 | 2025-01-27T15:46:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello developers.
I found something funny. It's a German dialogue with the AI. My first request was a question about the history of Shenzhen. I was confused by the number format of GDP in US dollars. Then I had the AI calculate 'PI * 45' to figure out the decimal format of the software. Then I made a suggestion how n... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8609/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1234 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1234/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1234/comments | https://api.github.com/repos/ollama/ollama/issues/1234/events | https://github.com/ollama/ollama/issues/1234 | 2,005,420,877 | I_kwDOJ0Z1Ps53iEtN | 1,234 | Support text to speech (TTS) models such as Suno AI Bark | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-11-22T01:54:14 | 2024-11-04T17:43:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is it possible to have a native support for Bark TTS or langchain version of it? | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1234/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1234/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5111 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5111/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5111/comments | https://api.github.com/repos/ollama/ollama/issues/5111/events | https://github.com/ollama/ollama/issues/5111 | 2,359,269,648 | I_kwDOJ0Z1Ps6Mn5kQ | 5,111 | RAM not being fully utilized (?) | {
"login": "rb81",
"id": 48117105,
"node_id": "MDQ6VXNlcjQ4MTE3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/48117105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rb81",
"html_url": "https://github.com/rb81",
"followers_url": "https://api.github.com/users/rb81/followers"... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-06-18T08:28:02 | 2024-06-18T13:04:50 | 2024-06-18T11:27:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've seen others complain about similar things but no solid answer. I'm running Ollama on Ubuntu Server with 64GB of RAM (CPU only). Inference time is better than my MacBook Air M1 with 8GB of RAM, but not as much as I would have expected. When looking at the stats, it seems RAM remains unused d... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5111/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/810 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/810/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/810/comments | https://api.github.com/repos/ollama/ollama/issues/810/events | https://github.com/ollama/ollama/pull/810 | 1,946,152,677 | PR_kwDOJ0Z1Ps5c80BP | 810 | Update install.sh | {
"login": "vieux",
"id": 1032519,
"node_id": "MDQ6VXNlcjEwMzI1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1032519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vieux",
"html_url": "https://github.com/vieux",
"followers_url": "https://api.github.com/users/vieux/follower... | [] | closed | false | null | [] | null | 1 | 2023-10-16T21:44:08 | 2023-10-16T22:51:05 | 2023-10-16T22:50:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/810",
"html_url": "https://github.com/ollama/ollama/pull/810",
"diff_url": "https://github.com/ollama/ollama/pull/810.diff",
"patch_url": "https://github.com/ollama/ollama/pull/810.patch",
"merged_at": "2023-10-16T22:50:57"
} | otherwise, the `ARCH` variable is unbound in `*)` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/810/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7754 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7754/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7754/comments | https://api.github.com/repos/ollama/ollama/issues/7754/events | https://github.com/ollama/ollama/issues/7754 | 2,674,247,413 | I_kwDOJ0Z1Ps6fZcb1 | 7,754 | 300+mb of ram while idle | {
"login": "Omar-000",
"id": 176088407,
"node_id": "U_kgDOCn7lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/176088407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omar-000",
"html_url": "https://github.com/Omar-000",
"followers_url": "https://api.github.com/users/Omar-000/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-11-20T03:49:32 | 2024-11-28T15:55:16 | 2024-11-28T15:55:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

I made sure to stop all running models and i restarted my system also
### OS
Linux
### GPU
AMD, Intel
### CPU
Intel
### Ollama version
0.3.12 | {
"login": "Omar-000",
"id": 176088407,
"node_id": "U_kgDOCn7lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/176088407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omar-000",
"html_url": "https://github.com/Omar-000",
"followers_url": "https://api.github.com/users/Omar-000/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7754/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1995 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1995/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1995/comments | https://api.github.com/repos/ollama/ollama/issues/1995/events | https://github.com/ollama/ollama/issues/1995 | 2,080,887,536 | I_kwDOJ0Z1Ps58B9Lw | 1,995 | no healthy upstream | {
"login": "vesellov",
"id": 5828660,
"node_id": "MDQ6VXNlcjU4Mjg2NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5828660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vesellov",
"html_url": "https://github.com/vesellov",
"followers_url": "https://api.github.com/users/vesel... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-01-14T20:33:59 | 2024-01-14T21:46:11 | 2024-01-14T21:44:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello Team.
Great tool you built. Thank you for that!
I am getting `no healthy upstream` when trying to open the ollama.ai website... probably too many people love Ollama today :heart: | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1995/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1995/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2337 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2337/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2337/comments | https://api.github.com/repos/ollama/ollama/issues/2337/events | https://github.com/ollama/ollama/issues/2337 | 2,116,457,742 | I_kwDOJ0Z1Ps5-JpUO | 2,337 | Support model allenai/OLMo-7B | {
"login": "o-agassizii",
"id": 110026216,
"node_id": "U_kgDOBo7d6A",
"avatar_url": "https://avatars.githubusercontent.com/u/110026216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/o-agassizii",
"html_url": "https://github.com/o-agassizii",
"followers_url": "https://api.github.com/users/... | [
I made sure to stop all running models, and I also restarted my system.
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 23 | 2024-02-03T12:10:31 | 2025-01-15T08:02:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If possible, could support for this model be added to ollama?
https://huggingface.co/allenai/OLMo-7B | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2337/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2337/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8578 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8578/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8578/comments | https://api.github.com/repos/ollama/ollama/issues/8578/events | https://github.com/ollama/ollama/pull/8578 | 2,810,900,387 | PR_kwDOJ0Z1Ps6I-epn | 8,578 | Enhance install.sh with download resumption and improvements | {
"login": "navidhasanitabar",
"id": 35690837,
"node_id": "MDQ6VXNlcjM1NjkwODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35690837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navidhasanitabar",
"html_url": "https://github.com/navidhasanitabar",
"followers_url": "https://... | [] | closed | false | null | [] | null | 0 | 2025-01-25T10:01:23 | 2025-01-25T10:11:42 | 2025-01-25T10:11:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8578",
"html_url": "https://github.com/ollama/ollama/pull/8578",
"diff_url": "https://github.com/ollama/ollama/pull/8578.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8578.patch",
"merged_at": null
} | Added support for resuming interrupted downloads. | {
"login": "navidhasanitabar",
"id": 35690837,
"node_id": "MDQ6VXNlcjM1NjkwODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35690837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navidhasanitabar",
"html_url": "https://github.com/navidhasanitabar",
"followers_url": "https://... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8578/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1370 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1370/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1370/comments | https://api.github.com/repos/ollama/ollama/issues/1370/events | https://github.com/ollama/ollama/issues/1370 | 2,023,050,319 | I_kwDOJ0Z1Ps54lUxP | 1,370 | Add support for NeuralHermes-2.5-Mistral-7B | {
"login": "Aspie96",
"id": 13873909,
"node_id": "MDQ6VXNlcjEzODczOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13873909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aspie96",
"html_url": "https://github.com/Aspie96",
"followers_url": "https://api.github.com/users/Aspie9... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2023-12-04T05:19:31 | 2024-09-04T03:24:34 | 2024-09-04T03:24:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Yes, it's yet another Mistral-based chatbot.
Would you consider adding support for [NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)?
Thank you very much! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1370/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1370/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4178 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4178/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4178/comments | https://api.github.com/repos/ollama/ollama/issues/4178/events | https://github.com/ollama/ollama/issues/4178 | 2,279,705,726 | I_kwDOJ0Z1Ps6H4Yx- | 4,178 | pull starcoder2:7b-fp16 results in error EOF | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-05T20:03:07 | 2024-05-06T18:33:49 | 2024-05-06T18:33:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama pull starcoder2:7b-fp16` when pulling manifest outputs `Error: EOF`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4178/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7930 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7930/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7930/comments | https://api.github.com/repos/ollama/ollama/issues/7930/events | https://github.com/ollama/ollama/issues/7930 | 2,717,321,118 | I_kwDOJ0Z1Ps6h9wee | 7,930 | failed to decode batch: could not find a kv cache slot | {
"login": "wangpf09",
"id": 39894166,
"node_id": "MDQ6VXNlcjM5ODk0MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/39894166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangpf09",
"html_url": "https://github.com/wangpf09",
"followers_url": "https://api.github.com/users/wan... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-12-04T10:50:35 | 2024-12-05T12:10:57 | 2024-12-04T11:38:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I ran Ollama 0.4.6, 0.4.7, and a build from source; all have this error.
I used an Apple M2.
```
time=2024-12-04T18:45:31.343+08:00 level=WARN source=runner.go:129 msg="truncating input prompt" limit=2048 prompt=2052 keep=5 new=2048
panic: failed to decode batch: could not find a kv cache slot... | {
"login": "wangpf09",
"id": 39894166,
"node_id": "MDQ6VXNlcjM5ODk0MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/39894166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangpf09",
"html_url": "https://github.com/wangpf09",
"followers_url": "https://api.github.com/users/wan... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7930/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3548 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3548/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3548/comments | https://api.github.com/repos/ollama/ollama/issues/3548/events | https://github.com/ollama/ollama/pull/3548 | 2,232,603,496 | PR_kwDOJ0Z1Ps5sFh2S | 3,548 | build.go: introduce a friendlier way to build Ollama | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-04-09T04:58:39 | 2024-04-19T20:31:25 | 2024-04-09T21:18:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3548",
"html_url": "https://github.com/ollama/ollama/pull/3548",
"diff_url": "https://github.com/ollama/ollama/pull/3548.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3548.patch",
"merged_at": "2024-04-09T21:18:47"
} | This commit introduces a more friendly way to build Ollama dependencies and the binary without abusing `go generate` and removing the unnecessary extra steps it brings with it.
This script also provides nicer feedback to the user about what is happening during the build process.
At the end, it prints a helpful me... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3548/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3621 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3621/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3621/comments | https://api.github.com/repos/ollama/ollama/issues/3621/events | https://github.com/ollama/ollama/pull/3621 | 2,241,141,554 | PR_kwDOJ0Z1Ps5si7Ea | 3,621 | Update README.md with StreamDeploy | {
"login": "jl-codes",
"id": 19557526,
"node_id": "MDQ6VXNlcjE5NTU3NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/19557526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jl-codes",
"html_url": "https://github.com/jl-codes",
"followers_url": "https://api.github.com/users/jl-... | [] | closed | false | null | [] | null | 3 | 2024-04-13T00:16:41 | 2024-05-06T18:14:42 | 2024-05-06T18:14:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3621",
"html_url": "https://github.com/ollama/ollama/pull/3621",
"diff_url": "https://github.com/ollama/ollama/pull/3621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3621.patch",
"merged_at": "2024-05-06T18:14:41"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3621/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4500 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4500/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4500/comments | https://api.github.com/repos/ollama/ollama/issues/4500/events | https://github.com/ollama/ollama/issues/4500 | 2,302,666,704 | I_kwDOJ0Z1Ps6JP-fQ | 4,500 | High CPU Usage and Model Stoppage Issue in Ollama on Linux CentOS7 Without GPU | {
"login": "sirfuwh",
"id": 58595497,
"node_id": "MDQ6VXNlcjU4NTk1NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/58595497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirfuwh",
"html_url": "https://github.com/sirfuwh",
"followers_url": "https://api.github.com/users/sirfuw... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-17T12:48:45 | 2024-05-17T15:29:53 | 2024-05-17T15:29:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm using Ollama as the framework for large AI models, with open-webUI or anythingLLM as the frontend.
My machine is running Linux CentOS7 with 32GB of memory and a 24-core CPU, but no GPU.
When running models like phi3 or others in Ollama, the CPU usage is around 1200% (htop shows 12 cores ... | {
"login": "sirfuwh",
"id": 58595497,
"node_id": "MDQ6VXNlcjU4NTk1NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/58595497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirfuwh",
"html_url": "https://github.com/sirfuwh",
"followers_url": "https://api.github.com/users/sirfuw... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4500/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/410 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/410/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/410/comments | https://api.github.com/repos/ollama/ollama/issues/410/events | https://github.com/ollama/ollama/issues/410 | 1,867,133,843 | I_kwDOJ0Z1Ps5vSjOT | 410 | Code llama 34b instruct? | {
"login": "petergeneric",
"id": 870655,
"node_id": "MDQ6VXNlcjg3MDY1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/870655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petergeneric",
"html_url": "https://github.com/petergeneric",
"followers_url": "https://api.github.com/u... | [] | closed | false | null | [] | null | 1 | 2023-08-25T14:13:20 | 2023-08-25T19:09:14 | 2023-08-25T19:09:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any chance of getting the larger 13b and 34b codellama models available? The 7b models are nice but a bit limited | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/410/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/410/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/554 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/554/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/554/comments | https://api.github.com/repos/ollama/ollama/issues/554/events | https://github.com/ollama/ollama/pull/554 | 1,903,358,659 | PR_kwDOJ0Z1Ps5asq9K | 554 | fix path for windows | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-19T16:36:53 | 2023-09-19T16:42:13 | 2023-09-19T16:42:12 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/554",
"html_url": "https://github.com/ollama/ollama/pull/554",
"diff_url": "https://github.com/ollama/ollama/pull/554.diff",
"patch_url": "https://github.com/ollama/ollama/pull/554.patch",
"merged_at": "2023-09-19T16:42:12"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/554/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7080 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7080/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7080/comments | https://api.github.com/repos/ollama/ollama/issues/7080/events | https://github.com/ollama/ollama/issues/7080 | 2,562,037,576 | I_kwDOJ0Z1Ps6YtZdI | 7,080 | Support for NVLM | {
"login": "mitar",
"id": 585279,
"node_id": "MDQ6VXNlcjU4NTI3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/585279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitar",
"html_url": "https://github.com/mitar",
"followers_url": "https://api.github.com/users/mitar/followers"... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 7 | 2024-10-02T15:40:33 | 2024-10-09T19:23:40 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | NVLM is model from Nvidia: https://nvlm-project.github.io/ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7080/reactions",
"total_count": 33,
"+1": 33,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7080/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5391/comments | https://api.github.com/repos/ollama/ollama/issues/5391/events | https://github.com/ollama/ollama/issues/5391 | 2,382,244,315 | I_kwDOJ0Z1Ps6N_inb | 5,391 | How do I find LLMs in the GitHub repository? | {
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-06-30T12:59:04 | 2024-07-02T13:43:28 | 2024-07-02T13:43:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As the title describes, how do I find LLMs in the GitHub repository? Does the GitHub repository have a specific location for storing LLMs? | {
"login": "qzc438",
"id": 61488260,
"node_id": "MDQ6VXNlcjYxNDg4MjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qzc438",
"html_url": "https://github.com/qzc438",
"followers_url": "https://api.github.com/users/qzc438/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1242 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1242/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1242/comments | https://api.github.com/repos/ollama/ollama/issues/1242/events | https://github.com/ollama/ollama/issues/1242 | 2,006,632,608 | I_kwDOJ0Z1Ps53msig | 1,242 | Mac ollama install and run results in template error | {
"login": "mkontsek",
"id": 2892242,
"node_id": "MDQ6VXNlcjI4OTIyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkontsek",
"html_url": "https://github.com/mkontsek",
"followers_url": "https://api.github.com/users/mkont... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 2 | 2023-11-22T15:49:13 | 2023-11-22T17:14:13 | 2023-11-22T17:14:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | OS: macOS 14.1.1 (23B81)
RAM: 32GB
Steps to reproduce:
1. Download https://ollama.ai/download/Ollama-darwin.zip
2. Open zip
3. Move app to Applications
4. Install model from GUI prompt
5. Open terminal and run `ollama run llama2`
Observed:
Error: template: :2:11: executing "" at <.Context>: can't evaluate ... | {
"login": "mkontsek",
"id": 2892242,
"node_id": "MDQ6VXNlcjI4OTIyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkontsek",
"html_url": "https://github.com/mkontsek",
"followers_url": "https://api.github.com/users/mkont... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1242/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2533 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2533/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2533/comments | https://api.github.com/repos/ollama/ollama/issues/2533/events | https://github.com/ollama/ollama/issues/2533 | 2,137,783,483 | I_kwDOJ0Z1Ps5_a_y7 | 2,533 | Setting Query vector Size | {
"login": "stealthier-ai",
"id": 99160607,
"node_id": "U_kgDOBekSHw",
"avatar_url": "https://avatars.githubusercontent.com/u/99160607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stealthier-ai",
"html_url": "https://github.com/stealthier-ai",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | open | false | null | [] | null | 0 | 2024-02-16T02:55:48 | 2024-03-11T19:13:37 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I used Ollama Embeddings through langchain with one of the models to embed a large number of documents. The LLM I am using is a multi-lingual model and has already been tested on a significant document set solely in English. When I query the vector store through LanceDB, I receive the error "ValueError: Query vector siz... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2533/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1096 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1096/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1096/comments | https://api.github.com/repos/ollama/ollama/issues/1096/events | https://github.com/ollama/ollama/issues/1096 | 1,989,239,328 | I_kwDOJ0Z1Ps52kWIg | 1,096 | how to training my local data use ollama on k8s pod | {
"login": "xinmans",
"id": 2713008,
"node_id": "MDQ6VXNlcjI3MTMwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2713008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinmans",
"html_url": "https://github.com/xinmans",
"followers_url": "https://api.github.com/users/xinmans/... | [] | closed | false | null | [] | null | 1 | 2023-11-12T04:54:52 | 2023-12-04T23:27:29 | 2023-12-04T23:27:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1096/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/230 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/230/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/230/comments | https://api.github.com/repos/ollama/ollama/issues/230/events | https://github.com/ollama/ollama/pull/230 | 1,825,023,018 | PR_kwDOJ0Z1Ps5WlJP8 | 230 | update model file docs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-07-27T19:16:32 | 2023-07-28T14:33:54 | 2023-07-28T14:33:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/230",
"html_url": "https://github.com/ollama/ollama/pull/230",
"diff_url": "https://github.com/ollama/ollama/pull/230.diff",
"patch_url": "https://github.com/ollama/ollama/pull/230.patch",
"merged_at": "2023-07-28T14:33:53"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/230/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2757 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2757/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2757/comments | https://api.github.com/repos/ollama/ollama/issues/2757/events | https://github.com/ollama/ollama/issues/2757 | 2,153,200,472 | I_kwDOJ0Z1Ps6AVztY | 2,757 | openAI TTS cannot read all replies | {
"login": "samqin123",
"id": 103937568,
"node_id": "U_kgDOBjH2IA",
"avatar_url": "https://avatars.githubusercontent.com/u/103937568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samqin123",
"html_url": "https://github.com/samqin123",
"followers_url": "https://api.github.com/users/samqin... | [] | closed | false | null | [] | null | 1 | 2024-02-26T03:56:55 | 2024-03-12T04:32:50 | 2024-03-12T04:32:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | [brief] Mac M1 /Docker-compose deployed
[description] after switching on the TTS configuration and choosing the OpenAI TTS engine, replies are read aloud automatically when chat replies appear, but only the first part of the reply, around 10-15 seconds; the rest of the reply won't be pronounced by TTS.
[expectation] TTS shall read all ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2757/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/549 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/549/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/549/comments | https://api.github.com/repos/ollama/ollama/issues/549/events | https://github.com/ollama/ollama/issues/549 | 1,900,070,108 | I_kwDOJ0Z1Ps5xQMTc | 549 | Models sometimes prompt themselves | {
"login": "txstc55",
"id": 13168188,
"node_id": "MDQ6VXNlcjEzMTY4MTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13168188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/txstc55",
"html_url": "https://github.com/txstc55",
"followers_url": "https://api.github.com/users/txstc5... | [] | closed | false | null | [] | null | 1 | 2023-09-18T02:46:45 | 2023-09-18T16:05:11 | 2023-09-18T16:05:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm using an uncensored model; the issue happened with uncensored-latest, uncensored 70b, and every other uncensored model. Sometimes when I prompt the model, after it makes a response, it prompts itself with something like:
```
### Input:
something that generated by the model itself
### Response:
something that is... | {
"login": "txstc55",
"id": 13168188,
"node_id": "MDQ6VXNlcjEzMTY4MTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13168188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/txstc55",
"html_url": "https://github.com/txstc55",
"followers_url": "https://api.github.com/users/txstc5... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/549/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6298 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6298/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6298/comments | https://api.github.com/repos/ollama/ollama/issues/6298/events | https://github.com/ollama/ollama/issues/6298 | 2,459,012,049 | I_kwDOJ0Z1Ps6SkYvR | 6,298 | Install Ollama with Winget on Windows | {
"login": "nikiluk",
"id": 6605974,
"node_id": "MDQ6VXNlcjY2MDU5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6605974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikiluk",
"html_url": "https://github.com/nikiluk",
"followers_url": "https://api.github.com/users/nikiluk/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | open | false | null | [] | null | 0 | 2024-08-10T09:47:31 | 2024-09-05T19:53:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Installing Ollama with winget works perfectly; however, it is not documented in the README.md

| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6298/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6298/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2350 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2350/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2350/comments | https://api.github.com/repos/ollama/ollama/issues/2350/events | https://github.com/ollama/ollama/issues/2350 | 2,117,191,170 | I_kwDOJ0Z1Ps5-McYC | 2,350 | Unable to access ollama server from WSL | {
"login": "TeamDman",
"id": 9356891,
"node_id": "MDQ6VXNlcjkzNTY4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9356891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TeamDman",
"html_url": "https://github.com/TeamDman",
"followers_url": "https://api.github.com/users/TeamD... | [] | closed | false | null | [] | null | 1 | 2024-02-04T16:46:56 | 2024-02-04T16:47:24 | 2024-02-04T16:47:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Running `ollama serve` in WSL should let me visit [http://127.0.0.1:11434/](http://127.0.0.1:11434/) in my Windows browser.
This worked the other day; now it doesn't.
Using netcat and `python3 -m http.server -b 192.168.1.178 8000` to test other apps/ports, it looks like only Ollama is refusing to participate.
Tr... | {
"login": "TeamDman",
"id": 9356891,
"node_id": "MDQ6VXNlcjkzNTY4OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9356891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TeamDman",
"html_url": "https://github.com/TeamDman",
"followers_url": "https://api.github.com/users/TeamD... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2350/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3951 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3951/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3951/comments | https://api.github.com/repos/ollama/ollama/issues/3951/events | https://github.com/ollama/ollama/pull/3951 | 2,266,319,171 | PR_kwDOJ0Z1Ps5t3yY5 | 3,951 | check file type before zip | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-26T18:40:12 | 2024-04-26T21:51:24 | 2024-04-26T21:51:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3951",
"html_url": "https://github.com/ollama/ollama/pull/3951",
"diff_url": "https://github.com/ollama/ollama/pull/3951.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3951.patch",
"merged_at": "2024-04-26T21:51:23"
} | also include all json | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3951/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/313 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/313/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/313/comments | https://api.github.com/repos/ollama/ollama/issues/313/events | https://github.com/ollama/ollama/pull/313 | 1,843,977,056 | PR_kwDOJ0Z1Ps5Xk0NL | 313 | fix embeddings invalid values | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-08-09T20:37:52 | 2023-08-10T14:17:02 | 2023-08-10T14:17:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/313",
"html_url": "https://github.com/ollama/ollama/pull/313",
"diff_url": "https://github.com/ollama/ollama/pull/313.diff",
"patch_url": "https://github.com/ollama/ollama/pull/313.patch",
"merged_at": "2023-08-10T14:17:01"
} | Embeddings were occasionally returning invalid values which meant we needed to reload and retry. This fix removes the cache token count which was causing this issue, and improves results. This also matches the llama.cpp example more closely.
It also adds the `unsafe.Slice` parsing that Mike suggested in my previous P... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/313/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6491 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6491/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6491/comments | https://api.github.com/repos/ollama/ollama/issues/6491/events | https://github.com/ollama/ollama/issues/6491 | 2,484,757,549 | I_kwDOJ0Z1Ps6UGmQt | 6,491 | Jamba 1.5 Model | {
"login": "sanjibnarzary",
"id": 1001052,
"node_id": "MDQ6VXNlcjEwMDEwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1001052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanjibnarzary",
"html_url": "https://github.com/sanjibnarzary",
"followers_url": "https://api.github.... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-08-24T17:56:39 | 2024-10-01T02:47:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Jamba 1.5 Open Model Family: The Most Powerful and Efficient Long Context Models.
**Features**
**Long context handling**: With a 256K effective context window, the longest in the market, Jamba 1.5 models can improve the quality of key enterprise applications, such as lengthy document summarization and analysis, a... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6491/reactions",
"total_count": 21,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6491/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5405 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5405/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5405/comments | https://api.github.com/repos/ollama/ollama/issues/5405/events | https://github.com/ollama/ollama/pull/5405 | 2,383,473,189 | PR_kwDOJ0Z1Ps50DQdU | 5,405 | server/routers.go: Fix checkNameExists | {
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 3 | 2024-07-01T10:34:21 | 2024-07-31T01:25:08 | 2024-07-30T23:29:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5405",
"html_url": "https://github.com/ollama/ollama/pull/5405",
"diff_url": "https://github.com/ollama/ollama/pull/5405.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5405.patch",
"merged_at": null
} | When copying a model to an already-existing name, it is supposed to error out, but it succeeds.
The old `checkNameExists` only reported an existing model name when the two names were different but matched after both were converted to upper case; in other words, it treated `TEST` and `test` as the same name, but not `test` and `test`.
```
... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5405/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2766 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2766/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2766/comments | https://api.github.com/repos/ollama/ollama/issues/2766/events | https://github.com/ollama/ollama/issues/2766 | 2,154,640,079 | I_kwDOJ0Z1Ps6AbTLP | 2,766 | Some issues on Windows | {
"login": "vrubzov1957",
"id": 54937209,
"node_id": "MDQ6VXNlcjU0OTM3MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/54937209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrubzov1957",
"html_url": "https://github.com/vrubzov1957",
"followers_url": "https://api.github.com/... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-26T16:49:23 | 2024-02-27T16:39:59 | 2024-02-27T16:39:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Guys, I have some issues with Ollama on Windows (11 + WSL2).
Ollama version: downloaded 24.02.2024 from the official site, Windows version.
1. Ollama models run on CPU, not on GPU (Nvidia 1080 11G). Once upon a time it somehow ran on the video card - but the pattern of how and when it works could not be found out,... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2766/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2595 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2595/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2595/comments | https://api.github.com/repos/ollama/ollama/issues/2595/events | https://github.com/ollama/ollama/issues/2595 | 2,142,735,480 | I_kwDOJ0Z1Ps5_t4x4 | 2,595 | Conversation context no longer taken into account? | {
"login": "dictoon",
"id": 321290,
"node_id": "MDQ6VXNlcjMyMTI5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/321290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dictoon",
"html_url": "https://github.com/dictoon",
"followers_url": "https://api.github.com/users/dictoon/fo... | [] | closed | false | null | [] | null | 14 | 2024-02-19T16:19:03 | 2024-08-28T19:21:37 | 2024-02-20T03:39:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm running ollama version 0.1.25 on macOS.
It looks like the LLM is no longer taking earlier messages into account, even though they definitely fit in the context window of the models I'm using.
I'm having a conversation like this:
```
- User: Here is some text, please summarize it.
- Assistant: <outputs... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2595/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3122/comments | https://api.github.com/repos/ollama/ollama/issues/3122/events | https://github.com/ollama/ollama/pull/3122 | 2,184,686,312 | PR_kwDOJ0Z1Ps5piuel | 3,122 | Better tmpdir cleanup | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-03-13T18:53:51 | 2024-03-20T15:28:07 | 2024-03-20T15:28:03 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3122",
"html_url": "https://github.com/ollama/ollama/pull/3122",
"diff_url": "https://github.com/ollama/ollama/pull/3122.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3122.patch",
"merged_at": "2024-03-20T15:28:03"
} | If expanding the runners fails, don't leave a corrupt/incomplete payloads dir behind. We now write a pid file out to the tmpdir, which allows us to scan for stale tmpdirs and remove them as long as there isn't still a process running.
Fixes #3051
Fixes #2472
Fixes #2658 (indirectly)
Verified on Mac, Linux and Windo... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3122/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8399 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8399/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8399/comments | https://api.github.com/repos/ollama/ollama/issues/8399/events | https://github.com/ollama/ollama/issues/8399 | 2,783,413,053 | I_kwDOJ0Z1Ps6l54M9 | 8,399 | unable to use nvidia GPU & how to fix | {
"login": "belmont",
"id": 11472085,
"node_id": "MDQ6VXNlcjExNDcyMDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/11472085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/belmont",
"html_url": "https://github.com/belmont",
"followers_url": "https://api.github.com/users/belmon... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2025-01-13T09:17:24 | 2025-01-15T23:38:22 | 2025-01-15T23:38:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been struggling for 3 hours this morning to make nvidia work with a fresh ollama install. Whatever model I tried, it did not use the nvidia H100 GPUs, even though systemctl status ollama nicely shows the GPUs. For this you need to install the nvidia toolkit. I have picked the latest of driver, t... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8399/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5310 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5310/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5310/comments | https://api.github.com/repos/ollama/ollama/issues/5310/events | https://github.com/ollama/ollama/pull/5310 | 2,376,280,174 | PR_kwDOJ0Z1Ps5zr4Tc | 5,310 | Update OpenAI Compatibility Docs with Image Chat Support | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-06-26T21:04:55 | 2024-08-02T20:05:58 | 2024-08-02T20:05:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5310",
"html_url": "https://github.com/ollama/ollama/pull/5310",
"diff_url": "https://github.com/ollama/ollama/pull/5310.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5310.patch",
"merged_at": "2024-08-02T20:05:57"
} | Referencing #5208 | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5310/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6880 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6880/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6880/comments | https://api.github.com/repos/ollama/ollama/issues/6880/events | https://github.com/ollama/ollama/issues/6880 | 2,537,040,644 | I_kwDOJ0Z1Ps6XOCsE | 6,880 | Feature Request: Support logprobs before GTA 6 comes out | {
"login": "iurimatias",
"id": 176720,
"node_id": "MDQ6VXNlcjE3NjcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/176720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iurimatias",
"html_url": "https://github.com/iurimatias",
"followers_url": "https://api.github.com/users/i... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 5 | 2024-09-19T18:02:17 | 2025-01-07T19:25:00 | 2025-01-07T19:25:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/ollama/ollama/pull/1640#issuecomment-2352584858 AND https://github.com/ollama/ollama/issues/2415#issuecomment-2361193021 | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6880/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 6,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6880/timeline | null | completed | false |