| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/4767 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4767/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4767/comments | https://api.github.com/repos/ollama/ollama/issues/4767/events | https://github.com/ollama/ollama/issues/4767 | 2,329,186,910 | I_kwDOJ0Z1Ps6K1JJe | 4,767 | Model response corruption and leaking data between sessions. | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 8 | 2024-06-01T14:58:55 | 2024-06-25T13:46:38 | 2024-06-25T13:46:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On `main`, running a model (specifically `llama3:8b-instruct-fp16`) will begin to generate gibberish. It will also leak state between sessions. Swapping out the models will reset the issue, but it will quickly return after a few runs against the model.
This issue does not happen with 0.1.38... | {
"login": "MarkWard0110",
"id": 90335263,
"node_id": "MDQ6VXNlcjkwMzM1MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/90335263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkWard0110",
"html_url": "https://github.com/MarkWard0110",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4767/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/163 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/163/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/163/comments | https://api.github.com/repos/ollama/ollama/issues/163/events | https://github.com/ollama/ollama/issues/163 | 1,815,984,944 | I_kwDOJ0Z1Ps5sPbsw | 163 | How to call ollama to use downloaded weights? | {
"login": "smuskal",
"id": 4172879,
"node_id": "MDQ6VXNlcjQxNzI4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4172879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smuskal",
"html_url": "https://github.com/smuskal",
"followers_url": "https://api.github.com/users/smuskal/... | [] | closed | false | null | [] | null | 2 | 2023-07-21T15:17:44 | 2023-07-21T18:51:25 | 2023-07-21T17:09:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Meta enabled download of the models - https://github.com/facebookresearch/llama/blob/main/README.md How can ollama be directed to use downloaded model/weights? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/163/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/955 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/955/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/955/comments | https://api.github.com/repos/ollama/ollama/issues/955/events | https://github.com/ollama/ollama/pull/955 | 1,970,820,193 | PR_kwDOJ0Z1Ps5eQHrH | 955 | docs: add examples using bash to compare models | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 1 | 2023-10-31T16:14:16 | 2023-11-10T14:59:33 | 2023-11-10T14:59:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/955",
"html_url": "https://github.com/ollama/ollama/pull/955",
"diff_url": "https://github.com/ollama/ollama/pull/955.diff",
"patch_url": "https://github.com/ollama/ollama/pull/955.patch",
"merged_at": "2023-11-10T14:59:32"
} | This includes two bash scripts. The first will run a bunch of questions in sourcequestions against llama2. The second lets you pick 4 models on your system and run the same questions against all of them, making it easier to compare. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/955/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7451 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7451/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7451/comments | https://api.github.com/repos/ollama/ollama/issues/7451/events | https://github.com/ollama/ollama/pull/7451 | 2,627,364,621 | PR_kwDOJ0Z1Ps6AjJM4 | 7,451 | Added ollama-haskell library | {
"login": "tusharad",
"id": 40828350,
"node_id": "MDQ6VXNlcjQwODI4MzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/40828350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharad",
"html_url": "https://github.com/tusharad",
"followers_url": "https://api.github.com/users/tus... | [] | closed | false | null | [] | null | 1 | 2024-10-31T16:56:45 | 2024-11-17T23:18:05 | 2024-11-17T23:18:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7451",
"html_url": "https://github.com/ollama/ollama/pull/7451",
"diff_url": "https://github.com/ollama/ollama/pull/7451.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7451.patch",
"merged_at": "2024-11-17T23:18:04"
} | This PR adds a new Haskell library binding, [ollama-haskell](https://github.com/tusharad/ollama-haskell), to the "Libraries" section of the Ollama README. This library provides a straightforward way for Haskell developers to interface with the Ollama API.
@jessegross | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7451/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7451/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4211 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4211/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4211/comments | https://api.github.com/repos/ollama/ollama/issues/4211/events | https://github.com/ollama/ollama/issues/4211 | 2,281,899,836 | I_kwDOJ0Z1Ps6IAwc8 | 4,211 | lightblue/suzume-llama-3-8B-multilingual | {
"login": "plitc",
"id": 6014760,
"node_id": "MDQ6VXNlcjYwMTQ3NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6014760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plitc",
"html_url": "https://github.com/plitc",
"followers_url": "https://api.github.com/users/plitc/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-05-06T22:30:46 | 2024-05-09T21:13:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Dear ollama team,
please add support for llama3 based models like: https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual
thanks in advance and best regards
Daniel Plominski | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4211/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3312 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3312/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3312/comments | https://api.github.com/repos/ollama/ollama/issues/3312/events | https://github.com/ollama/ollama/issues/3312 | 2,203,929,872 | I_kwDOJ0Z1Ps6DXU0Q | 3,312 | More choices with in Windows Installation | {
"login": "nightboysfm",
"id": 66571988,
"node_id": "MDQ6VXNlcjY2NTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/66571988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nightboysfm",
"html_url": "https://github.com/nightboysfm",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2024-03-23T15:37:11 | 2024-03-24T19:33:40 | 2024-03-24T19:33:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I want to have some controls on how and where the software is installing itself.
I don't think it's good that it installs itself in your user profile, which is bad for numerous reasons:
- You'll get insanely huge AD roaming profiles....
- Same FSLogix profi... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3312/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3312/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7314 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7314/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7314/comments | https://api.github.com/repos/ollama/ollama/issues/7314/events | https://github.com/ollama/ollama/issues/7314 | 2,605,071,525 | I_kwDOJ0Z1Ps6bRjyl | 7,314 | ollama run llamaX.X The directory where the model is downloaded and stored | {
"login": "Intoseaworks",
"id": 7384487,
"node_id": "MDQ6VXNlcjczODQ0ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7384487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Intoseaworks",
"html_url": "https://github.com/Intoseaworks",
"followers_url": "https://api.github.com... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-10-22T10:49:26 | 2024-10-22T17:37:55 | 2024-10-22T17:37:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I run ollama run llama3.1 on my local terminal. After downloading, where is the model stored? Why can't I find it? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7314/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2375 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2375/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2375/comments | https://api.github.com/repos/ollama/ollama/issues/2375/events | https://github.com/ollama/ollama/issues/2375 | 2,121,493,434 | I_kwDOJ0Z1Ps5-c2u6 | 2,375 | install process forces to copy `ollama` executable into only specific folders | {
"login": "eirnym",
"id": 485399,
"node_id": "MDQ6VXNlcjQ4NTM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/485399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eirnym",
"html_url": "https://github.com/eirnym",
"followers_url": "https://api.github.com/users/eirnym/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677279472,
"node_id": ... | open | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 3 | 2024-02-06T19:09:43 | 2024-05-02T21:40:04 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I understand why the `ollama` executable might be added to `$PATH`. For me it's a convenience that the executable will be found in a folder on the user's `$PATH`. The downsides of the process are that the locations are predefined; they can't be changed, nor can the step be skipped entirely (at the user's conscious choice).
Could you please add this... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2375/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2375/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4029 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4029/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4029/comments | https://api.github.com/repos/ollama/ollama/issues/4029/events | https://github.com/ollama/ollama/issues/4029 | 2,269,357,039 | I_kwDOJ0Z1Ps6HQ6Pv | 4,029 | Error: unable to push make sure this namespace exists | {
"login": "katmai",
"id": 1245160,
"node_id": "MDQ6VXNlcjEyNDUxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1245160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katmai",
"html_url": "https://github.com/katmai",
"followers_url": "https://api.github.com/users/katmai/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-29T15:35:15 | 2024-09-09T15:28:49 | 2024-04-29T17:38:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I was trying to push my local model, and I've run into this error.
I did create the namespace in the interface and added the SSH key in settings.
```
atlas@chia01:~$ make buildall
Rebuilding all...
2024/04/29 17:34:13 parser.go:73: WARN Unknown command: .
2024/04/29 17:34:13 parser... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4029/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/856 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/856/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/856/comments | https://api.github.com/repos/ollama/ollama/issues/856/events | https://github.com/ollama/ollama/issues/856 | 1,954,671,930 | I_kwDOJ0Z1Ps50ge06 | 856 | Implement Function call support for LLama2 models | {
"login": "AlexandrePoisson",
"id": 13329302,
"node_id": "MDQ6VXNlcjEzMzI5MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/13329302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexandrePoisson",
"html_url": "https://github.com/AlexandrePoisson",
"followers_url": "https://... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6100196012,
"node_id": ... | closed | false | null | [] | null | 2 | 2023-10-20T16:40:30 | 2023-12-04T20:17:10 | 2023-12-04T20:17:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Implement Function call support
I want to use langchain's capability to [create_tagging_chain](https://python.langchain.com/docs/use_cases/tagging) with Ollama to constrain the output to a specific JSON format.
The problem is that it works only for models which support the OpenAI function calling API. See [related is...
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/856/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2103 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2103/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2103/comments | https://api.github.com/repos/ollama/ollama/issues/2103/events | https://github.com/ollama/ollama/issues/2103 | 2,091,730,711 | I_kwDOJ0Z1Ps58rUcX | 2,103 | GPU on Fedora 39 | {
"login": "schwarz-christoph",
"id": 38505168,
"node_id": "MDQ6VXNlcjM4NTA1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/38505168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schwarz-christoph",
"html_url": "https://github.com/schwarz-christoph",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-01-20T00:07:16 | 2024-04-12T21:48:03 | 2024-04-12T21:48:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | After I installed ollama on my Fedora Workstation 39, the install script automatically installed the NVIDIA drivers for my GPU, but after reboot the graphics were broken and other drivers like Wi-Fi were not loaded. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2103/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8400 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8400/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8400/comments | https://api.github.com/repos/ollama/ollama/issues/8400/events | https://github.com/ollama/ollama/issues/8400 | 2,783,598,037 | I_kwDOJ0Z1Ps6l6lXV | 8,400 | Model pulling behind proxy index out of range | {
"login": "xyzBart",
"id": 28602402,
"node_id": "MDQ6VXNlcjI4NjAyNDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/28602402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyzBart",
"html_url": "https://github.com/xyzBart",
"followers_url": "https://api.github.com/users/xyzBar... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-13T10:40:21 | 2025-01-13T16:58:10 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I'm getting the following error when trying to go through corpo proxy when downloading models with ollama pull:
```plaintext
panic: runtime error: index out of range [0] with length 0
goroutine 7 [running]:
github.com/ollama/ollama/server.(*blobDownload).Prepare(0xc0001cf1f0... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8400/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4833 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4833/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4833/comments | https://api.github.com/repos/ollama/ollama/issues/4833/events | https://github.com/ollama/ollama/pull/4833 | 2,335,585,537 | PR_kwDOJ0Z1Ps5xiZhd | 4,833 | Added E-Matrix | {
"login": "elliotwellick",
"id": 78291665,
"node_id": "MDQ6VXNlcjc4MjkxNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/78291665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elliotwellick",
"html_url": "https://github.com/elliotwellick",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2024-06-05T10:59:26 | 2024-06-14T17:04:26 | 2024-06-14T17:04:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4833",
"html_url": "https://github.com/ollama/ollama/pull/4833",
"diff_url": "https://github.com/ollama/ollama/pull/4833.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4833.patch",
"merged_at": null
} | Ollama based AI chatbot for Matrix | {
"login": "elliotwellick",
"id": 78291665,
"node_id": "MDQ6VXNlcjc4MjkxNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/78291665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elliotwellick",
"html_url": "https://github.com/elliotwellick",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4833/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2711 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2711/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2711/comments | https://api.github.com/repos/ollama/ollama/issues/2711/events | https://github.com/ollama/ollama/issues/2711 | 2,151,437,784 | I_kwDOJ0Z1Ps6APFXY | 2,711 | This does not look right! | {
"login": "Waradu",
"id": 89935135,
"node_id": "MDQ6VXNlcjg5OTM1MTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/89935135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Waradu",
"html_url": "https://github.com/Waradu",
"followers_url": "https://api.github.com/users/Waradu/fo... | [] | closed | false | null | [] | null | 1 | 2024-02-23T16:40:00 | 2024-02-23T16:42:06 | 2024-02-23T16:42:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
Not much more to say | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2711/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4250 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4250/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4250/comments | https://api.github.com/repos/ollama/ollama/issues/4250/events | https://github.com/ollama/ollama/issues/4250 | 2,284,634,939 | I_kwDOJ0Z1Ps6ILMM7 | 4,250 | Get list models is broken | {
"login": "sr-tream",
"id": 12231048,
"node_id": "MDQ6VXNlcjEyMjMxMDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12231048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sr-tream",
"html_url": "https://github.com/sr-tream",
"followers_url": "https://api.github.com/users/sr-... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-08T04:07:17 | 2024-05-09T07:17:18 | 2024-05-08T18:09:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The `ollama list` command and `/api/tags` request do not return models list.
First, ollama tries to find models in lowercase:
```
Error: open /var/lib/ollama/.ollama/models/manifests/registry.ollama.ai/library/dolphin-mistral/7b-v2.6-dpo-laser-q6_k: no such file or directory
```
Real... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4250/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6107 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6107/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6107/comments | https://api.github.com/repos/ollama/ollama/issues/6107/events | https://github.com/ollama/ollama/pull/6107 | 2,441,026,025 | PR_kwDOJ0Z1Ps53C3J- | 6,107 | llama: Fix parallel requests | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-31T22:05:57 | 2024-07-31T23:36:51 | 2024-07-31T23:36:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6107",
"html_url": "https://github.com/ollama/ollama/pull/6107",
"diff_url": "https://github.com/ollama/ollama/pull/6107.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6107.patch",
"merged_at": "2024-07-31T23:36:49"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6107/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7951 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7951/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7951/comments | https://api.github.com/repos/ollama/ollama/issues/7951/events | https://github.com/ollama/ollama/issues/7951 | 2,720,895,791 | I_kwDOJ0Z1Ps6iLZMv | 7,951 | Please can you add logits extraction functionalities ? | {
"login": "Elimane0800",
"id": 125673323,
"node_id": "U_kgDOB32faw",
"avatar_url": "https://avatars.githubusercontent.com/u/125673323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elimane0800",
"html_url": "https://github.com/Elimane0800",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 3 | 2024-12-05T16:36:40 | 2024-12-29T20:37:46 | 2024-12-29T20:37:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any way to extract the logits of Ollama models, please? It would be very helpful for those who work in AI research. If there's already a function like this in Ollama, please expose it to us so that we can continue using Ollama. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7951/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7060 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7060/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7060/comments | https://api.github.com/repos/ollama/ollama/issues/7060/events | https://github.com/ollama/ollama/issues/7060 | 2,558,735,268 | I_kwDOJ0Z1Ps6YgzOk | 7,060 | Mistral-large LLM requires 56GB of RAM. Please note this. | {
"login": "bulrush15",
"id": 7031486,
"node_id": "MDQ6VXNlcjcwMzE0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7031486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bulrush15",
"html_url": "https://github.com/bulrush15",
"followers_url": "https://api.github.com/users/bu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": ... | closed | false | null | [] | null | 4 | 2024-10-01T09:50:15 | 2025-01-06T07:41:39 | 2025-01-06T07:41:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please make a note of this on the readme document here and on the library page at https://ollama.com/library.
Ollama gave me this error message when I tried to run mistral-large. It's huge.
| {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7060/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/712 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/712/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/712/comments | https://api.github.com/repos/ollama/ollama/issues/712/events | https://github.com/ollama/ollama/issues/712 | 1,928,915,216 | I_kwDOJ0Z1Ps5y-OkQ | 712 | Where is the model file path on MacOS | {
"login": "RoversX",
"id": 85817538,
"node_id": "MDQ6VXNlcjg1ODE3NTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/85817538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RoversX",
"html_url": "https://github.com/RoversX",
"followers_url": "https://api.github.com/users/Rovers... | [] | closed | false | null | [] | null | 13 | 2023-10-05T19:22:16 | 2024-07-11T07:15:35 | 2023-12-04T19:37:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I would like to know where the model path is on macOS, and how I can fully uninstall Ollama, because I installed it in the wrong place.
Thanks | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/712/timeline | null | completed | false |
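For reference, the default storage locations per the Ollama FAQ (assuming a stock install; `OLLAMA_MODELS` overrides them) can be computed like this:

```python
import os

# Default Ollama model storage locations (per the Ollama FAQ):
#   macOS:                  ~/.ollama/models
#   Linux (service install): /usr/share/ollama/.ollama/models
macos_models = os.path.expanduser("~/.ollama/models")
print(macos_models)
```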
https://api.github.com/repos/ollama/ollama/issues/2462 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2462/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2462/comments | https://api.github.com/repos/ollama/ollama/issues/2462/events | https://github.com/ollama/ollama/issues/2462 | 2,130,170,390 | I_kwDOJ0Z1Ps5-99IW | 2,462 | moondream1 model support | {
"login": "questsin",
"id": 1900759,
"node_id": "MDQ6VXNlcjE5MDA3NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1900759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questsin",
"html_url": "https://github.com/questsin",
"followers_url": "https://api.github.com/users/quest... | [] | closed | false | null | [] | null | 2 | 2024-02-12T13:43:12 | 2024-03-08T22:22:34 | 2024-02-13T17:27:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | how to port the tiny vision model at https://huggingface.co/vikhyatk/moondream1
with tensor type FP16, using SigLIP, Phi-1.5, and the LLaVA training dataset, to Ollama for local Ubuntu execution?
moondream uses the following Python 3 libraries:
- accelerate==0.25.0
- huggingface-hub==0.20.1
- Pillow==10.1.0
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2462/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2462/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1762 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1762/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1762/comments | https://api.github.com/repos/ollama/ollama/issues/1762/events | https://github.com/ollama/ollama/issues/1762 | 2,062,693,163 | I_kwDOJ0Z1Ps568jMr | 1,762 | I have a problem I get many # | {
"login": "necro304",
"id": 31376673,
"node_id": "MDQ6VXNlcjMxMzc2Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/31376673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/necro304",
"html_url": "https://github.com/necro304",
"followers_url": "https://api.github.com/users/nec... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-01-02T16:55:27 | 2024-03-11T21:18:36 | 2024-03-11T21:18:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1762/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/889 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/889/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/889/comments | https://api.github.com/repos/ollama/ollama/issues/889/events | https://github.com/ollama/ollama/issues/889 | 1,959,471,941 | I_kwDOJ0Z1Ps50yytF | 889 | 404 page not found when pushing a model with UPPERcase letters | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 1 | 2023-10-24T15:01:37 | 2023-10-24T20:52:30 | 2023-10-24T20:52:29 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | if you create a model, say mattw/HornyEchidna-13b-v0.1 that has uppercase letters in it, then you cannot push it to the registry. The error is:
```
ollama push mattw/HornyEchidna-13b-v0.1
retrieving manifest
pushing c9244955e935... 0% | ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/889/timeline | null | completed | false |
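A sketch of the registry's apparent naming constraint, inferred from the 404 reported for mixed-case names (the exact allowed character set is an assumption):

```python
import re

def valid_push_name(name: str) -> bool:
    # Assumption based on the reported 404: namespace and model name must be
    # lowercase alphanumerics plus '.', '-', '_'.
    return re.fullmatch(
        r"[a-z0-9][a-z0-9._-]*/[a-z0-9][a-z0-9._-]*", name) is not None

print(valid_push_name("mattw/HornyEchidna-13b-v0.1"))   # False
print(valid_push_name("mattw/hornyechidna-13b-v0.1"))   # True
```

A practical workaround would be renaming with `ollama cp` to an all-lowercase name before pushing.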
https://api.github.com/repos/ollama/ollama/issues/3555 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3555/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3555/comments | https://api.github.com/repos/ollama/ollama/issues/3555/events | https://github.com/ollama/ollama/pull/3555 | 2,233,113,378 | PR_kwDOJ0Z1Ps5sHQfU | 3,555 | Correct directory reference in macapp/README | {
"login": "alexmavr",
"id": 680441,
"node_id": "MDQ6VXNlcjY4MDQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexmavr",
"html_url": "https://github.com/alexmavr",
"followers_url": "https://api.github.com/users/alexmav... | [] | closed | false | null | [] | null | 0 | 2024-04-09T10:23:12 | 2024-10-01T22:51:30 | 2024-04-09T13:48:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3555",
"html_url": "https://github.com/ollama/ollama/pull/3555",
"diff_url": "https://github.com/ollama/ollama/pull/3555.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3555.patch",
"merged_at": "2024-04-09T13:48:46"
} | Minor README change that was likely omitted from https://github.com/ollama/ollama/commit/9da9e8fb7254df1148f9619bec781e52dc954678 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3555/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/181 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/181/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/181/comments | https://api.github.com/repos/ollama/ollama/issues/181/events | https://github.com/ollama/ollama/issues/181 | 1,816,981,127 | I_kwDOJ0Z1Ps5sTO6H | 181 | Can we train custom models using pdfs? | {
"login": "akasranjan005",
"id": 14335683,
"node_id": "MDQ6VXNlcjE0MzM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/14335683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akasranjan005",
"html_url": "https://github.com/akasranjan005",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 6 | 2023-07-23T04:17:40 | 2023-11-14T14:36:28 | 2023-11-14T14:36:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Instead of just the system prompts, can we also pass a custom knowledge base, mostly in PDF or other formats? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/181/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/181/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4284 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4284/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4284/comments | https://api.github.com/repos/ollama/ollama/issues/4284/events | https://github.com/ollama/ollama/issues/4284 | 2,287,693,064 | I_kwDOJ0Z1Ps6IW20I | 4,284 | Multiple GPU allocate compute buffer error | {
"login": "cashlo",
"id": 767464,
"node_id": "MDQ6VXNlcjc2NzQ2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/767464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cashlo",
"html_url": "https://github.com/cashlo",
"followers_url": "https://api.github.com/users/cashlo/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-09T13:23:36 | 2024-05-09T22:25:29 | 2024-05-09T22:25:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have a 3090 Ti and a 1070. When I try to run llama3:70b-instruct-q3_K_S,
I get this error:
```
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090 Ti, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
llm_load_ten... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4284/timeline | null | completed | false |
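A mitigation often suggested for mixed-generation GPU pairs is to hide the weaker card from the scheduler so all buffers land on the larger one. This is an assumption as a workaround, not a fix; device index 0 here is taken from the log's ordering (3090 Ti first).

```shell
# Restrict Ollama (and the CUDA runtime) to the 3090 Ti only,
# then restart the server so the allocation happens on one card.
export CUDA_VISIBLE_DEVICES=0
ollama serve
```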
https://api.github.com/repos/ollama/ollama/issues/6393 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6393/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6393/comments | https://api.github.com/repos/ollama/ollama/issues/6393/events | https://github.com/ollama/ollama/pull/6393 | 2,470,791,528 | PR_kwDOJ0Z1Ps54m1pV | 6,393 | Paligemma Support | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 4 | 2024-08-16T18:53:21 | 2025-01-04T08:04:37 | 2024-11-21T18:40:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6393",
"html_url": "https://github.com/ollama/ollama/pull/6393",
"diff_url": "https://github.com/ollama/ollama/pull/6393.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6393.patch",
"merged_at": null
} | This PR is for implementing [Paligemma](https://huggingface.co/collections/google/paligemma-release-6643a9ffbf57de2ae0448dda) support within Ollama using GGML. Paligemma is a one-shot image-text to text model from Google.
The main features of this PR are the pre-processing of image-text prompts to Paligemma, incorpo... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6393/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/445 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/445/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/445/comments | https://api.github.com/repos/ollama/ollama/issues/445/events | https://github.com/ollama/ollama/issues/445 | 1,874,743,075 | I_kwDOJ0Z1Ps5vvk8j | 445 | Option for max number of tokens to generate | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-08-31T03:56:31 | 2023-08-31T14:01:20 | 2023-08-31T14:01:20 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/445/timeline | null | completed | false |
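The option this request corresponds to is `num_predict` in the request options of the generate API (mirroring llama.cpp's `n_predict`). A minimal request body, with the model name purely illustrative:

```python
import json

# /api/generate request body capping generation at 128 tokens;
# num_predict of -1 means generate without a token limit.
payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "options": {"num_predict": 128},
}
print(json.dumps(payload))
```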
https://api.github.com/repos/ollama/ollama/issues/4480 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4480/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4480/comments | https://api.github.com/repos/ollama/ollama/issues/4480/events | https://github.com/ollama/ollama/issues/4480 | 2,301,379,876 | I_kwDOJ0Z1Ps6JLEUk | 4,480 | Ollama tries to re-create existing models path | {
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-16T21:28:23 | 2024-11-06T14:47:56 | 2024-11-06T14:47:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When starting the Ollama service, I have an environment variable set: OLLAMA_MODELS=/mnt/storage/fast/ollama_models.
Ollama fails to start: it attempts to mkdir /mnt/storage/fast, which fails (obviously), and crashes.
I assume there to be some kind of permission issue, as the drive is an NTFS drive (due to...
"login": "LumiWasTaken",
"id": 49376128,
"node_id": "MDQ6VXNlcjQ5Mzc2MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/49376128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LumiWasTaken",
"html_url": "https://github.com/LumiWasTaken",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4480/timeline | null | completed | false |
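The failure mode described suggests an unconditional mkdir of an already-existing parent. A sketch of the expected behavior, tolerating existing path components and being idempotent (the helper name is hypothetical):

```python
import os
import tempfile

def ensure_models_dir(path: str) -> None:
    # Create the models directory and any missing parents, but do not
    # fail on components that already exist.
    os.makedirs(path, exist_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "storage", "fast", "ollama_models")
    ensure_models_dir(target)
    ensure_models_dir(target)  # second call must not raise
    print(os.path.isdir(target))
```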
https://api.github.com/repos/ollama/ollama/issues/5400 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5400/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5400/comments | https://api.github.com/repos/ollama/ollama/issues/5400/events | https://github.com/ollama/ollama/issues/5400 | 2,383,032,808 | I_kwDOJ0Z1Ps6OCjHo | 5,400 | Gemma2 work incorrect in parallel request | {
"login": "dudukk1312",
"id": 33556384,
"node_id": "MDQ6VXNlcjMzNTU2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33556384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dudukk1312",
"html_url": "https://github.com/dudukk1312",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-07-01T07:10:29 | 2024-08-09T23:24:54 | 2024-08-09T23:24:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm testing how many concurrent requests my system can handle.
For Gemma2, when executing a single request, the response is perfect.
But for multiple requests, the responses look like nonsense.
... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5400/timeline | null | completed | false |
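Until the parallel-decoding fix lands, a plausible stopgap (assumption, not an official fix) is to disable concurrent decoding via the server's parallelism setting:

```shell
# Force single-request decoding so batched requests cannot
# corrupt each other's output.
export OLLAMA_NUM_PARALLEL=1
ollama serve
```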
https://api.github.com/repos/ollama/ollama/issues/8422 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8422/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8422/comments | https://api.github.com/repos/ollama/ollama/issues/8422/events | https://github.com/ollama/ollama/issues/8422 | 2,787,231,675 | I_kwDOJ0Z1Ps6mIce7 | 8,422 | Support for llamaindex/vdr-2b-multi-v1: Multilingual Visual Document Retrieval Model | {
"login": "JPC612",
"id": 177754485,
"node_id": "U_kgDOCphRdQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177754485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JPC612",
"html_url": "https://github.com/JPC612",
"followers_url": "https://api.github.com/users/JPC612/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2025-01-14T14:07:34 | 2025-01-14T14:07:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
vdr-2b-multi-v1 is a cutting-edge multilingual embedding model designed for visual document retrieval across various languages and domains. The model encodes document page screenshots into dense single-vector representations, allowing efficient search and querying of visually rich multilingual documents without OCR or... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8422/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1553 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1553/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1553/comments | https://api.github.com/repos/ollama/ollama/issues/1553/events | https://github.com/ollama/ollama/issues/1553 | 2,044,324,202 | I_kwDOJ0Z1Ps552elq | 1,553 | customise number of experts in mixtral | {
"login": "scienlabs",
"id": 129055509,
"node_id": "U_kgDOB7E7FQ",
"avatar_url": "https://avatars.githubusercontent.com/u/129055509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scienlabs",
"html_url": "https://github.com/scienlabs",
"followers_url": "https://api.github.com/users/scienl... | [] | open | false | null | [] | null | 4 | 2023-12-15T20:21:19 | 2024-08-02T03:53:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could someone provide guidance or documentation on how to adjust the number of experts in mixtral? I'm particularly interested in understanding if there's a way to dynamically adjust this number based on the requirements of different tasks or scenarios. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1553/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8672 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8672/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8672/comments | https://api.github.com/repos/ollama/ollama/issues/8672/events | https://github.com/ollama/ollama/pull/8672 | 2,819,339,493 | PR_kwDOJ0Z1Ps6JbBf1 | 8,672 | openai: set num_ctx through extra body | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 0 | 2025-01-29T21:14:09 | 2025-01-29T21:22:12 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8672",
"html_url": "https://github.com/ollama/ollama/pull/8672",
"diff_url": "https://github.com/ollama/ollama/pull/8672.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8672.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8672/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1953/comments | https://api.github.com/repos/ollama/ollama/issues/1953/events | https://github.com/ollama/ollama/issues/1953 | 2,078,997,078 | I_kwDOJ0Z1Ps576vpW | 1,953 | config for the server to change the location of the models | {
"login": "aemonge",
"id": 1322348,
"node_id": "MDQ6VXNlcjEzMjIzNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aemonge",
"html_url": "https://github.com/aemonge",
"followers_url": "https://api.github.com/users/aemonge/... | [] | closed | false | null | [] | null | 7 | 2024-01-12T14:41:12 | 2024-02-01T14:59:47 | 2024-02-01T14:59:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can we have a /etc/ollama.json file to change the default path for the models? | {
"login": "aemonge",
"id": 1322348,
"node_id": "MDQ6VXNlcjEzMjIzNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1322348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aemonge",
"html_url": "https://github.com/aemonge",
"followers_url": "https://api.github.com/users/aemonge/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1953/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1548 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1548/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1548/comments | https://api.github.com/repos/ollama/ollama/issues/1548/events | https://github.com/ollama/ollama/issues/1548 | 2,044,105,727 | I_kwDOJ0Z1Ps551pP_ | 1,548 | When is the Windows Version of Ollama Coming out? | {
"login": "Arnav3241",
"id": 105989942,
"node_id": "U_kgDOBlFHNg",
"avatar_url": "https://avatars.githubusercontent.com/u/105989942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arnav3241",
"html_url": "https://github.com/Arnav3241",
"followers_url": "https://api.github.com/users/Arnav3... | [] | closed | false | null | [] | null | 5 | 2023-12-15T17:32:58 | 2024-03-02T14:27:51 | 2023-12-19T19:30:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey there,
When is the Windows Version of Ollama Coming out?
Several hundred people, myself included, have been waiting for it eagerly since the beginning of this project. I hope to see that update soon, as most people use Windows. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1548/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5835 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5835/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5835/comments | https://api.github.com/repos/ollama/ollama/issues/5835/events | https://github.com/ollama/ollama/issues/5835 | 2,421,642,075 | I_kwDOJ0Z1Ps6QV1Nb | 5,835 | orian-ollama-webui | {
"login": "werruww",
"id": 157249411,
"node_id": "U_kgDOCV9vgw",
"avatar_url": "https://avatars.githubusercontent.com/u/157249411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/werruww",
"html_url": "https://github.com/werruww",
"followers_url": "https://api.github.com/users/werruww/foll... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-07-21T23:11:13 | 2024-09-04T04:08:32 | 2024-09-04T04:08:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Failed to post request http://Localhost:11434
edge
win10
The extension works on http://localhost:11434 but gives an error | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5835/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4794 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4794/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4794/comments | https://api.github.com/repos/ollama/ollama/issues/4794/events | https://github.com/ollama/ollama/issues/4794 | 2,330,482,605 | I_kwDOJ0Z1Ps6K6Fet | 4,794 | Error: llama runner process has terminated: exit status 0xc000001d | {
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-06-03T08:17:11 | 2024-06-03T08:17:29 | 2024-06-03T08:17:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
run llama3:latest
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4794/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/8269 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8269/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8269/comments | https://api.github.com/repos/ollama/ollama/issues/8269/events | https://github.com/ollama/ollama/issues/8269 | 2,763,208,278 | I_kwDOJ0Z1Ps6kszZW | 8,269 | Community Contribution: Open-Source Chinese Tutorial for Ollama | {
"login": "AXYZdong",
"id": 45477220,
"node_id": "MDQ6VXNlcjQ1NDc3MjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/45477220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AXYZdong",
"html_url": "https://github.com/AXYZdong",
"followers_url": "https://api.github.com/users/AXY... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-30T13:24:23 | 2024-12-30T13:35:09 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, we'd like to contribute to the Ollama community by announcing the release of our open-source Chinese tutorial.
This tutorial aims to be comprehensive and easy to understand, covering:
- Ollama Introduction
- Ollama Installation and Configuration
- Custom Model Import
- Ollama REST API
- Using Ollama wi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8269/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6424 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6424/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6424/comments | https://api.github.com/repos/ollama/ollama/issues/6424/events | https://github.com/ollama/ollama/pull/6424 | 2,474,013,357 | PR_kwDOJ0Z1Ps54xXBi | 6,424 | Fix overlapping artifact name on CI | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-19T19:08:23 | 2024-08-19T19:12:02 | 2024-08-19T19:11:58 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6424",
"html_url": "https://github.com/ollama/ollama/pull/6424",
"diff_url": "https://github.com/ollama/ollama/pull/6424.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6424.patch",
"merged_at": "2024-08-19T19:11:58"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6424/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4722 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4722/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4722/comments | https://api.github.com/repos/ollama/ollama/issues/4722/events | https://github.com/ollama/ollama/issues/4722 | 2,325,607,927 | I_kwDOJ0Z1Ps6KnfX3 | 4,722 | Slower performance on Arm64 with Phi3 and Lexi-Llama on 1.39 | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | open | false | null | [] | null | 1 | 2024-05-30T12:51:26 | 2024-05-31T18:18:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am getting 5x slower speeds with the new ollama-linux-arm64 1.39 binary compared to the 1.38 release and earlier.
Not sure what's going on. I am using an Orange Pi 5 Plus.
Like I said - 1.38 works flawlessly, even on my Raspberry Pi 5.
### OS
Linux
### GPU
Other
### CPU
O... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4722/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1819 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1819/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1819/comments | https://api.github.com/repos/ollama/ollama/issues/1819/events | https://github.com/ollama/ollama/pull/1819 | 2,068,346,258 | PR_kwDOJ0Z1Ps5jXbwf | 1,819 | Support multiple LLM libs; ROCm v5 and v6; Rosetta, AVX, and AVX2 compatible CPU builds | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-01-06T03:52:13 | 2024-01-11T22:00:51 | 2024-01-11T22:00:48 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1819",
"html_url": "https://github.com/ollama/ollama/pull/1819",
"diff_url": "https://github.com/ollama/ollama/pull/1819.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1819.patch",
"merged_at": "2024-01-11T22:00:48"
} | In some cases we may want multiple variants for a given GPU type or CPU. This adds logic to have an optional Variant which we can use to select an optimal library, but also allows us to try multiple variants in case some fail to load.
This change includes updates to the Dockerfile.build to compile 2 variants for ROC... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1819/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2934 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2934/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2934/comments | https://api.github.com/repos/ollama/ollama/issues/2934/events | https://github.com/ollama/ollama/issues/2934 | 2,169,008,855 | I_kwDOJ0Z1Ps6BSHLX | 2,934 | Unable to init GPU: unknown error | {
"login": "PLNech",
"id": 1821404,
"node_id": "MDQ6VXNlcjE4MjE0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1821404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PLNech",
"html_url": "https://github.com/PLNech",
"followers_url": "https://api.github.com/users/PLNech/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-03-05T11:54:16 | 2024-08-08T02:34:33 | 2024-03-24T18:15:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there! My ollama-based project (thanks for the amazing framework <3) suddenly stopped using the GPU as a backend.
It used to work well and I could confirm that the `GPU layers` offloading was happening from logs a few days ago.
Today the specific error I see in journals are: `Failed to load dynamic library /tmp/... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2934/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2934/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4317 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4317/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4317/comments | https://api.github.com/repos/ollama/ollama/issues/4317/events | https://github.com/ollama/ollama/pull/4317 | 2,290,049,993 | PR_kwDOJ0Z1Ps5vHajY | 4,317 | Don't clamp ctx size in `PredictServerFit` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-10T16:32:57 | 2024-05-10T17:17:12 | 2024-05-10T17:17:12 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4317",
"html_url": "https://github.com/ollama/ollama/pull/4317",
"diff_url": "https://github.com/ollama/ollama/pull/4317.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4317.patch",
"merged_at": "2024-05-10T17:17:12"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4317/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4030 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4030/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4030/comments | https://api.github.com/repos/ollama/ollama/issues/4030/events | https://github.com/ollama/ollama/issues/4030 | 2,269,360,438 | I_kwDOJ0Z1Ps6HQ7E2 | 4,030 | Integrate json_repair package to overcome the slow json feature | {
"login": "ArslanKAS",
"id": 43797457,
"node_id": "MDQ6VXNlcjQzNzk3NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43797457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArslanKAS",
"html_url": "https://github.com/ArslanKAS",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-04-29T15:36:54 | 2024-05-01T22:37:35 | 2024-05-01T22:37:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there,
Ollama is a great tool for getting started with LLMs, but the issue I'm facing is JSON processing. The JSON format enforcer argument in the Ollama package just takes too long: it turns an 8-second response from LLaMa3 into a 90-second response. I don't know why it takes so long. I've come across a great pack... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4030/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/668 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/668/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/668/comments | https://api.github.com/repos/ollama/ollama/issues/668/events | https://github.com/ollama/ollama/issues/668 | 1,920,910,935 | I_kwDOJ0Z1Ps5yfsZX | 668 | Client only displays `Unexpected EOF` when error happens during `/generate` | {
"login": "ratnadeep007",
"id": 17860233,
"node_id": "MDQ6VXNlcjE3ODYwMjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/17860233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratnadeep007",
"html_url": "https://github.com/ratnadeep007",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 12 | 2023-10-01T18:42:59 | 2024-03-05T19:16:47 | 2023-10-27T20:55:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Issue:
codellama 13b runs while codellama 7b fails with the following error:
`Error: error reading llm response: unexpected EOF`
I can run codellama 13b with the same prompt.
I have 16GB RAM | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/668/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/668/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3489 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3489/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3489/comments | https://api.github.com/repos/ollama/ollama/issues/3489/events | https://github.com/ollama/ollama/issues/3489 | 2,225,537,142 | I_kwDOJ0Z1Ps6EpwB2 | 3,489 | Ollama goes into uninterruptible sleep mode and cannot be shutdown | {
"login": "sumitkm",
"id": 578882,
"node_id": "MDQ6VXNlcjU3ODg4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/578882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumitkm",
"html_url": "https://github.com/sumitkm",
"followers_url": "https://api.github.com/users/sumitkm/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 8 | 2024-04-04T13:32:39 | 2024-05-21T18:27:04 | 2024-05-21T18:27:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is probably the same as #2029, but I'll try to provide more details.
As mentioned in the above issue Ollama works fine from a clean boot. If I put computer to sleep and then wake it up, Ollama gets into a state where it can't be `stopped` via `systemctl` or killed via `kill <pid>`. Here is... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3489/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/3489/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3773 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3773/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3773/comments | https://api.github.com/repos/ollama/ollama/issues/3773/events | https://github.com/ollama/ollama/pull/3773 | 2,254,499,244 | PR_kwDOJ0Z1Ps5tPwKW | 3,773 | Add PHP as an example | {
"login": "alnutile",
"id": 365385,
"node_id": "MDQ6VXNlcjM2NTM4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/365385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alnutile",
"html_url": "https://github.com/alnutile",
"followers_url": "https://api.github.com/users/alnutil... | [] | closed | false | null | [] | null | 1 | 2024-04-20T10:53:16 | 2024-11-21T17:59:34 | 2024-11-21T17:59:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3773",
"html_url": "https://github.com/ollama/ollama/pull/3773",
"diff_url": "https://github.com/ollama/ollama/pull/3773.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3773.patch",
"merged_at": null
} | I could turn this into a markdown file if that is better | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3773/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3168 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3168/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3168/comments | https://api.github.com/repos/ollama/ollama/issues/3168/events | https://github.com/ollama/ollama/issues/3168 | 2,188,411,676 | I_kwDOJ0Z1Ps6CcIMc | 3,168 | WIndows security: ALERT: Trojan:Script/Wacatac.B!ml detected | {
"login": "jabbor",
"id": 50017850,
"node_id": "MDQ6VXNlcjUwMDE3ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/50017850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jabbor",
"html_url": "https://github.com/jabbor",
"followers_url": "https://api.github.com/users/jabbor/fo... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 11 | 2024-03-15T12:30:41 | 2024-08-14T03:25:17 | 2024-03-20T15:53:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This program is dangerous and executes commands from an attacker.
File C:\Users\xxxxxxxxx\AppData\Local\Programs\Ollama\ollama app.exe | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3168/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8555 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8555/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8555/comments | https://api.github.com/repos/ollama/ollama/issues/8555/events | https://github.com/ollama/ollama/issues/8555 | 2,808,363,638 | I_kwDOJ0Z1Ps6nZDp2 | 8,555 | Request for Custom Installation Directory in Ollama on Windows to Avoid System Drive Space Issues | {
"login": "lanyur",
"id": 721637,
"node_id": "MDQ6VXNlcjcyMTYzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/721637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanyur",
"html_url": "https://github.com/lanyur",
"followers_url": "https://api.github.com/users/lanyur/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2025-01-24T02:42:58 | 2025-01-24T09:25:18 | 2025-01-24T09:25:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I hope that the installation of Ollama on Windows can include an option to specify the installation directory. The system drive on Windows often runs out of space due to the Windows update mechanism. Moreover, if there is no configuration to specify the directory during installation, the downloaded large model files wi... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8555/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2590 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2590/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2590/comments | https://api.github.com/repos/ollama/ollama/issues/2590/events | https://github.com/ollama/ollama/issues/2590 | 2,142,106,142 | I_kwDOJ0Z1Ps5_rfIe | 2,590 | Read-only file system | {
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/ia... | [] | closed | false | null | [] | null | 0 | 2024-02-19T11:04:07 | 2024-02-20T06:22:19 | 2024-02-20T06:22:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | An error occurred when running curl -fsSL https://ollama.com/install.sh | sh

| {
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/ia... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2590/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7116 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7116/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7116/comments | https://api.github.com/repos/ollama/ollama/issues/7116/events | https://github.com/ollama/ollama/issues/7116 | 2,570,511,676 | I_kwDOJ0Z1Ps6ZNuU8 | 7,116 | Make target audience clearer on website/in repository README | {
"login": "victorb",
"id": 459764,
"node_id": "MDQ6VXNlcjQ1OTc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/459764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victorb",
"html_url": "https://github.com/victorb",
"followers_url": "https://api.github.com/users/victorb/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 8 | 2024-10-07T13:51:59 | 2024-10-17T20:34:28 | 2024-10-17T20:34:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, it seems like Ollama is mostly geared towards developers and adjacent audiences, yet that isn't 100% clear on the website or in the repository README.
This results in many end users who might be better served by other tooling giving Ollama a try regardless, even though it might be overly technical compar...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7116/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7116/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3953/comments | https://api.github.com/repos/ollama/ollama/issues/3953/events | https://github.com/ollama/ollama/issues/3953 | 2,266,389,526 | I_kwDOJ0Z1Ps6HFlwW | 3,953 | Support VLLM as a backend | {
"login": "kannon92",
"id": 3780425,
"node_id": "MDQ6VXNlcjM3ODA0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3780425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kannon92",
"html_url": "https://github.com/kannon92",
"followers_url": "https://api.github.com/users/kanno... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2024-04-26T19:29:55 | 2024-12-23T14:37:43 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I realize that this is a big ask, but I am learning more and more about inferencing and I've heard that vLLM tends to have better performance for multi-GPU workloads.
Ollama has a great UX and I love the tight integration with llama.cpp. But it would be nice to start exploring how one could use Ollama models with...
"url": "https://api.github.com/repos/ollama/ollama/issues/3953/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3953/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1581 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1581/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1581/comments | https://api.github.com/repos/ollama/ollama/issues/1581/events | https://github.com/ollama/ollama/issues/1581 | 2,046,621,921 | I_kwDOJ0Z1Ps55_Pjh | 1,581 | I'm using ollama to run via API on Debian (WSL). When I give the command "ollama serve" the terminal is busy | {
"login": "EliasPereirah",
"id": 16616409,
"node_id": "MDQ6VXNlcjE2NjE2NDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16616409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EliasPereirah",
"html_url": "https://github.com/EliasPereirah",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2023-12-18T13:02:43 | 2023-12-19T18:58:56 | 2023-12-19T18:58:56 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1581/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2635 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2635/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2635/comments | https://api.github.com/repos/ollama/ollama/issues/2635/events | https://github.com/ollama/ollama/issues/2635 | 2,146,908,129 | I_kwDOJ0Z1Ps5_9zfh | 2,635 | questions for mistral | {
"login": "VadimBoev",
"id": 33985574,
"node_id": "MDQ6VXNlcjMzOTg1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33985574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VadimBoev",
"html_url": "https://github.com/VadimBoev",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 3 | 2024-02-21T14:34:16 | 2024-02-21T20:40:44 | 2024-02-21T20:40:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi.
How do I get it to respond in only one language?
And how can I remove the censorship so that it can express itself obscenely?
"login": "VadimBoev",
"id": 33985574,
"node_id": "MDQ6VXNlcjMzOTg1NTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33985574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VadimBoev",
"html_url": "https://github.com/VadimBoev",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2635/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2635/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2311/comments | https://api.github.com/repos/ollama/ollama/issues/2311/events | https://github.com/ollama/ollama/pull/2311 | 2,112,495,826 | PR_kwDOJ0Z1Ps5lszEY | 2,311 | Gosec | {
"login": "alpe",
"id": 28003,
"node_id": "MDQ6VXNlcjI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/28003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alpe",
"html_url": "https://github.com/alpe",
"followers_url": "https://api.github.com/users/alpe/followers",
"follo... | [] | closed | false | null | [] | null | 1 | 2024-02-01T13:16:48 | 2024-11-21T08:15:27 | 2024-11-21T08:15:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2311",
"html_url": "https://github.com/ollama/ollama/pull/2311",
"diff_url": "https://github.com/ollama/ollama/pull/2311.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2311.patch",
"merged_at": null
} | Start fixing some [gosec](https://github.com/securego/gosec) reports
* more restrictive file permission for ~/.ollama files
* add `--request-header-timeout` param to server to prevent [slowloris](https://www.netscout.com/what-is-ddos/slowloris-attacks) DDos
* annotate false positive in `llm/payload_common.go` and ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2311/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2311/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/85 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/85/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/85/comments | https://api.github.com/repos/ollama/ollama/issues/85/events | https://github.com/ollama/ollama/issues/85 | 1,806,769,240 | I_kwDOJ0Z1Ps5rsRxY | 85 | Too many tokens error not provided to users | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2023-07-17T00:15:57 | 2023-07-19T02:41:56 | 2023-07-19T02:41:56 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running a model and passing a context larger than the context window, the `ollama` server prints an error: `llama_tokenize: too many tokens` but not the client. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/85/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/85/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3521 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3521/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3521/comments | https://api.github.com/repos/ollama/ollama/issues/3521/events | https://github.com/ollama/ollama/issues/3521 | 2,229,593,240 | I_kwDOJ0Z1Ps6E5OSY | 3,521 | Run ollama list/pull got an error | {
"login": "imagebody",
"id": 11532038,
"node_id": "MDQ6VXNlcjExNTMyMDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/11532038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imagebody",
"html_url": "https://github.com/imagebody",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 9 | 2024-04-07T06:40:45 | 2024-05-18T04:13:35 | 2024-05-18T04:13:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I run Ollama in a Docker container, like this:
ollama pull llama2:70b
Then, I get an error:
Error: something went wrong, please see the ollama server logs for details
### What did you expect to see?
_No response_
### Steps to reproduce
_No response_
### Are there any recent changes that... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3521/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/754 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/754/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/754/comments | https://api.github.com/repos/ollama/ollama/issues/754/events | https://github.com/ollama/ollama/issues/754 | 1,936,679,010 | I_kwDOJ0Z1Ps5zb2Bi | 754 | Support for Autogen | {
"login": "greg-peters",
"id": 32651304,
"node_id": "MDQ6VXNlcjMyNjUxMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32651304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greg-peters",
"html_url": "https://github.com/greg-peters",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-10-11T02:44:02 | 2023-12-19T05:51:14 | 2023-10-11T13:24:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | #305 Requesting support to use ollama with Autogen | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/754/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3508 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3508/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3508/comments | https://api.github.com/repos/ollama/ollama/issues/3508/events | https://github.com/ollama/ollama/pull/3508 | 2,228,998,613 | PR_kwDOJ0Z1Ps5r5ZPc | 3,508 | no rope parameters | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-06T01:00:32 | 2024-04-06T01:46:07 | 2024-04-06T01:46:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3508",
"html_url": "https://github.com/ollama/ollama/pull/3508",
"diff_url": "https://github.com/ollama/ollama/pull/3508.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3508.patch",
"merged_at": "2024-04-06T01:46:06"
} | rope parameters should be omitted because the model should set it | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3508/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/930 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/930/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/930/comments | https://api.github.com/repos/ollama/ollama/issues/930/events | https://github.com/ollama/ollama/issues/930 | 1,964,773,211 | I_kwDOJ0Z1Ps51HA9b | 930 | FR: Enhance command for pull and run | {
"login": "hemanth",
"id": 18315,
"node_id": "MDQ6VXNlcjE4MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/18315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemanth",
"html_url": "https://github.com/hemanth",
"followers_url": "https://api.github.com/users/hemanth/follower... | [] | closed | false | null | [] | null | 1 | 2023-10-27T05:03:59 | 2023-10-27T05:23:15 | 2023-10-27T05:23:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Instead of:
```sh
ollama pull hemanth/booksummarizer
ollama run hemanth/booksummarizer
```
It would be nice to have
```sh
ollama --pull-run hemanth/booksummarizer
```
or
```sh
ollama pullrun hemanth/booksummarizer
```
or
```sh
ollama pull hemanth/booksummarizer --run
``` | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/930/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6802 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6802/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6802/comments | https://api.github.com/repos/ollama/ollama/issues/6802/events | https://github.com/ollama/ollama/issues/6802 | 2,526,417,861 | I_kwDOJ0Z1Ps6WlhPF | 6,802 | Potential Bug in AMD GPU Detection leading to rocBLAS Initialization Failure | {
"login": "Bihan",
"id": 3397983,
"node_id": "MDQ6VXNlcjMzOTc5ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3397983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bihan",
"html_url": "https://github.com/Bihan",
"followers_url": "https://api.github.com/users/Bihan/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-09-14T13:33:33 | 2024-10-26T21:04:15 | 2024-10-26T21:04:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This issue happens **randomly**—sometimes the AMD GPU detection works, but at other times, it fails, resulting in the following error
`rocBLAS error: Could not initialize Tensile host: No devices found.`
When the issue occurs, it seems that certain GPU nodes in the `/sys/class/kfd/kfd/top... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6802/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1023 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1023/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1023/comments | https://api.github.com/repos/ollama/ollama/issues/1023/events | https://github.com/ollama/ollama/pull/1023 | 1,980,201,430 | PR_kwDOJ0Z1Ps5evs1E | 1,023 | docs: clarify where the models are stored in the faq | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-11-06T22:40:07 | 2023-11-08T01:59:55 | 2023-11-08T01:59:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1023",
"html_url": "https://github.com/ollama/ollama/pull/1023",
"diff_url": "https://github.com/ollama/ollama/pull/1023.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1023.patch",
"merged_at": "2023-11-08T01:59:55"
} | "Where are the models?" is a common enough question in the Discord. This clarifies it a bit further, helping the user understand the structure. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1023/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1023/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6801 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6801/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6801/comments | https://api.github.com/repos/ollama/ollama/issues/6801/events | https://github.com/ollama/ollama/issues/6801 | 2,526,318,686 | I_kwDOJ0Z1Ps6WlJBe | 6,801 | Ollama can't update the binary | {
"login": "suizideFloat",
"id": 4026849,
"node_id": "MDQ6VXNlcjQwMjY4NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4026849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suizideFloat",
"html_url": "https://github.com/suizideFloat",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 8 | 2024-09-14T12:00:58 | 2025-01-15T21:27:17 | 2025-01-15T21:27:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As I understand it, to update Ollama to the current version you run the command ```curl -fsSL https://ollama.com/install.sh | sh ``` as if you were installing for the first time. At least I did that before and it worked.
Now I am running 0.3.8 (`ollama version is 0.3.8`) and want to update to 0.3.10 via the de...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6801/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4751 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4751/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4751/comments | https://api.github.com/repos/ollama/ollama/issues/4751/events | https://github.com/ollama/ollama/pull/4751 | 2,328,018,179 | PR_kwDOJ0Z1Ps5xIwUy | 4,751 | Add golang pprof | {
"login": "llhhbc",
"id": 6076617,
"node_id": "MDQ6VXNlcjYwNzY2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6076617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llhhbc",
"html_url": "https://github.com/llhhbc",
"followers_url": "https://api.github.com/users/llhhbc/foll... | [] | open | false | null | [] | null | 0 | 2024-05-31T14:47:31 | 2024-05-31T15:09:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4751",
"html_url": "https://github.com/ollama/ollama/pull/4751",
"diff_url": "https://github.com/ollama/ollama/pull/4751.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4751.patch",
"merged_at": null
} | Add golang pprof for analyzing the process | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4751/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7928 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7928/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7928/comments | https://api.github.com/repos/ollama/ollama/issues/7928/events | https://github.com/ollama/ollama/issues/7928 | 2,716,809,386 | I_kwDOJ0Z1Ps6h7ziq | 7,928 | ollama api(port:11434)Regarding parallel processing and CPU single core usage | {
"login": "dmmhk",
"id": 74334936,
"node_id": "MDQ6VXNlcjc0MzM0OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/74334936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmmhk",
"html_url": "https://github.com/dmmhk",
"followers_url": "https://api.github.com/users/dmmhk/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": ... | closed | false | null | [] | null | 1 | 2024-12-04T07:42:34 | 2024-12-23T08:03:48 | 2024-12-23T08:03:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Regarding parallel processing and CPU single core usage**
API :http://localhost:11434/api/generate
The waiting time for API requests simultaneously is too long,CPU processing does not use multi-core;
Are there any optimization methods available?
**OS**
Windows 11
**GPU**
Nvidia 3080 16G
**CPU**
AMD R9... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7928/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4623 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4623/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4623/comments | https://api.github.com/repos/ollama/ollama/issues/4623/events | https://github.com/ollama/ollama/issues/4623 | 2,316,273,721 | I_kwDOJ0Z1Ps6KD4g5 | 4,623 | The ollama server is stopped when I submitted jobs parallelly | {
"login": "inoue0426",
"id": 8393063,
"node_id": "MDQ6VXNlcjgzOTMwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8393063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inoue0426",
"html_url": "https://github.com/inoue0426",
"followers_url": "https://api.github.com/users/in... | [
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-05-24T21:16:03 | 2024-08-09T23:22:40 | 2024-08-09T23:22:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
When I submitted the jobs below, the server suddenly showed only POST requests (no HEAD) and then stopped.
Do you know how to deal with this?
## Python interface.
```python
import subprocess
from joblib import Parallel, delayed
from tqdm import tqdm

def run_test(t):
    subprocess.run(['./summary.sh', str(t)])
Parallel(n_jobs=-1)(delayed(run_test)(t) for t in tqd... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4623/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1823 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1823/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1823/comments | https://api.github.com/repos/ollama/ollama/issues/1823/events | https://github.com/ollama/ollama/issues/1823 | 2,068,577,852 | I_kwDOJ0Z1Ps57S_48 | 1,823 | [ENHANCEMENT] Add github action for tests and lint on this repo. | {
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers"... | [] | closed | false | null | [] | null | 1 | 2024-01-06T12:13:40 | 2024-01-06T12:47:54 | 2024-01-06T12:18:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, I saw that the tests and the linter were executed in another repo, it would be interesting to put it in the main repo. https://github.com/jmorganca/ollama/blob/mxyng/build-gpus/.github/workflows/test.yaml | {
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1823/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/488 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/488/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/488/comments | https://api.github.com/repos/ollama/ollama/issues/488/events | https://github.com/ollama/ollama/pull/488 | 1,886,606,495 | PR_kwDOJ0Z1Ps5Z0giw | 488 | add cuda docker image | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-07T21:54:50 | 2023-09-08T14:38:21 | 2023-09-08T14:38:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/488",
"html_url": "https://github.com/ollama/ollama/pull/488",
"diff_url": "https://github.com/ollama/ollama/pull/488.diff",
"patch_url": "https://github.com/ollama/ollama/pull/488.patch",
"merged_at": "2023-09-08T14:38:20"
} | `Dockerfile.cuda` requires [`nvidia-container-toolkit`](https://gitlab.com/nvidia/container-toolkit/container-toolkit) to run successfully:
```
$ docker build -t ollama:cuda -f Dockerfile.cuda .
$ docker run -d --gpus=all -p 11434:11434 -v $HOME/.ollama:/home/ollama/.ollama ollama:cuda
``` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/488/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7906 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7906/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7906/comments | https://api.github.com/repos/ollama/ollama/issues/7906/events | https://github.com/ollama/ollama/pull/7906 | 2,711,123,848 | PR_kwDOJ0Z1Ps6Dtyf4 | 7,906 | Update README.md | {
"login": "dmayboroda",
"id": 2357342,
"node_id": "MDQ6VXNlcjIzNTczNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2357342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmayboroda",
"html_url": "https://github.com/dmayboroda",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 0 | 2024-12-02T09:01:12 | 2024-12-02T09:14:47 | 2024-12-02T09:14:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7906",
"html_url": "https://github.com/ollama/ollama/pull/7906",
"diff_url": "https://github.com/ollama/ollama/pull/7906.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7906.patch",
"merged_at": "2024-12-02T09:14:47"
} | Hey Ollama builders!
This is a small update to the README; I added the Minima project, which uses the Ollama Docker container.
Thank you! | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7906/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7715 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7715/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7715/comments | https://api.github.com/repos/ollama/ollama/issues/7715/events | https://github.com/ollama/ollama/issues/7715 | 2,666,880,721 | I_kwDOJ0Z1Ps6e9V7R | 7,715 | Ollama 0.4 not using VRAM on AMD RX 7900 XTX | {
"login": "galizhur",
"id": 48592261,
"node_id": "MDQ6VXNlcjQ4NTkyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/48592261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galizhur",
"html_url": "https://github.com/galizhur",
"followers_url": "https://api.github.com/users/gal... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | null | [] | null | 4 | 2024-11-18T02:28:34 | 2024-12-04T10:45:22 | 2024-11-18T23:33:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Any model that I load seems to use system RAM instead of VRAM. This doesn't happen on version 0.3 of ollama. Something is clearly wrong, because ollama freezes after a couple of requests and the only way to make it work again is to restart it every minute.
Here is the server log [server.log](... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7715/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8167 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8167/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8167/comments | https://api.github.com/repos/ollama/ollama/issues/8167/events | https://github.com/ollama/ollama/issues/8167 | 2,749,286,233 | I_kwDOJ0Z1Ps6j3sdZ | 8,167 | Error: max retries exceeded for all ollama model pulls (read: connection reset by peer) | {
"login": "saisun229",
"id": 36823386,
"node_id": "MDQ6VXNlcjM2ODIzMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36823386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saisun229",
"html_url": "https://github.com/saisun229",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-12-19T05:18:48 | 2025-01-13T07:26:25 | 2025-01-13T07:26:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama run llama3.1:8b
pulling manifest
pulling da63a910e349... 0% ▕ ▏ 0 B/ 52 GB
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/da/da63a910e34997d... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8167/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1024 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1024/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1024/comments | https://api.github.com/repos/ollama/ollama/issues/1024/events | https://github.com/ollama/ollama/issues/1024 | 1,980,359,679 | I_kwDOJ0Z1Ps52CeP_ | 1,024 | Packaging ollama + cuda for Arch Linux | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [] | closed | false | null | [] | null | 11 | 2023-11-07T01:19:51 | 2023-12-25T21:47:15 | 2023-11-19T13:38:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi! Arch Linux package maintainer for the `ollama` and `ollama-cuda` packages here.
Thanks for creating Ollama, it makes LLMs more fun to deal with!
When compiling `v0.1.8` for Arch Linux, using this PKGBUILD:
```bash
pkgname=ollama-cuda
pkgdesc='Create, run and share large language models (LLMs) with CUDA'
... | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1024/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1024/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/755 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/755/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/755/comments | https://api.github.com/repos/ollama/ollama/issues/755/events | https://github.com/ollama/ollama/issues/755 | 1,936,738,736 | I_kwDOJ0Z1Ps5zcEmw | 755 | Ollama re-attempts to pull model when served on a remote server | {
"login": "zenarcher007",
"id": 62445349,
"node_id": "MDQ6VXNlcjYyNDQ1MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/62445349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zenarcher007",
"html_url": "https://github.com/zenarcher007",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2023-10-11T03:45:36 | 2023-10-11T03:52:49 | 2023-10-11T03:50:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am running the Ollama server on a remote server, streaming the default port "11434" to localhost served via an SSH tunnel. On my local machine, every time the client,`ollama run`, is run, Ollama attempts to pull the model on the server, even if it is already installed, and verify its hash: a process which takes addit... | {
"login": "zenarcher007",
"id": 62445349,
"node_id": "MDQ6VXNlcjYyNDQ1MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/62445349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zenarcher007",
"html_url": "https://github.com/zenarcher007",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/755/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1748 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1748/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1748/comments | https://api.github.com/repos/ollama/ollama/issues/1748/events | https://github.com/ollama/ollama/pull/1748 | 2,060,864,862 | PR_kwDOJ0Z1Ps5i-V9A | 1,748 | [API Client] Feature: Custom HTTP headers for every request | {
"login": "secondtruth",
"id": 416441,
"node_id": "MDQ6VXNlcjQxNjQ0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/416441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/secondtruth",
"html_url": "https://github.com/secondtruth",
"followers_url": "https://api.github.com/user... | [] | open | false | null | [] | null | 0 | 2023-12-30T21:47:00 | 2024-04-21T22:58:12 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1748",
"html_url": "https://github.com/ollama/ollama/pull/1748",
"diff_url": "https://github.com/ollama/ollama/pull/1748.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1748.patch",
"merged_at": null
} | This pull request adds the ability to include a custom header to every request made by the Go API Client. The custom header is set using a new `Header` field in the `Client` struct.
This allows for more flexibility and customization when making requests to the API. My use case is setting an `Authorization` header to a... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1748/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/292 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/292/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/292/comments | https://api.github.com/repos/ollama/ollama/issues/292/events | https://github.com/ollama/ollama/pull/292 | 1,837,896,002 | PR_kwDOJ0Z1Ps5XQatd | 292 | Added a Websocket interface | {
"login": "FultonBrowne",
"id": 50185337,
"node_id": "MDQ6VXNlcjUwMTg1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/50185337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FultonBrowne",
"html_url": "https://github.com/FultonBrowne",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 3 | 2023-08-05T18:48:38 | 2024-02-09T14:36:50 | 2023-08-09T02:02:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/292",
"html_url": "https://github.com/ollama/ollama/pull/292",
"diff_url": "https://github.com/ollama/ollama/pull/292.diff",
"patch_url": "https://github.com/ollama/ollama/pull/292.patch",
"merged_at": null
} | This adds an API endpoint "/api/generatestream" that serves a WebSocket that receives JSON in the same format as the "/api/generate" endpoint.
Why?
Well, for something I'm working on, and I imagine for other things people will do later, a more "stream"-style interface may be useful and may provide a better user experienc... | {
"login": "FultonBrowne",
"id": 50185337,
"node_id": "MDQ6VXNlcjUwMTg1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/50185337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FultonBrowne",
"html_url": "https://github.com/FultonBrowne",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/292/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7485 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7485/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7485/comments | https://api.github.com/repos/ollama/ollama/issues/7485/events | https://github.com/ollama/ollama/issues/7485 | 2,631,633,971 | I_kwDOJ0Z1Ps6c24wz | 7,485 | I hope ollama can provide rerank models and speech recognition models. | {
"login": "ardyli",
"id": 10685140,
"node_id": "MDQ6VXNlcjEwNjg1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10685140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ardyli",
"html_url": "https://github.com/ardyli",
"followers_url": "https://api.github.com/users/ardyli/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 5 | 2024-11-04T01:26:25 | 2024-12-19T13:46:11 | 2024-11-17T14:13:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 希望ollama能提供rerank模型、能提供语音识别模型。
I hope ollama can provide rerank models and speech recognition models. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7485/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7485/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/6157 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6157/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6157/comments | https://api.github.com/repos/ollama/ollama/issues/6157/events | https://github.com/ollama/ollama/issues/6157 | 2,446,898,762 | I_kwDOJ0Z1Ps6R2LZK | 6,157 | always "Error: something went wrong, please see the ollama server logs for details" but no useful info in service log | {
"login": "EachSheep",
"id": 48718428,
"node_id": "MDQ6VXNlcjQ4NzE4NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/48718428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EachSheep",
"html_url": "https://github.com/EachSheep",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-08-04T07:31:56 | 2024-08-06T06:01:00 | 2024-08-06T06:01:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I encountered an insufficient storage space error while downloading llama3.1:70b-instruct-fp16. To resolve this, I backed up the files from `/usr/share/ollama/.ollama/models` to another drive with more space, located at `/users/shared/ollama/.ollama/models`, and configured `/etc/systemd/system/o... | {
"login": "EachSheep",
"id": 48718428,
"node_id": "MDQ6VXNlcjQ4NzE4NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/48718428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EachSheep",
"html_url": "https://github.com/EachSheep",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6157/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5496 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5496/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5496/comments | https://api.github.com/repos/ollama/ollama/issues/5496/events | https://github.com/ollama/ollama/issues/5496 | 2,392,126,141 | I_kwDOJ0Z1Ps6OlPK9 | 5,496 | why GGML_CUDA_FORCE_MMQ is yes and CUDA_USE_TENSOR_CORES is no | {
"login": "rabbitpig",
"id": 2839943,
"node_id": "MDQ6VXNlcjI4Mzk5NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2839943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabbitpig",
"html_url": "https://github.com/rabbitpig",
"followers_url": "https://api.github.com/users/ra... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 2 | 2024-07-05T08:30:32 | 2024-10-24T02:39:37 | 2024-10-24T02:39:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi:
I am a newbie and curious about how these configs are determined.

Can someone tell me how to change these?
Thanks. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5496/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5496/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7638 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7638/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7638/comments | https://api.github.com/repos/ollama/ollama/issues/7638/events | https://github.com/ollama/ollama/issues/7638 | 2,653,722,091 | I_kwDOJ0Z1Ps6eLJXr | 7,638 | I built from source a while back and everything worked ok, the current code is dumping core, bus error. | {
"login": "phalexo",
"id": 4603365,
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phalexo",
"html_url": "https://github.com/phalexo",
"followers_url": "https://api.github.com/users/phalexo/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-11-13T00:18:17 | 2024-12-10T19:54:21 | 2024-12-10T19:54:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I used to build it with
go generate ./...
go build .
Is it different now? Does it automatically detect CUDA at /usr/local/cuda?
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
cloned github latest | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7638/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4233 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4233/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4233/comments | https://api.github.com/repos/ollama/ollama/issues/4233/events | https://github.com/ollama/ollama/issues/4233 | 2,283,910,906 | I_kwDOJ0Z1Ps6IIbb6 | 4,233 | Allow for specifying and checking the directory where models are stored | {
"login": "K0-RR",
"id": 43277609,
"node_id": "MDQ6VXNlcjQzMjc3NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/43277609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/K0-RR",
"html_url": "https://github.com/K0-RR",
"followers_url": "https://api.github.com/users/K0-RR/follow... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-05-07T17:48:54 | 2024-05-07T18:37:55 | 2024-05-07T18:37:55 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do I locate the downloaded files? This should be possible via some `ollama` command.
When pulling, we should be asked where we want the data to be saved.
Both `~/.ollama/models` and `/usr/share/ollama/` **don't exist** on my machine and yet I'm interacting with llama3! | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4233/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2399 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2399/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2399/comments | https://api.github.com/repos/ollama/ollama/issues/2399/events | https://github.com/ollama/ollama/pull/2399 | 2,124,064,224 | PR_kwDOJ0Z1Ps5mUTW9 | 2,399 | Fix hanging issue when sending empty content | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-07T23:17:05 | 2024-02-08T00:30:34 | 2024-02-08T00:30:33 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2399",
"html_url": "https://github.com/ollama/ollama/pull/2399",
"diff_url": "https://github.com/ollama/ollama/pull/2399.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2399.patch",
"merged_at": "2024-02-08T00:30:33"
} | This fixes an issue where the prompt would be templated as an empty string `""`.
Fixes https://github.com/ollama/ollama/issues/2397
```shell
# loads model
% curl http://localhost:11434/api/chat -d '{
"model": "llama2",
"messages": [],
"stream": false
}'
{"model":"llama2","created_at":"2024... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2399/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6270 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6270/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6270/comments | https://api.github.com/repos/ollama/ollama/issues/6270/events | https://github.com/ollama/ollama/issues/6270 | 2,456,912,746 | I_kwDOJ0Z1Ps6ScYNq | 6,270 | ollama does not work continuously | {
"login": "peanutpaste",
"id": 62008312,
"node_id": "MDQ6VXNlcjYyMDA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/62008312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peanutpaste",
"html_url": "https://github.com/peanutpaste",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-08-09T01:12:40 | 2024-09-05T23:01:31 | 2024-09-05T23:01:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After I start the ollama service, it freezes after running for a while. When I access it, it keeps loading but returns no result. I kill the process, and after restarting it can be used for a while (or a certain number of accesses) before it freezes again. Why is this?
My Di... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6270/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2925 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2925/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2925/comments | https://api.github.com/repos/ollama/ollama/issues/2925/events | https://github.com/ollama/ollama/pull/2925 | 2,167,724,263 | PR_kwDOJ0Z1Ps5ooyyH | 2,925 | [ENH]: OpenAI embeddings endpoint | {
"login": "tazarov",
"id": 1157440,
"node_id": "MDQ6VXNlcjExNTc0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tazarov",
"html_url": "https://github.com/tazarov",
"followers_url": "https://api.github.com/users/tazarov/... | [] | closed | false | null | [] | null | 3 | 2024-03-04T20:50:07 | 2024-11-21T09:24:36 | 2024-11-21T09:24:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2925",
"html_url": "https://github.com/ollama/ollama/pull/2925",
"diff_url": "https://github.com/ollama/ollama/pull/2925.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2925.patch",
"merged_at": null
} | Closes #2856
> Note: This is a stacked PR on top of #2924 | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2925/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2925/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8468 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8468/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8468/comments | https://api.github.com/repos/ollama/ollama/issues/8468/events | https://github.com/ollama/ollama/issues/8468 | 2,794,695,100 | I_kwDOJ0Z1Ps6mk6m8 | 8,468 | .create() giving out ResponseError | {
"login": "ShivamSrng",
"id": 67229090,
"node_id": "MDQ6VXNlcjY3MjI5MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/67229090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivamSrng",
"html_url": "https://github.com/ShivamSrng",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2025-01-17T06:35:19 | 2025-01-18T05:01:27 | 2025-01-17T07:26:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**OS**
Windows
**GPU**
Nvidia 3070Ti
NVIDIA-SMI 566.36
Driver Version: 566.36
CUDA Version: 12.7
**CPU**
Intel
**Ollama version**
0.5.7
**Model**
llama3.1:8b
The model file I am writing is:
```
modelfile = """
FROM llama3.1
PARAMETER seed 42
PARAMETER temperatur... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8468/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6195 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6195/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6195/comments | https://api.github.com/repos/ollama/ollama/issues/6195/events | https://github.com/ollama/ollama/issues/6195 | 2,450,174,184 | I_kwDOJ0Z1Ps6SCrDo | 6,195 | When I start the container with http_proxy and https_proxy configured, the ollama service will not start properly | {
"login": "0sengseng0",
"id": 73268510,
"node_id": "MDQ6VXNlcjczMjY4NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/73268510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0sengseng0",
"html_url": "https://github.com/0sengseng0",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-08-06T07:38:45 | 2024-08-23T21:07:02 | 2024-08-23T21:07:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
## start command
docker run -d --gpus=all -v ollama:/root/.ollama -p 31434:11434 -e "OLLAMA_DEBUG=1" -e "CUDA_VISIBLE_DEVICES=0" -e "http_proxy=http://192.168.*.*:11080" -e "https_proxy=http://192.168.*.*:11080" --name ollama ollama/ollama
## operating command
[root@main ~]# docker exec -... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6195/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6087 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6087/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6087/comments | https://api.github.com/repos/ollama/ollama/issues/6087/events | https://github.com/ollama/ollama/issues/6087 | 2,439,021,540 | I_kwDOJ0Z1Ps6RYIPk | 6,087 | hope to add "Telechat" | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-07-31T03:15:51 | 2024-10-21T04:28:13 | 2024-10-21T04:28:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/Tele-AI/Telechat | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6087/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1844 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1844/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1844/comments | https://api.github.com/repos/ollama/ollama/issues/1844/events | https://github.com/ollama/ollama/pull/1844 | 2,069,230,947 | PR_kwDOJ0Z1Ps5jaQfi | 1,844 | Workaround memory limitations | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 2 | 2024-01-07T18:12:47 | 2024-01-08T21:03:26 | 2024-01-08T21:03:25 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1844",
"html_url": "https://github.com/ollama/ollama/pull/1844",
"diff_url": "https://github.com/ollama/ollama/pull/1844.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1844.patch",
"merged_at": null
} | This isn't a proper fix, but until we more completely calculate memory requirements, this seems to avoid crashes when approaching the limit on smaller memory CUDA GPUs. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1844/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3040 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3040/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3040/comments | https://api.github.com/repos/ollama/ollama/issues/3040/events | https://github.com/ollama/ollama/issues/3040 | 2,177,750,063 | I_kwDOJ0Z1Ps6BzdQv | 3,040 | cant Pull a model from Ollama.com | {
"login": "Than281",
"id": 66867089,
"node_id": "MDQ6VXNlcjY2ODY3MDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/66867089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Than281",
"html_url": "https://github.com/Than281",
"followers_url": "https://api.github.com/users/Than28... | [] | closed | false | null | [] | null | 3 | 2024-03-10T14:38:12 | 2024-03-10T16:03:44 | 2024-03-10T16:03:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | when I enter `ollama run llama2-uncensored`or other model tab ,there's always `pull model manifest: file does not exist`
| {
"login": "Than281",
"id": 66867089,
"node_id": "MDQ6VXNlcjY2ODY3MDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/66867089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Than281",
"html_url": "https://github.com/Than281",
"followers_url": "https://api.github.com/users/Than28... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3040/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2969 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2969/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2969/comments | https://api.github.com/repos/ollama/ollama/issues/2969/events | https://github.com/ollama/ollama/pull/2969 | 2,172,810,064 | PR_kwDOJ0Z1Ps5o6Ocn | 2,969 | Hijack the build for msys2 :P | {
"login": "Kreijstal",
"id": 2415206,
"node_id": "MDQ6VXNlcjI0MTUyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2415206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kreijstal",
"html_url": "https://github.com/Kreijstal",
"followers_url": "https://api.github.com/users/Kr... | [] | closed | false | null | [] | null | 1 | 2024-03-07T02:25:53 | 2024-11-21T08:17:51 | 2024-11-21T08:17:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2969",
"html_url": "https://github.com/ollama/ollama/pull/2969",
"diff_url": "https://github.com/ollama/ollama/pull/2969.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2969.patch",
"merged_at": null
} | This makes it so that you can build with MINGW/GCC on Windows rather than with MSVC.
fixes #2887 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2969/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4910 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4910/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4910/comments | https://api.github.com/repos/ollama/ollama/issues/4910/events | https://github.com/ollama/ollama/pull/4910 | 2,340,773,570 | PR_kwDOJ0Z1Ps5x0N5M | 4,910 | fix create model when template detection errors | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-06-07T16:06:19 | 2024-06-07T18:07:40 | 2024-06-07T18:07:39 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4910",
"html_url": "https://github.com/ollama/ollama/pull/4910",
"diff_url": "https://github.com/ollama/ollama/pull/4910.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4910.patch",
"merged_at": "2024-06-07T18:07:39"
} | this change fixes a bug where a template detection error would cause `create` to fail instead of simply not including a template | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4910/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1840 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1840/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1840/comments | https://api.github.com/repos/ollama/ollama/issues/1840/events | https://github.com/ollama/ollama/pull/1840 | 2,069,129,706 | PR_kwDOJ0Z1Ps5jZ8oq | 1,840 | docker-compose added | {
"login": "jinnabaalu",
"id": 11784253,
"node_id": "MDQ6VXNlcjExNzg0MjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/11784253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinnabaalu",
"html_url": "https://github.com/jinnabaalu",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2024-01-07T13:25:28 | 2024-09-05T20:23:17 | 2024-09-05T20:23:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1840",
"html_url": "https://github.com/ollama/ollama/pull/1840",
"diff_url": "https://github.com/ollama/ollama/pull/1840.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1840.patch",
"merged_at": null
} | Created a docker-compose file for running Ollama as a container | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1840/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6679 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6679/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6679/comments | https://api.github.com/repos/ollama/ollama/issues/6679/events | https://github.com/ollama/ollama/issues/6679 | 2,510,663,593 | I_kwDOJ0Z1Ps6Vpa-p | 6,679 | HTTP_PROXY Not Being Used in Model Requests | {
"login": "cmilhaupt",
"id": 15024866,
"node_id": "MDQ6VXNlcjE1MDI0ODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15024866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmilhaupt",
"html_url": "https://github.com/cmilhaupt",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 30 | 2024-09-06T15:15:22 | 2024-09-18T19:53:56 | 2024-09-18T19:53:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The `HTTP_PROXY` and `HTTPS_PROXY` variables aren't being used when requesting the model manifest file and pulling the model itself. The symptom that led me to the extra debugging started when trying to pull a model and getting the following error:
```
$ ollama pull gemma2:27b
pulling man... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6679/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/405 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/405/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/405/comments | https://api.github.com/repos/ollama/ollama/issues/405/events | https://github.com/ollama/ollama/pull/405 | 1,865,604,112 | PR_kwDOJ0Z1Ps5Yt9Xm | 405 | add 34b model type | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-24T17:36:03 | 2023-08-24T17:37:23 | 2023-08-24T17:37:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/405",
"html_url": "https://github.com/ollama/ollama/pull/405",
"diff_url": "https://github.com/ollama/ollama/pull/405.diff",
"patch_url": "https://github.com/ollama/ollama/pull/405.patch",
"merged_at": "2023-08-24T17:37:22"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/405/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8487 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8487/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8487/comments | https://api.github.com/repos/ollama/ollama/issues/8487/events | https://github.com/ollama/ollama/pull/8487 | 2,797,713,644 | PR_kwDOJ0Z1Ps6IRdY9 | 8,487 | Update gpu.md | {
"login": "EndoTheDev",
"id": 186935988,
"node_id": "U_kgDOCyRqtA",
"avatar_url": "https://avatars.githubusercontent.com/u/186935988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EndoTheDev",
"html_url": "https://github.com/EndoTheDev",
"followers_url": "https://api.github.com/users/End... | [] | closed | false | null | [] | null | 2 | 2025-01-19T16:33:00 | 2025-01-20T03:15:14 | 2025-01-20T02:45:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8487",
"html_url": "https://github.com/ollama/ollama/pull/8487",
"diff_url": "https://github.com/ollama/ollama/pull/8487.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8487.patch",
"merged_at": "2025-01-20T02:45:35"
} | changed Laptop to Linux since it does not only affect laptop users but Linux users in general | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8487/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1837 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1837/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1837/comments | https://api.github.com/repos/ollama/ollama/issues/1837/events | https://github.com/ollama/ollama/issues/1837 | 2,069,046,483 | I_kwDOJ0Z1Ps57UyTT | 1,837 | Ollama crashes quite often for Fedora 39 with NVIDIA T1200 Laptop GPU | {
"login": "ilovepumpkin",
"id": 11310645,
"node_id": "MDQ6VXNlcjExMzEwNjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11310645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ilovepumpkin",
"html_url": "https://github.com/ilovepumpkin",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 4 | 2024-01-07T08:53:07 | 2024-01-08T21:42:04 | 2024-01-08T21:42:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
When I use ollama with NVIDIA T1200 Laptop GPU on Fedora 39, it crashes quite often regardless what models I am running. Is there any way to troubleshoot this issue?
Here is the output of `nvidia-smi`
```
+---------------------------------------------------------------------------------------+
| N... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1837/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1831 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1831/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1831/comments | https://api.github.com/repos/ollama/ollama/issues/1831/events | https://github.com/ollama/ollama/issues/1831 | 2,068,888,882 | I_kwDOJ0Z1Ps57UL0y | 1,831 | Make a second docker image for "NVidia GPUs" | {
"login": "frankhaugen",
"id": 13543585,
"node_id": "MDQ6VXNlcjEzNTQzNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13543585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhaugen",
"html_url": "https://github.com/frankhaugen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 6 | 2024-01-06T23:30:48 | 2024-01-09T01:07:35 | 2024-01-09T01:07:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I wanted to setup Ollama, (great project, the CPU variant ran out of the box and I had it doing its thing, deserve a 🥇 ), HOWEVER, I have spent 7 hours flailing with docker, (I am on a Windows 11 machine with a 4070 TI card), to get this to work.
I even had chatGPT, Bing Copilot, Rider AI Assistant and Linux Guru ... | {
"login": "frankhaugen",
"id": 13543585,
"node_id": "MDQ6VXNlcjEzNTQzNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13543585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankhaugen",
"html_url": "https://github.com/frankhaugen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1831/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1187 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1187/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1187/comments | https://api.github.com/repos/ollama/ollama/issues/1187/events | https://github.com/ollama/ollama/issues/1187 | 2,000,135,117 | I_kwDOJ0Z1Ps53N6PN | 1,187 | Scaling/Concurrent Requests | {
"login": "jjsarf",
"id": 34278274,
"node_id": "MDQ6VXNlcjM0Mjc4Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/34278274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jjsarf",
"html_url": "https://github.com/jjsarf",
"followers_url": "https://api.github.com/users/jjsarf/fo... | [] | closed | false | null | [] | null | 4 | 2023-11-18T01:07:51 | 2024-02-20T01:12:13 | 2024-02-20T01:12:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello again. Great project. This may not be an issue, but I did notice that placing a second request while another one is currently processing makes the new request timeout.
Is this by design? This is not the case when using HuggingFace UI >0.4
Thanks. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1187/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2577 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2577/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2577/comments | https://api.github.com/repos/ollama/ollama/issues/2577/events | https://github.com/ollama/ollama/issues/2577 | 2,141,052,285 | I_kwDOJ0Z1Ps5_nd19 | 2,577 | JSON mode should disallow trailing whitespace | {
"login": "joliss",
"id": 524783,
"node_id": "MDQ6VXNlcjUyNDc4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/524783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joliss",
"html_url": "https://github.com/joliss",
"followers_url": "https://api.github.com/users/joliss/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-02-18T15:50:38 | 2024-12-05T00:48:47 | 2024-12-05T00:48:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using `--format json`, small models sometimes try to add more text after the object, resulting in infinite whitespace.
```
$ ollama run qwen:0.5b-chat-v1.5-q4_K_M 'Reply with only an example JSON object' --format json
{
"name": "John",
"age": 30,
"city": "New York"
}
[whitespace con... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2577/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2577/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7235 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7235/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7235/comments | https://api.github.com/repos/ollama/ollama/issues/7235/events | https://github.com/ollama/ollama/issues/7235 | 2,593,792,670 | I_kwDOJ0Z1Ps6amiKe | 7,235 | OpenAI AI Compatiable | {
"login": "tobegit3hub",
"id": 2715000,
"node_id": "MDQ6VXNlcjI3MTUwMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2715000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobegit3hub",
"html_url": "https://github.com/tobegit3hub",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-10-17T06:49:51 | 2024-11-06T11:14:47 | 2024-11-06T11:14:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7235/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2551 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2551/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2551/comments | https://api.github.com/repos/ollama/ollama/issues/2551/events | https://github.com/ollama/ollama/issues/2551 | 2,139,329,765 | I_kwDOJ0Z1Ps5_g5Tl | 2,551 | Can we change where the models are stored in windows | {
"login": "Gyro0o",
"id": 117460296,
"node_id": "U_kgDOBwBNSA",
"avatar_url": "https://avatars.githubusercontent.com/u/117460296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gyro0o",
"html_url": "https://github.com/Gyro0o",
"followers_url": "https://api.github.com/users/Gyro0o/follower... | [] | closed | false | null | [] | null | 30 | 2024-02-16T19:52:24 | 2025-01-27T15:20:38 | 2024-02-17T00:15:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As far as I know the models are automatically downloaded to C:/Users/username/.ollama
But can we change the directory to another one due to storage issues? | {
"login": "Gyro0o",
"id": 117460296,
"node_id": "U_kgDOBwBNSA",
"avatar_url": "https://avatars.githubusercontent.com/u/117460296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gyro0o",
"html_url": "https://github.com/Gyro0o",
"followers_url": "https://api.github.com/users/Gyro0o/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2551/reactions",
"total_count": 32,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2551/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1616 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1616/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1616/comments | https://api.github.com/repos/ollama/ollama/issues/1616/events | https://github.com/ollama/ollama/issues/1616 | 2,049,473,064 | I_kwDOJ0Z1Ps56KHoo | 1,616 | How to skip animation? | {
"login": "kokizzu",
"id": 1061610,
"node_id": "MDQ6VXNlcjEwNjE2MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1061610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kokizzu",
"html_url": "https://github.com/kokizzu",
"followers_url": "https://api.github.com/users/kokizzu/... | [] | closed | false | null | [] | null | 4 | 2023-12-19T21:37:09 | 2023-12-20T23:48:45 | 2023-12-20T23:48:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | for example when i run:
```
ollama run mistral
>>> some prompt
... very slow letter by letter output ... <-- how to make this faster?
``` | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1616/timeline | null | completed | false |