url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/1718 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1718/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1718/comments | https://api.github.com/repos/ollama/ollama/issues/1718/events | https://github.com/ollama/ollama/issues/1718 | 2,056,056,305 | I_kwDOJ0Z1Ps56jO3x | 1,718 | incomplete json in api responses | {
"login": "ralyodio",
"id": 27381,
"node_id": "MDQ6VXNlcjI3Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/27381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ralyodio",
"html_url": "https://github.com/ralyodio",
"followers_url": "https://api.github.com/users/ralyodio/foll... | [] | closed | false | null | [] | null | 2 | 2023-12-26T05:51:29 | 2023-12-26T17:22:36 | 2023-12-26T17:22:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried both the /api/chat and /api/generate endpoints, which seem to produce the same results. However, I'm getting invalid JSON on every response. | {
"login": "ralyodio",
"id": 27381,
"node_id": "MDQ6VXNlcjI3Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/27381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ralyodio",
"html_url": "https://github.com/ralyodio",
"followers_url": "https://api.github.com/users/ralyodio/foll... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1718/timeline | null | completed | false |
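The "invalid json" symptom reported above is consistent with Ollama's documented default behavior: `/api/generate` and `/api/chat` stream a series of newline-delimited JSON objects rather than one JSON document, so the body has to be parsed line by line. A minimal sketch, assuming a local server on the default port and an already-pulled model (the name `llama3` is illustrative):
```python
import json

import requests

# By default /api/generate streams one JSON object per line (NDJSON);
# calling json.loads on the whole body therefore fails as "invalid json".
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?"},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)  # each line is a complete JSON object
    print(chunk.get("response", ""), end="", flush=True)
    if chunk.get("done"):
        break
```
Sending `"stream": false` in the request body instead returns a single JSON object.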
https://api.github.com/repos/ollama/ollama/issues/7626 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7626/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7626/comments | https://api.github.com/repos/ollama/ollama/issues/7626/events | https://github.com/ollama/ollama/issues/7626 | 2,651,476,994 | I_kwDOJ0Z1Ps6eClQC | 7,626 | Role field should not be repeated in streamed response chunks | {
"login": "jackmpcollins",
"id": 6640905,
"node_id": "MDQ6VXNlcjY2NDA5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6640905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackmpcollins",
"html_url": "https://github.com/jackmpcollins",
"followers_url": "https://api.github.... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q... | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-11-12T08:46:18 | 2024-11-18T07:52:26 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The streamed chat-completion response from ollama's openai-compatible API repeats `"role": "assistant"` in all returned chunks. This is different to OpenAI's API which just has this in the first chunk. This breaks compatibility with the `client.beta.chat.completions.stream` helper from the opena... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7626/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7626/timeline | null | null | false |
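For contrast with the behavior reported above: under OpenAI's streaming semantics, `delta.role` is set only on the first chunk and omitted on later ones. A small sketch against Ollama's OpenAI-compatible endpoint that prints the role per chunk (the base URL is Ollama's documented compatibility path; the model name is illustrative):
```python
from openai import OpenAI

# The api_key is required by the client library but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hi"}],
    stream=True,
)
for i, chunk in enumerate(stream):
    delta = chunk.choices[0].delta
    # OpenAI sends role="assistant" on the first chunk only; this issue
    # reports Ollama repeating it on every chunk, which trips strict
    # client-side stream helpers.
    print(i, delta.role, repr(delta.content))
```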
https://api.github.com/repos/ollama/ollama/issues/3868 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3868/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3868/comments | https://api.github.com/repos/ollama/ollama/issues/3868/events | https://github.com/ollama/ollama/issues/3868 | 2,260,448,866 | I_kwDOJ0Z1Ps6Gu7Zi | 3,868 | Hope to get it out on the shelves llama3-Chinese | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2024-04-24T06:23:11 | 2024-07-20T14:29:51 | 2024-07-20T14:29:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 比如 https://github.com/UnicomAI/Unichat-llama3-Chinese | {
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enr... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3868/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4333 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4333/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4333/comments | https://api.github.com/repos/ollama/ollama/issues/4333/events | https://github.com/ollama/ollama/issues/4333 | 2,290,617,478 | I_kwDOJ0Z1Ps6IiAyG | 4,333 | `segmentation fault` when running `codellama:34b` on A100 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-05-11T02:48:28 | 2024-07-22T18:05:25 | 2024-07-22T18:05:25 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
CLI:
```
$ ollama run codellama:34b
Error: llama runner process has terminated: signal: segmentation fault
```
Logs:
```
May 11 02:47:28 gpu ollama[27286]: time=2024-05-11T02:47:28.033Z level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.a... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4333/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4333/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8443 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8443/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8443/comments | https://api.github.com/repos/ollama/ollama/issues/8443/events | https://github.com/ollama/ollama/pull/8443 | 2,790,742,559 | PR_kwDOJ0Z1Ps6H5iUB | 8,443 | llama/llama-mmap: fix missing include | {
"login": "wgottwalt",
"id": 12194808,
"node_id": "MDQ6VXNlcjEyMTk0ODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/12194808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgottwalt",
"html_url": "https://github.com/wgottwalt",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 0 | 2025-01-15T20:04:49 | 2025-01-15T20:04:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8443",
"html_url": "https://github.com/ollama/ollama/pull/8443",
"diff_url": "https://github.com/ollama/ollama/pull/8443.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8443.patch",
"merged_at": null
} | Proper memory and vector headers (like in GCC 15.1) do not provide the uint32_t type, so cstdint is required.
llama-mmap.h:55:20: error: ‘uint32_t’ has not been declared
55 | void write_u32(uint32_t val) const; | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8443/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7597 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7597/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7597/comments | https://api.github.com/repos/ollama/ollama/issues/7597/events | https://github.com/ollama/ollama/issues/7597 | 2,647,257,116 | I_kwDOJ0Z1Ps6dyfAc | 7,597 | detect missing GPU runners and don't report incorrect GPU info/logs | {
"login": "kaleocheng",
"id": 7939352,
"node_id": "MDQ6VXNlcjc5MzkzNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7939352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaleocheng",
"html_url": "https://github.com/kaleocheng",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 20 | 2024-11-10T13:41:47 | 2024-11-17T20:18:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
$ ollama -v
ollama version is 0.4.1
$ ollama run llama3.2-vision:latest
$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
llama3.2-vision:latest 38107a0cd119 12 GB 100% GPU 2 minutes from now
```
from the logs it als... | {
"login": "kaleocheng",
"id": 7939352,
"node_id": "MDQ6VXNlcjc5MzkzNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7939352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaleocheng",
"html_url": "https://github.com/kaleocheng",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7597/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7597/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/4642 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4642/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4642/comments | https://api.github.com/repos/ollama/ollama/issues/4642/events | https://github.com/ollama/ollama/pull/4642 | 2,317,404,366 | PR_kwDOJ0Z1Ps5wko13 | 4,642 | docs(gpu): Add workaround for nvidia GPU unavailable | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [] | closed | false | null | [] | null | 2 | 2024-05-26T02:50:12 | 2024-06-06T03:51:52 | 2024-06-06T03:51:51 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4642",
"html_url": "https://github.com/ollama/ollama/pull/4642",
"diff_url": "https://github.com/ollama/ollama/pull/4642.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4642.patch",
"merged_at": null
} | Docs:
- Update docs to add workaround for Nvidia GPU becoming unavailable after a period of time idle.
- Minor: Markdown formatting fixes.
I see people logging issues and asking for help on Discord for this quite often; this workaround has had good success in fixing the issue for many folks.
e.g. https://githu... | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4642/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8170 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8170/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8170/comments | https://api.github.com/repos/ollama/ollama/issues/8170/events | https://github.com/ollama/ollama/issues/8170 | 2,749,841,385 | I_kwDOJ0Z1Ps6j5z_p | 8,170 | ollama and with_structured_output fails for new langchain-ollama==0.2.2 | {
"login": "nomisto",
"id": 28439912,
"node_id": "MDQ6VXNlcjI4NDM5OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nomisto",
"html_url": "https://github.com/nomisto",
"followers_url": "https://api.github.com/users/nomist... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 7 | 2024-12-19T10:16:24 | 2025-01-24T10:28:31 | 2024-12-20T21:45:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
pip install langchain-ollama==0.2.1 pydantic
```
and
```python
from langchain_ollama import ChatOllama
from typing import Optional
from pydantic import BaseModel, Field
class Person(BaseModel):
    name: str
    age: int

llm = ChatOllama(
    model="llama3.1:latest",
    bas... | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8170/timeline | null | completed | false |
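Since the snippet in the issue body is cut off by the dataset viewer, here is a self-contained sketch of the `with_structured_output` pattern it exercises (model name, base URL, and prompt are illustrative):
```python
from langchain_ollama import ChatOllama
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatOllama(model="llama3.1:latest", base_url="http://localhost:11434")

# with_structured_output constrains the model's reply to the Person schema
# and parses it into a Person instance.
structured_llm = llm.with_structured_output(Person)
print(structured_llm.invoke("Describe a person named Anna who is 30."))
```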
https://api.github.com/repos/ollama/ollama/issues/8239 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8239/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8239/comments | https://api.github.com/repos/ollama/ollama/issues/8239/events | https://github.com/ollama/ollama/issues/8239 | 2,758,733,492 | I_kwDOJ0Z1Ps6kbu60 | 8,239 | GPU is not being used on macOS when launching from CLI | {
"login": "Bhavya031",
"id": 98141026,
"node_id": "U_kgDOBdmDYg",
"avatar_url": "https://avatars.githubusercontent.com/u/98141026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bhavya031",
"html_url": "https://github.com/Bhavya031",
"followers_url": "https://api.github.com/users/Bhavya03... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2024-12-25T11:04:31 | 2024-12-27T11:41:56 | 2024-12-27T11:41:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
On macOS, if you use Ollama, it utilizes the GPU. However, when launching via CLI, it does not. I searched for GPU flags but couldn’t find any. We need default GPU support for macOS when using the CLI.
https://github.com/user-attachments/assets/26fd9f8a-94f8-458f-8482-bbb96ab40697
### OS... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8239/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5275 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5275/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5275/comments | https://api.github.com/repos/ollama/ollama/issues/5275/events | https://github.com/ollama/ollama/issues/5275 | 2,373,009,336 | I_kwDOJ0Z1Ps6NcT-4 | 5,275 | ROCm on WSL | {
"login": "justinkb",
"id": 218024,
"node_id": "MDQ6VXNlcjIxODAyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/218024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinkb",
"html_url": "https://github.com/justinkb",
"followers_url": "https://api.github.com/users/justink... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 15 | 2024-06-25T15:37:46 | 2025-01-23T23:15:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Recently, AMD released preview drivers for Windows that, alongside userspace packages for WSL, enable one to use ROCm through WSL. Ollama's detection of AMD GPUs on Linux, however, uses the presence of loaded amdgpu drivers and other sysfs information to determine various properties of the GPU. These are not available with thi... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5275/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5275/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5855 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5855/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5855/comments | https://api.github.com/repos/ollama/ollama/issues/5855/events | https://github.com/ollama/ollama/pull/5855 | 2,423,245,009 | PR_kwDOJ0Z1Ps52HIZc | 5,855 | Remove no longer supported max vram var | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-22T16:09:01 | 2024-07-22T17:36:30 | 2024-07-22T17:35:29 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5855",
"html_url": "https://github.com/ollama/ollama/pull/5855",
"diff_url": "https://github.com/ollama/ollama/pull/5855.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5855.patch",
"merged_at": "2024-07-22T17:35:29"
} | The OLLAMA_MAX_VRAM env var was a temporary workaround for OOM scenarios. With Concurrency this was no longer wired up, and the simplistic value doesn't map to multi-GPU setups. Users can still set `num_gpu` to limit memory usage to avoid OOM if we get our predictions wrong.
Fixes #5754 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5855/timeline | null | null | true |
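The `num_gpu` knob mentioned above as the replacement caps how many model layers are offloaded to the GPU and can be set per request through the `options` object. A minimal sketch (model name and layer count are illustrative):
```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Hello",
        "stream": False,
        # Offload at most 20 layers to the GPU; lower values trade speed
        # for less VRAM, covering the old OLLAMA_MAX_VRAM use case.
        "options": {"num_gpu": 20},
    },
)
print(resp.json()["response"])
```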
https://api.github.com/repos/ollama/ollama/issues/6408 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6408/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6408/comments | https://api.github.com/repos/ollama/ollama/issues/6408/events | https://github.com/ollama/ollama/issues/6408 | 2,472,334,391 | I_kwDOJ0Z1Ps6TXNQ3 | 6,408 | 404 POST "/api/chat" | {
"login": "turndown",
"id": 57825084,
"node_id": "MDQ6VXNlcjU3ODI1MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/57825084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turndown",
"html_url": "https://github.com/turndown",
"followers_url": "https://api.github.com/users/tur... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 12 | 2024-08-19T02:41:49 | 2024-11-05T11:02:35 | 2024-09-02T03:05:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
At first, it started running normally, but after a while, it reported 404, and it can't run any model.
Can you help me solve it? Thanks.
Installed by: curl -fsSL https://ollama.com/install.sh
**log below:**
Aug 19 10:25:57 ecs-lcdsj ollama[1026502]: llm_load_print_meta: LF token = 148848 'Ä... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6408/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/179 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/179/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/179/comments | https://api.github.com/repos/ollama/ollama/issues/179/events | https://github.com/ollama/ollama/pull/179 | 1,816,921,968 | PR_kwDOJ0Z1Ps5WJ69E | 179 | change push to chunked uploads from monolithic | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-22T23:16:24 | 2023-07-23T00:31:27 | 2023-07-23T00:31:26 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/179",
"html_url": "https://github.com/ollama/ollama/pull/179",
"diff_url": "https://github.com/ollama/ollama/pull/179.diff",
"patch_url": "https://github.com/ollama/ollama/pull/179.patch",
"merged_at": "2023-07-23T00:31:26"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/179/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7254 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7254/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7254/comments | https://api.github.com/repos/ollama/ollama/issues/7254/events | https://github.com/ollama/ollama/issues/7254 | 2,597,953,873 | I_kwDOJ0Z1Ps6a2aFR | 7,254 | Support directly running GGUF files without importing | {
"login": "ahizap",
"id": 67712951,
"node_id": "MDQ6VXNlcjY3NzEyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/67712951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahizap",
"html_url": "https://github.com/ahizap",
"followers_url": "https://api.github.com/users/ahizap/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-10-18T16:34:56 | 2024-12-20T04:40:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In llama.cpp we can directly run models with `llama-cli -m your_model.gguf` without having to import the model. It would be great if we could do the same with ollama. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7254/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2839 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2839/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2839/comments | https://api.github.com/repos/ollama/ollama/issues/2839/events | https://github.com/ollama/ollama/issues/2839 | 2,161,701,266 | I_kwDOJ0Z1Ps6A2PGS | 2,839 | keeps loading but never success | {
"login": "xudong2019",
"id": 16278392,
"node_id": "MDQ6VXNlcjE2Mjc4Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/16278392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xudong2019",
"html_url": "https://github.com/xudong2019",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6947643302,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnhyfpg... | open | false | null | [] | null | 3 | 2024-02-29T16:55:00 | 2024-11-06T18:00:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ollama run renxin_query_type_classify "hello"

I successfully generated a model from a gguf file; however, it keeps loading but never succeeds... Any idea what's happening?
FROM ./model_query_type_classify.gguf
PARAMETER temp... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2839/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1474 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1474/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1474/comments | https://api.github.com/repos/ollama/ollama/issues/1474/events | https://github.com/ollama/ollama/issues/1474 | 2,036,649,047 | I_kwDOJ0Z1Ps55ZMxX | 1,474 | subprocess or pexpect rather than the API | {
"login": "MikeyBeez",
"id": 14264000,
"node_id": "MDQ6VXNlcjE0MjY0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14264000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeyBeez",
"html_url": "https://github.com/MikeyBeez",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2023-12-11T22:34:32 | 2023-12-11T22:49:56 | 2023-12-11T22:46:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I find that Ollama is fast enough, but the API is very slow. I've been trying to use something like subprocess. The program runs, but waiting for the output is torturously slow:
import subprocess
def run_ollama(model_name):
    # Build the Ollama command
    ollama_command = f"ollama run {model_name}"
    ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1474/timeline | null | not_planned | false |
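The body's snippet is truncated, but the slowness reported here is often due to driving the interactive `ollama run` REPL through a pipe. Passing the prompt as a CLI argument makes the command run one completion and exit; a sketch, assuming the binary is on PATH and the model is already pulled:
```python
import subprocess


def run_ollama(model_name: str, prompt: str) -> str:
    # With the prompt given as an argument, "ollama run" performs a single
    # non-interactive completion instead of opening the REPL.
    result = subprocess.run(
        ["ollama", "run", model_name, prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


print(run_ollama("llama3", "Why is the sky blue?"))
```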
https://api.github.com/repos/ollama/ollama/issues/1256 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1256/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1256/comments | https://api.github.com/repos/ollama/ollama/issues/1256/events | https://github.com/ollama/ollama/pull/1256 | 2,008,391,507 | PR_kwDOJ0Z1Ps5gPQiK | 1,256 | Implement tensor_split support in modelfile | {
"login": "Lissanro",
"id": 46057271,
"node_id": "MDQ6VXNlcjQ2MDU3Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/46057271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lissanro",
"html_url": "https://github.com/Lissanro",
"followers_url": "https://api.github.com/users/Lis... | [] | closed | false | null | [] | null | 7 | 2023-11-23T14:58:47 | 2024-04-08T17:15:18 | 2024-01-25T22:13:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1256",
"html_url": "https://github.com/ollama/ollama/pull/1256",
"diff_url": "https://github.com/ollama/ollama/pull/1256.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1256.patch",
"merged_at": null
} | This patch allows specifying a string for --tensor-split in a modelfile, for example:
PARAMETER tensor_split "25,75"
This allows adjusting VRAM allocation for each model, for example, to optimize VRAM usage on each GPU or to better accommodate models which need more memory for context on the main GPU. | {
"login": "Lissanro",
"id": 46057271,
"node_id": "MDQ6VXNlcjQ2MDU3Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/46057271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lissanro",
"html_url": "https://github.com/Lissanro",
"followers_url": "https://api.github.com/users/Lis... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1256/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2307 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2307/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2307/comments | https://api.github.com/repos/ollama/ollama/issues/2307/events | https://github.com/ollama/ollama/pull/2307 | 2,112,042,563 | PR_kwDOJ0Z1Ps5lrOJr | 2,307 | Fix help string for stop parameter | {
"login": "gaardhus",
"id": 46934916,
"node_id": "MDQ6VXNlcjQ2OTM0OTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/46934916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaardhus",
"html_url": "https://github.com/gaardhus",
"followers_url": "https://api.github.com/users/gaa... | [] | closed | false | null | [] | null | 1 | 2024-02-01T09:47:24 | 2024-05-07T23:48:35 | 2024-05-07T23:48:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2307",
"html_url": "https://github.com/ollama/ollama/pull/2307",
"diff_url": "https://github.com/ollama/ollama/pull/2307.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2307.patch",
"merged_at": "2024-05-07T23:48:35"
} | Changed the help prompt for setting the stop parameter, since quotes or commas are otherwise included in the stop token:
/set parameter stop "?", "!" # Invalid
/set parameter stop ? ! # Valid | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2307/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4368 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4368/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4368/comments | https://api.github.com/repos/ollama/ollama/issues/4368/events | https://github.com/ollama/ollama/pull/4368 | 2,291,084,965 | PR_kwDOJ0Z1Ps5vKx0b | 4,368 | Fix OpenAI `finish_reason` values when empty | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-05-11T22:31:27 | 2024-05-11T22:31:42 | 2024-05-11T22:31:41 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4368",
"html_url": "https://github.com/ollama/ollama/pull/4368",
"diff_url": "https://github.com/ollama/ollama/pull/4368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4368.patch",
"merged_at": "2024-05-11T22:31:41"
} | Fixes https://github.com/ollama/ollama/issues/4357 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4368/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4368/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4127 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4127/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4127/comments | https://api.github.com/repos/ollama/ollama/issues/4127/events | https://github.com/ollama/ollama/issues/4127 | 2,277,792,321 | I_kwDOJ0Z1Ps6HxFpB | 4,127 | Add LLAVA++ model | {
"login": "ddpasa",
"id": 112642920,
"node_id": "U_kgDOBrbLaA",
"avatar_url": "https://avatars.githubusercontent.com/u/112642920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddpasa",
"html_url": "https://github.com/ddpasa",
"followers_url": "https://api.github.com/users/ddpasa/follower... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 2 | 2024-05-03T14:24:21 | 2024-05-21T21:48:43 | 2024-05-21T21:48:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There is a new version of the Amazing LLava model that uses Llama 3 or Phi-3:
https://huggingface.co/collections/MBZUAI/llava-llama-3-and-phi-3-mini-662b38b972e3e3e4d8f821bb
https://github.com/mbzuai-oryx/LLaVA-pp | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4127/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4127/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7897 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7897/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7897/comments | https://api.github.com/repos/ollama/ollama/issues/7897/events | https://github.com/ollama/ollama/issues/7897 | 2,707,771,502 | I_kwDOJ0Z1Ps6hZVBu | 7,897 | Audio to audio models | {
"login": "mohammadaminyza",
"id": 73334272,
"node_id": "MDQ6VXNlcjczMzM0Mjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/73334272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohammadaminyza",
"html_url": "https://github.com/mohammadaminyza",
"followers_url": "https://api... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-11-30T18:31:44 | 2024-11-30T18:31:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, any plan to add audio-to-audio support? There are a couple of open-source models which provide that | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7897/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6729 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6729/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6729/comments | https://api.github.com/repos/ollama/ollama/issues/6729/events | https://github.com/ollama/ollama/pull/6729 | 2,516,631,081 | PR_kwDOJ0Z1Ps56_i00 | 6,729 | Feature: Add Support for Distributed Inferencing | {
"login": "ecyht2",
"id": 94816144,
"node_id": "U_kgDOBabHkA",
"avatar_url": "https://avatars.githubusercontent.com/u/94816144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ecyht2",
"html_url": "https://github.com/ecyht2",
"followers_url": "https://api.github.com/users/ecyht2/followers"... | [] | open | false | null | [] | null | 20 | 2024-09-10T14:24:43 | 2025-01-24T23:15:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6729",
"html_url": "https://github.com/ollama/ollama/pull/6729",
"diff_url": "https://github.com/ollama/ollama/pull/6729.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6729.patch",
"merged_at": null
} | This feature adds support for llama.cpp RPC. This allows for distributed inferencing on different devices.
This Pull Request aims to implement #4643. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6729/reactions",
"total_count": 47,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 27,
"rocket": 20,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6729/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1444 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1444/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1444/comments | https://api.github.com/repos/ollama/ollama/issues/1444/events | https://github.com/ollama/ollama/pull/1444 | 2,033,527,333 | PR_kwDOJ0Z1Ps5hkgjb | 1,444 | Added mention of the NOPRUNE env var | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [] | closed | false | null | [] | null | 0 | 2023-12-09T01:38:51 | 2023-12-12T01:15:00 | 2023-12-12T01:15:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1444",
"html_url": "https://github.com/ollama/ollama/pull/1444",
"diff_url": "https://github.com/ollama/ollama/pull/1444.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1444.patch",
"merged_at": null
} | OLLAMA_NOPRUNE will prevent the pruning process from running, but it isn't mentioned anywhere outside of the code and a merged PR. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1444/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1027 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1027/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1027/comments | https://api.github.com/repos/ollama/ollama/issues/1027/events | https://github.com/ollama/ollama/issues/1027 | 1,980,781,895 | I_kwDOJ0Z1Ps52EFVH | 1,027 | How to properly format Advanced Parameters / options in API calls? | {
"login": "tob-har",
"id": 32613633,
"node_id": "MDQ6VXNlcjMyNjEzNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/32613633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tob-har",
"html_url": "https://github.com/tob-har",
"followers_url": "https://api.github.com/users/tob-ha... | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 3 | 2023-11-07T08:10:16 | 2023-11-09T00:44:38 | 2023-11-09T00:44:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The API documentation gives a proper example of how to use
`POST /api/generate`
But how to properly format the JSON object to use Advanced Parameters?
Especially `options` and `system`.
I tried to request the following via `POST /api/generate`.
Everything behaves as expected, e.g. stream, but options is not working:
... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1027/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1027/timeline | null | completed | false |
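For reference, the layout the API docs prescribe for the question above: `system` is a top-level string, while sampling parameters such as `temperature` and `num_ctx` must be nested under `options`. A sketch with illustrative values:
```python
import requests

payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "system": "You are a concise assistant.",  # top-level field
    "stream": False,
    # Advanced parameters belong inside "options"; placed at the top
    # level they are not applied.
    "options": {
        "temperature": 0.2,
        "num_ctx": 4096,
    },
}
resp = requests.post("http://localhost:11434/api/generate", json=payload)
print(resp.json()["response"])
```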
https://api.github.com/repos/ollama/ollama/issues/4392 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4392/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4392/comments | https://api.github.com/repos/ollama/ollama/issues/4392/events | https://github.com/ollama/ollama/issues/4392 | 2,292,163,129 | I_kwDOJ0Z1Ps6In6I5 | 4,392 | Use GTT memory in case of iGPUs to run the model efficiently. | {
"login": "CoolnsX",
"id": 76195824,
"node_id": "MDQ6VXNlcjc2MTk1ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/76195824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CoolnsX",
"html_url": "https://github.com/CoolnsX",
"followers_url": "https://api.github.com/users/Coolns... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-05-13T08:28:01 | 2024-11-02T18:48:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Models running in system memory on the CPU work perfectly fine.
But when using integrated GPUs, which have limited VRAM locked by vendors, the model crashes due to "low vram memory".
They have a feature called GTT memory on Linux, and Shared Memory on Windows, which they can use whenever their VRAM capacity is nearly full. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4392/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7457 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7457/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7457/comments | https://api.github.com/repos/ollama/ollama/issues/7457/events | https://github.com/ollama/ollama/issues/7457 | 2,627,853,133 | I_kwDOJ0Z1Ps6codtN | 7,457 | Adding avx2+avx512 to cuda runner in new ollama code | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 0 | 2024-10-31T21:19:35 | 2024-12-10T17:47:22 | 2024-12-10T17:47:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In the old code I added AVX2+AVX512 in gen_windows.ps1 by simply adding DGGML_AVX2=on and DGGML_AVX512=on after the DGGML_AVX=on line in the CUDA build function.
It added a fairly decent performance boost.
I have added AVX512 to the CPU build, but in the new code I cannot seem to find where to properly add it in the make files t... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7457/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7457/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3146 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3146/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3146/comments | https://api.github.com/repos/ollama/ollama/issues/3146/events | https://github.com/ollama/ollama/pull/3146 | 2,187,016,061 | PR_kwDOJ0Z1Ps5pqosz | 3,146 | server: replace blob prefix separator from ':' to '-' | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-03-14T18:32:54 | 2024-03-25T16:22:07 | 2024-03-15T03:18:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3146",
"html_url": "https://github.com/ollama/ollama/pull/3146",
"diff_url": "https://github.com/ollama/ollama/pull/3146.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3146.patch",
"merged_at": "2024-03-15T03:18:06"
} | This fixes issues where blob file names that contain ':' characters are rejected by file systems that do not support them. | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3146/timeline | null | null | true |
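The separator change in this PR amounts to a simple mapping of registry digests to on-disk blob names. A toy illustration (the digest value is made up):
```python
# Registry digests use "sha256:<hex>", but ':' is not allowed in file
# names on some file systems (e.g. Windows), so blobs are stored with
# '-' as the separator instead.
digest = "sha256:0123abcd"  # illustrative digest, not a real blob hash
blob_name = digest.replace(":", "-", 1)
print(blob_name)  # -> sha256-0123abcd
```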
https://api.github.com/repos/ollama/ollama/issues/6653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6653/comments | https://api.github.com/repos/ollama/ollama/issues/6653/events | https://github.com/ollama/ollama/issues/6653 | 2,507,275,752 | I_kwDOJ0Z1Ps6Vcf3o | 6,653 | Loading a smaller context model after a bigger model is loaded | {
"login": "Madhav-Gohel",
"id": 76510494,
"node_id": "MDQ6VXNlcjc2NTEwNDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/76510494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Madhav-Gohel",
"html_url": "https://github.com/Madhav-Gohel",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-09-05T09:32:46 | 2024-09-05T09:32:46 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
## Hardware
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6653/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2037 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2037/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2037/comments | https://api.github.com/repos/ollama/ollama/issues/2037/events | https://github.com/ollama/ollama/pull/2037 | 2,087,274,232 | PR_kwDOJ0Z1Ps5kXvPO | 2,037 | fix: pasting slash commands | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 3 | 2024-01-18T01:00:22 | 2025-01-15T02:54:49 | 2025-01-15T02:54:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2037",
"html_url": "https://github.com/ollama/ollama/pull/2037",
"diff_url": "https://github.com/ollama/ollama/pull/2037.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2037.patch",
"merged_at": null
} | there is a bug in paste where the pasted content is written directly to the prompt buffer instead of being processed. for most content, this is fine but slash commands are processed line-by-line.
aggregate status updates, e.g. "Set 'verbose' mode.", "Set system message.", to the end for aesthetics. the status messag... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2037/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2037/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8050 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8050/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8050/comments | https://api.github.com/repos/ollama/ollama/issues/8050/events | https://github.com/ollama/ollama/issues/8050 | 2,733,416,718 | I_kwDOJ0Z1Ps6i7KEO | 8,050 | Ollama behind proxy can't pull new models anymore | {
"login": "the-silversurver",
"id": 135591792,
"node_id": "U_kgDOCBT3cA",
"avatar_url": "https://avatars.githubusercontent.com/u/135591792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/the-silversurver",
"html_url": "https://github.com/the-silversurver",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 12 | 2024-12-11T16:21:38 | 2025-01-13T01:38:03 | 2025-01-13T01:38:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi there,
I am using Ollama on different machines (Ubuntu inside a docker container together with open web ui and on a Mac standalone) within a university that enforces the use of a proxy to access the internet.
On both systems, the proxy is correctly configured and Ollama worked with it... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8050/timeline | null | completed | false |
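For proxy problems like #8050 above, the commonly documented route is to hand the proxy to the server process through its environment. A sketch for a systemd-managed Linux install; the proxy URL is a placeholder.

```bash
# Add the proxy to the ollama unit, then restart it.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="HTTPS_PROXY=http://proxy.example.com:3128"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```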
https://api.github.com/repos/ollama/ollama/issues/5134 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5134/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5134/comments | https://api.github.com/repos/ollama/ollama/issues/5134/events | https://github.com/ollama/ollama/issues/5134 | 2,361,410,869 | I_kwDOJ0Z1Ps6MwEU1 | 5,134 | API interface /api/generate: I need to make sure that every answer does not draw on the previous record. How can I do this? | {
"login": "mingLvft",
"id": 50644675,
"node_id": "MDQ6VXNlcjUwNjQ0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/50644675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingLvft",
"html_url": "https://github.com/mingLvft",
"followers_url": "https://api.github.com/users/min... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-06-19T06:09:53 | 2024-11-20T20:11:36 | 2024-06-27T21:33:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | API interface /api/generate: I need to make sure that every answer does not draw on the previous record. How can I do this? (See the example after this row.) | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5134/timeline | null | completed | false |
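On #5134 above: each /api/generate call is independent unless the caller feeds the returned context value back into the next request, so simply omitting it yields answers that ignore earlier turns. A sketch, with the model name as a placeholder:

```bash
# Two independent requests: no "context" field is sent, so the second
# question is answered with no memory of the first.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3", "prompt": "What is the capital of France?", "stream": false
}'
curl http://localhost:11434/api/generate -d '{
  "model": "llama3", "prompt": "And its population?", "stream": false
}'
```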
https://api.github.com/repos/ollama/ollama/issues/7810 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7810/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7810/comments | https://api.github.com/repos/ollama/ollama/issues/7810/events | https://github.com/ollama/ollama/issues/7810 | 2,685,926,172 | I_kwDOJ0Z1Ps6gF_sc | 7,810 | Could anyone help me? Something is not working. Using a special GPU | {
"login": "wangzd0209",
"id": 99313728,
"node_id": "U_kgDOBetoQA",
"avatar_url": "https://avatars.githubusercontent.com/u/99313728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangzd0209",
"html_url": "https://github.com/wangzd0209",
"followers_url": "https://api.github.com/users/wangz... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-11-23T12:07:01 | 2024-12-01T02:19:24 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I follow the instructions to install Ollama from source code, I cannot finish gen_linux.sh.
There is this error information:
`CMake Error at ggml/src/CMakeLists.txt:440 (find_package):
By not providing "Findhip.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a packa... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7810/timeline | null | null | false |
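The Findhip.cmake failure in #7810 above usually means CMake cannot see the ROCm installation. A hedged sketch of pointing it there; /opt/rocm is the conventional default and an assumption about this machine.

```bash
# Let CMake discover the hip package shipped with ROCm.
export CMAKE_PREFIX_PATH=/opt/rocm:$CMAKE_PREFIX_PATH
cmake -B build && cmake --build build
```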
https://api.github.com/repos/ollama/ollama/issues/3112 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3112/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3112/comments | https://api.github.com/repos/ollama/ollama/issues/3112/events | https://github.com/ollama/ollama/issues/3112 | 2,184,294,406 | I_kwDOJ0Z1Ps6CMbAG | 3,112 | Windows Error:pull model manifest return wsarecv: An existing connection was forcibly closed by the remote host. | {
"login": "heimu-liu",
"id": 102661308,
"node_id": "U_kgDOBh58vA",
"avatar_url": "https://avatars.githubusercontent.com/u/102661308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heimu-liu",
"html_url": "https://github.com/heimu-liu",
"followers_url": "https://api.github.com/users/heimu-... | [
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": ... | closed | false | null | [] | null | 11 | 2024-03-13T15:26:36 | 2024-04-23T05:27:48 | 2024-03-29T03:25:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I can't download the model:
[app.log](https://github.com/ollama/ollama/files/14589977/app.log)
[server.log](https://github.com/ollama/ollama/files/14589978/server.log)
`PS C:\Users\heimu\AppData\Local\Ollama> ollama pull llama2
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=A-QmGZFS... | {
"login": "heimu-liu",
"id": 102661308,
"node_id": "U_kgDOBh58vA",
"avatar_url": "https://avatars.githubusercontent.com/u/102661308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heimu-liu",
"html_url": "https://github.com/heimu-liu",
"followers_url": "https://api.github.com/users/heimu-... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3112/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2061 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2061/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2061/comments | https://api.github.com/repos/ollama/ollama/issues/2061/events | https://github.com/ollama/ollama/pull/2061 | 2,089,359,047 | PR_kwDOJ0Z1Ps5ke4Nm | 2,061 | ci: use stubs libraries | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-19T00:55:16 | 2024-01-19T01:17:47 | 2024-01-19T01:17:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2061",
"html_url": "https://github.com/ollama/ollama/pull/2061",
"diff_url": "https://github.com/ollama/ollama/pull/2061.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2061.patch",
"merged_at": null
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2061/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1523 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1523/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1523/comments | https://api.github.com/repos/ollama/ollama/issues/1523/events | https://github.com/ollama/ollama/issues/1523 | 2,041,859,796 | I_kwDOJ0Z1Ps55tE7U | 1,523 | docs: generate chat response `loadDuration` missing | {
"login": "mthongvanh",
"id": 4961248,
"node_id": "MDQ6VXNlcjQ5NjEyNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4961248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mthongvanh",
"html_url": "https://github.com/mthongvanh",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 1 | 2023-12-14T14:49:34 | 2023-12-14T17:15:51 | 2023-12-14T17:15:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In the documentation https://github.com/jmorganca/ollama/blob/main/docs/api.md#response-6 `loadDuration` is listed as a return value but does not get returned by the API
<img width="981" alt="image" src="https://github.com/jmorganca/ollama/assets/4961248/bb0dbc37-c2cf-48ff-8c8d-be2ffcfa5115">
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1523/timeline | null | completed | false |
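A quick way to reproduce the #1523 report above is to request a non-streamed completion and inspect which timing fields the final object actually carries. Sketch, assuming jq is installed and the placeholder model is pulled:

```bash
# Check whether load_duration appears alongside the other durations.
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "hi"}],
  "stream": false
}' | jq '{total_duration, load_duration, eval_duration}'
```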
https://api.github.com/repos/ollama/ollama/issues/103 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/103/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/103/comments | https://api.github.com/repos/ollama/ollama/issues/103/events | https://github.com/ollama/ollama/pull/103 | 1,810,624,762 | PR_kwDOJ0Z1Ps5V0lpv | 103 | website content and design update | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [] | closed | false | null | [] | null | 1 | 2023-07-18T19:58:33 | 2023-07-23T10:25:30 | 2023-07-18T20:18:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/103",
"html_url": "https://github.com/ollama/ollama/pull/103",
"diff_url": "https://github.com/ollama/ollama/pull/103.diff",
"patch_url": "https://github.com/ollama/ollama/pull/103.patch",
"merged_at": "2023-07-18T20:18:04"
} | null | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/103/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5681 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5681/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5681/comments | https://api.github.com/repos/ollama/ollama/issues/5681/events | https://github.com/ollama/ollama/pull/5681 | 2,407,157,044 | PR_kwDOJ0Z1Ps51Tgif | 5,681 | Adding instructions when user doesn't have sudo privileges | {
"login": "Ivanknmk",
"id": 1672248,
"node_id": "MDQ6VXNlcjE2NzIyNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1672248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ivanknmk",
"html_url": "https://github.com/Ivanknmk",
"followers_url": "https://api.github.com/users/Ivank... | [] | closed | false | null | [] | null | 2 | 2024-07-13T20:38:44 | 2024-11-25T00:02:00 | 2024-11-25T00:02:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5681",
"html_url": "https://github.com/ollama/ollama/pull/5681",
"diff_url": "https://github.com/ollama/ollama/pull/5681.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5681.patch",
"merged_at": null
} | Adding instructions when user doesn't have sudo privileges according to https://github.com/ollama/ollama/issues/2111 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5681/timeline | null | null | true |
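For the sudo-less scenario behind #5681 above, a user-local install works because the Linux artifact is a plain archive. A sketch assuming an amd64 machine, ~/.local/bin on PATH, and the current tarball layout; older releases shipped a bare binary instead.

```bash
# Root-free install: unpack the release under the user's home.
mkdir -p ~/.local
curl -L https://ollama.com/download/ollama-linux-amd64.tgz | tar -xz -C ~/.local
~/.local/bin/ollama serve &   # server runs as the current user
```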
https://api.github.com/repos/ollama/ollama/issues/6287 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6287/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6287/comments | https://api.github.com/repos/ollama/ollama/issues/6287/events | https://github.com/ollama/ollama/issues/6287 | 2,458,234,863 | I_kwDOJ0Z1Ps6Sha_v | 6,287 | UHD intel GPU Accelerate | {
"login": "jomardyan",
"id": 18527406,
"node_id": "MDQ6VXNlcjE4NTI3NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18527406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jomardyan",
"html_url": "https://github.com/jomardyan",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677491450,
"node_id": ... | closed | false | null | [] | null | 2 | 2024-08-09T16:03:25 | 2024-08-28T02:52:37 | 2024-08-09T18:36:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Why does Ollama use the CPU instead of utilizing the Intel UHD integrated GPU?
(Computer without an Nvidia GPU)
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6287/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/199 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/199/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/199/comments | https://api.github.com/repos/ollama/ollama/issues/199/events | https://github.com/ollama/ollama/issues/199 | 1,819,047,422 | I_kwDOJ0Z1Ps5sbHX- | 199 | nous-hermes and parameters | {
"login": "alivardar",
"id": 10295369,
"node_id": "MDQ6VXNlcjEwMjk1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/10295369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alivardar",
"html_url": "https://github.com/alivardar",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2023-07-24T19:58:34 | 2023-08-23T17:46:45 | 2023-08-23T17:46:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
When I try to create my model, this example with temperature and num_ctx parameters crashes the "ollama" application.
FROM nous-hermes
# sets the temperature to 1 [higher is more creative, lower is more coherent]
# sets the context size to 4096
PARAMETER temperature 2
PARAMETER num_ctx 4096
Here is all r... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/199/timeline | null | completed | false |
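For comparison with the crash in #199 above (note its comment says temperature 1 while the directive sets 2), this is the usual end-to-end way to exercise those parameters, assuming nous-hermes is already pulled:

```bash
# Build and run a model with explicit temperature / num_ctx values.
cat > Modelfile <<'EOF'
FROM nous-hermes
PARAMETER temperature 1
PARAMETER num_ctx 4096
EOF
ollama create my-hermes -f Modelfile
ollama run my-hermes
```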
https://api.github.com/repos/ollama/ollama/issues/8657 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8657/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8657/comments | https://api.github.com/repos/ollama/ollama/issues/8657/events | https://github.com/ollama/ollama/issues/8657 | 2,818,103,966 | I_kwDOJ0Z1Ps6n-Nqe | 8,657 | running ollama deepseek-r1:1.5b on windows stuck for whole day | {
"login": "aadltya",
"id": 142524039,
"node_id": "U_kgDOCH6-hw",
"avatar_url": "https://avatars.githubusercontent.com/u/142524039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aadltya",
"html_url": "https://github.com/aadltya",
"followers_url": "https://api.github.com/users/aadltya/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw... | closed | false | null | [] | null | 3 | 2025-01-29T12:40:29 | 2025-01-29T13:44:51 | 2025-01-29T13:44:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Optimize for low-end devices. I'm using Windows with 8 GB RAM and a 4 GB Nvidia GTX 1650 graphics card, and I'm unable to run deepseek-r1:1.5b.
On the command line it has been stuck at 0% for the whole day.
```bash
C:\Users\ADITYA> ollama run deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 0% ▕ ... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8657/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8267 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8267/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8267/comments | https://api.github.com/repos/ollama/ollama/issues/8267/events | https://github.com/ollama/ollama/pull/8267 | 2,762,422,303 | PR_kwDOJ0Z1Ps6GZHcU | 8,267 | examples: remove codified examples | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-12-29T22:10:00 | 2025-01-13T19:26:25 | 2025-01-13T19:26:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8267",
"html_url": "https://github.com/ollama/ollama/pull/8267",
"diff_url": "https://github.com/ollama/ollama/pull/8267.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8267.patch",
"merged_at": "2025-01-13T19:26:22"
} | This PR aims to streamline the examples and to have outgoing links to community frameworks instead.
Closes #8117
| {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8267/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5090 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5090/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5090/comments | https://api.github.com/repos/ollama/ollama/issues/5090/events | https://github.com/ollama/ollama/issues/5090 | 2,356,059,419 | I_kwDOJ0Z1Ps6Mbp0b | 5,090 | `amdgpu version file missing` when running via systemd | {
"login": "pulpocaminante",
"id": 109849915,
"node_id": "U_kgDOBowtOw",
"avatar_url": "https://avatars.githubusercontent.com/u/109849915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pulpocaminante",
"html_url": "https://github.com/pulpocaminante",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | closed | false | null | [] | null | 1 | 2024-06-16T23:56:40 | 2024-06-18T19:01:33 | 2024-06-18T19:01:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Previous issue was closed as fixed but the bug still exists.
Hi, this doesn't happen to me when running ollama as root directly in a shell, but it happens when I start ollama as a service (regardless of the user):
```
amnesia λ ~/ sudo systemctl status ollama
● ollama.service - Ollama Service
Loaded: loa... | {
"login": "pulpocaminante",
"id": 109849915,
"node_id": "U_kgDOBowtOw",
"avatar_url": "https://avatars.githubusercontent.com/u/109849915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pulpocaminante",
"html_url": "https://github.com/pulpocaminante",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5090/timeline | null | completed | false |
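When root-in-a-shell works but the systemd service fails, as in #5090 above, the usual culprit is the service account lacking access to the GPU device nodes. A sketch following the common AMD setup; the `ollama` user is the installer default:

```bash
# Grant the service account GPU access, then restart the unit.
sudo usermod -aG render,video ollama
sudo systemctl restart ollama
journalctl -u ollama --no-pager | tail   # confirm the GPU is now detected
```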
https://api.github.com/repos/ollama/ollama/issues/684 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/684/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/684/comments | https://api.github.com/repos/ollama/ollama/issues/684/events | https://github.com/ollama/ollama/issues/684 | 1,923,008,644 | I_kwDOJ0Z1Ps5ynsiE | 684 | WSL2 Ubuntu 22.04 GPU "CUDA error 100" ggml-cuda.cu:5522 ggml-cuda.cu:4883 no CUDA-capable device is detected | {
"login": "iamexe",
"id": 60526252,
"node_id": "MDQ6VXNlcjYwNTI2MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/60526252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamexe",
"html_url": "https://github.com/iamexe",
"followers_url": "https://api.github.com/users/iamexe/fo... | [] | closed | false | null | [] | null | 14 | 2023-10-02T23:45:31 | 2024-01-21T09:53:28 | 2023-10-03T23:44:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thank you so much for ollama and the wsl2 support,
I already wrote a vuejs frontend and it works great with CPU.
I want GPU on WSL.
I installed CUDA as recommended by Nvidia for WSL2 (CUDA on Windows).
I ran the following:
go generate ./...
go build .
I got an ollama that runs with the CPU but not wit... | {
"login": "iamexe",
"id": 60526252,
"node_id": "MDQ6VXNlcjYwNTI2MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/60526252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamexe",
"html_url": "https://github.com/iamexe",
"followers_url": "https://api.github.com/users/iamexe/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/684/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/684/timeline | null | completed | false |
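For CUDA error 100 in WSL2 as in #684 above, the first check is whether the GPU is visible inside the distro at all before rebuilding. A sketch of the reporter's own source-build flow with that check added:

```bash
# 1. The Windows-side driver should expose the GPU to WSL2.
nvidia-smi
# 2. Rebuild with the CUDA toolkit on PATH.
go generate ./...
go build .
# 3. Serve and watch the logs for GPU detection.
./ollama serve
```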
https://api.github.com/repos/ollama/ollama/issues/2903 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2903/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2903/comments | https://api.github.com/repos/ollama/ollama/issues/2903/events | https://github.com/ollama/ollama/issues/2903 | 2,165,553,361 | I_kwDOJ0Z1Ps6BE7jR | 2,903 | msg="CPU does not have AVX or AVX2, disabling GPU support." | {
"login": "digicr",
"id": 162058985,
"node_id": "U_kgDOCajS6Q",
"avatar_url": "https://avatars.githubusercontent.com/u/162058985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/digicr",
"html_url": "https://github.com/digicr",
"followers_url": "https://api.github.com/users/digicr/follower... | [] | closed | false | null | [] | null | 4 | 2024-03-03T20:57:38 | 2024-03-06T16:49:29 | 2024-03-06T16:49:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Windows Server 2022, old CPU (Xeon X5675), GPU RTX 3070, CUDA 11.8 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2903/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8636 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8636/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8636/comments | https://api.github.com/repos/ollama/ollama/issues/8636/events | https://github.com/ollama/ollama/issues/8636 | 2,815,799,891 | I_kwDOJ0Z1Ps6n1bJT | 8,636 | Upload compressed package file, unable to decompress and error reported | {
"login": "terling",
"id": 174825001,
"node_id": "U_kgDOCmueKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174825001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/terling",
"html_url": "https://github.com/terling",
"followers_url": "https://api.github.com/users/terling/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-28T14:13:01 | 2025-01-29T23:29:46 | 2025-01-29T23:29:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Thanks for this great program, I love it! However, I uploaded a compressed package containing the project source code in the dialog interface, and an error occurred when the program was run. Can this problem be solved?
, so my questions may be a bit silly.
My use case is to serve both CLIP and LLaVA (which combines CLIP and Mistral) at the same time.
LLaVA runs perfectly on Ollama, but I need to open another service for CLIP.
What I want to... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3477/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3477/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8629 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8629/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8629/comments | https://api.github.com/repos/ollama/ollama/issues/8629/events | https://github.com/ollama/ollama/issues/8629 | 2,815,526,057 | I_kwDOJ0Z1Ps6n0YSp | 8,629 | Choose path to install on Windows | {
"login": "EvgeniGenchev",
"id": 59848681,
"node_id": "MDQ6VXNlcjU5ODQ4Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/59848681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EvgeniGenchev",
"html_url": "https://github.com/EvgeniGenchev",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2025-01-28T12:31:56 | 2025-01-28T21:31:28 | 2025-01-28T21:31:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The title is pretty self-explanatory. It would be nice to choose the folder where Ollama is installed on Windows (see the installer switch after this row) instead of defaulting to C:\Users\... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8629/timeline | null | completed | false |
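On #8629 above: the Windows installer is an Inno Setup package, so it already accepts a target-directory switch. Run from a Windows command prompt; the D:\Ollama path is a placeholder.

```bash
# Install to a custom folder instead of the per-user default.
OllamaSetup.exe /DIR="D:\Ollama"
```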
https://api.github.com/repos/ollama/ollama/issues/5995 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5995/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5995/comments | https://api.github.com/repos/ollama/ollama/issues/5995/events | https://github.com/ollama/ollama/pull/5995 | 2,432,956,077 | PR_kwDOJ0Z1Ps52nnLa | 5,995 | return tool calls finish reason for openai | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-07-26T20:46:29 | 2024-07-30T08:51:13 | 2024-07-29T20:56:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5995",
"html_url": "https://github.com/ollama/ollama/pull/5995",
"diff_url": "https://github.com/ollama/ollama/pull/5995.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5995.patch",
"merged_at": "2024-07-29T20:56:57"
} | null | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5995/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5995/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5982 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5982/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5982/comments | https://api.github.com/repos/ollama/ollama/issues/5982/events | https://github.com/ollama/ollama/issues/5982 | 2,432,159,409 | I_kwDOJ0Z1Ps6Q986x | 5,982 | Ollama is amazing!! | {
"login": "robertguss",
"id": 5605310,
"node_id": "MDQ6VXNlcjU2MDUzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5605310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robertguss",
"html_url": "https://github.com/robertguss",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 6 | 2024-07-26T12:41:49 | 2024-08-25T18:42:26 | 2024-08-25T18:42:26 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This really isn't an issue but I just wanted to say that everyone who works on and maintains this project is doing incredible work! Thank you so much for all of the countless hours and hard work you put into making Ollama.
I was a little shocked to see the project has over 900 issues at the time of this writing and... | {
"login": "robertguss",
"id": 5605310,
"node_id": "MDQ6VXNlcjU2MDUzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5605310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robertguss",
"html_url": "https://github.com/robertguss",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5982/reactions",
"total_count": 10,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5982/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1818 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1818/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1818/comments | https://api.github.com/repos/ollama/ollama/issues/1818/events | https://github.com/ollama/ollama/pull/1818 | 2,068,204,377 | PR_kwDOJ0Z1Ps5jW8ln | 1,818 | fix(cmd): history in alt prompt | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-05T23:58:04 | 2024-01-08T21:48:35 | 2024-01-08T21:48:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1818",
"html_url": "https://github.com/ollama/ollama/pull/1818",
"diff_url": "https://github.com/ollama/ollama/pull/1818.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1818.patch",
"merged_at": "2024-01-08T21:48:35"
} | using up/down arrows (for history) messes up multiline string inputs by replacing the alt prefix `...` with the default prefix `>>>` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1818/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3794 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3794/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3794/comments | https://api.github.com/repos/ollama/ollama/issues/3794/events | https://github.com/ollama/ollama/issues/3794 | 2,254,988,005 | I_kwDOJ0Z1Ps6GaGLl | 3,794 | The download speed suddenly drops at the last 1%, resulting in an extremely long download time. | {
"login": "aohanhongzhi",
"id": 37319319,
"node_id": "MDQ6VXNlcjM3MzE5MzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/37319319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aohanhongzhi",
"html_url": "https://github.com/aohanhongzhi",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2024-04-21T09:34:13 | 2025-01-23T23:04:23 | 2024-04-30T19:20:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Regardless of model size, the first 99% of every download reaches speeds of up to 29 MB/s, but the last 1% crawls along at only a few hundred KB/s. This is quite strange. Is the progress bar wrong, or is it some bug? It happens on both my local computer and my online server. (See the retry sketch after this row.) | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3794/timeline | null | completed | false |
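Since pulls in #3794 above are resumable, a common community workaround for a stalled final stretch is to interrupt and retry; already-downloaded layers are reused. Sketch, model name as placeholder:

```bash
# Retry until the pull completes; finished layers are not refetched.
until ollama pull llama3; do
  echo "pull interrupted, retrying..."
  sleep 2
done
```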
https://api.github.com/repos/ollama/ollama/issues/2468 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2468/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2468/comments | https://api.github.com/repos/ollama/ollama/issues/2468/events | https://github.com/ollama/ollama/pull/2468 | 2,130,941,577 | PR_kwDOJ0Z1Ps5mrdNr | 2,468 | Update llama.cpp submodule to `099afc6` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-12T20:35:40 | 2024-02-12T22:01:17 | 2024-02-12T22:01:16 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2468",
"html_url": "https://github.com/ollama/ollama/pull/2468",
"diff_url": "https://github.com/ollama/ollama/pull/2468.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2468.patch",
"merged_at": "2024-02-12T22:01:16"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2468/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7450 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7450/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7450/comments | https://api.github.com/repos/ollama/ollama/issues/7450/events | https://github.com/ollama/ollama/issues/7450 | 2,627,248,628 | I_kwDOJ0Z1Ps6cmKH0 | 7,450 | Run LLM directly in Golang App without Ollama Server | {
"login": "faelp22",
"id": 6642575,
"node_id": "MDQ6VXNlcjY2NDI1NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6642575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faelp22",
"html_url": "https://github.com/faelp22",
"followers_url": "https://api.github.com/users/faelp22/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-10-31T16:04:31 | 2024-11-29T17:07:31 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello everyone, I would like to know if it is possible to run an all-minilm LLM model directly in my Golang app without having to make calls to the Ollama server at http://localhost:11434/api.
I would like to take a small "all-minilm" model and use //go:embed model/* so that the tool is already embedded in the Golang b... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7450/timeline | null | null | false |
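On #7450 above: there is no supported in-process library mode, so the practical pattern is to ship the ollama binary with the app, supervise a private server instance, and talk to it over loopback. A sketch of that pattern; the port is arbitrary, and depending on the version the newer /api/embed endpoint may apply instead of /api/embeddings.

```bash
# Run a private server instance and query it locally.
OLLAMA_HOST=127.0.0.1:11500 ollama serve &
SERVER_PID=$!
sleep 2   # crude wait for startup
curl -s http://127.0.0.1:11500/api/embeddings -d '{
  "model": "all-minilm", "prompt": "hello world"
}'
kill "$SERVER_PID"
```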
https://api.github.com/repos/ollama/ollama/issues/7482 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7482/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7482/comments | https://api.github.com/repos/ollama/ollama/issues/7482/events | https://github.com/ollama/ollama/pull/7482 | 2,631,424,552 | PR_kwDOJ0Z1Ps6Au4C6 | 7,482 | Add action for publishing package to WinGet | {
"login": "mdanish-kh",
"id": 88161975,
"node_id": "MDQ6VXNlcjg4MTYxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/88161975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdanish-kh",
"html_url": "https://github.com/mdanish-kh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2024-11-03T19:38:14 | 2024-11-23T19:35:49 | 2024-11-23T19:35:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7482",
"html_url": "https://github.com/ollama/ollama/pull/7482",
"diff_url": "https://github.com/ollama/ollama/pull/7482.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7482.patch",
"merged_at": null
} | ## Description
This PR proposes to add a GitHub action for submitting the latest stable release to WinGet as it gets published. [microsoft/winget-create](https://github.com/microsoft/winget-create) is used as the tool for submitting the latest package.
## Steps needed from maintainers
If the maintainers approv... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7482/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7482/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2520/comments | https://api.github.com/repos/ollama/ollama/issues/2520/events | https://github.com/ollama/ollama/issues/2520 | 2,137,351,508 | I_kwDOJ0Z1Ps5_ZWVU | 2,520 | go-1.21 fails to build ollama: C source files not allowed when not using cgo or SWIG: gpu_info_cpu.c gpu_info_cuda.c gpu_info_rocm.c | {
"login": "yurivict",
"id": 271906,
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yurivict",
"html_url": "https://github.com/yurivict",
"followers_url": "https://api.github.com/users/yurivic... | [] | closed | false | null | [] | null | 11 | 2024-02-15T20:04:57 | 2024-05-02T22:00:23 | 2024-05-02T22:00:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
===> Building for ollama-0.1.25
(cd /usr/ports/misc/ollama/work/github.com/ollama/ollama@v0.1.25; for t in ./cmd; do out=$(/usr/bin/basename $(echo ${t} | /usr/bin/sed -Ee 's/^[^:]*:([^:]+).*$/\1/' -e 's/^\.$/ollama/')); pkg=$(echo ${t} | /usr/bin/sed -Ee 's/^([^:]*).*$/\1/' -e 's/^ollama$/./'); echo "===>... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2520/timeline | null | completed | false |
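The failure in #2520 above is exactly what the Go toolchain reports when cgo is disabled while a package contains C sources (here the gpu_info_*.c files). A sketch of forcing cgo on, assuming a C compiler is installed:

```bash
# Build with cgo enabled so the C GPU-detection sources compile.
export CGO_ENABLED=1
go generate ./...
go build .
```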
https://api.github.com/repos/ollama/ollama/issues/1198 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1198/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1198/comments | https://api.github.com/repos/ollama/ollama/issues/1198/events | https://github.com/ollama/ollama/issues/1198 | 2,000,892,540 | I_kwDOJ0Z1Ps53QzJ8 | 1,198 | Support for hyenadna-large-1m-seqlen-hf | {
"login": "magedhelmy1",
"id": 63347261,
"node_id": "MDQ6VXNlcjYzMzQ3MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/63347261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/magedhelmy1",
"html_url": "https://github.com/magedhelmy1",
"followers_url": "https://api.github.com/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2023-11-19T15:17:54 | 2024-03-11T17:46:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, any plans to support hyenadna? It has 1 million tokens!
https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen-hf | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1198/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3234 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3234/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3234/comments | https://api.github.com/repos/ollama/ollama/issues/3234/events | https://github.com/ollama/ollama/issues/3234 | 2,193,959,556 | I_kwDOJ0Z1Ps6CxSqE | 3,234 | is it possible to use ollama as a library , not through network | {
"login": "aizimuji",
"id": 129702132,
"node_id": "U_kgDOB7sY9A",
"avatar_url": "https://avatars.githubusercontent.com/u/129702132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aizimuji",
"html_url": "https://github.com/aizimuji",
"followers_url": "https://api.github.com/users/aizimuji/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-03-19T03:58:11 | 2024-03-21T13:42:48 | 2024-03-21T13:42:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I want to know if it's possible to call Ollama functions like a library, for example via a DLL on Windows,
so that other developers can build GUIs or related software with this library
rather than running Ollama as a stand-alone server.
### How should we solve this?
It's easier to build related... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3234/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2218 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2218/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2218/comments | https://api.github.com/repos/ollama/ollama/issues/2218/events | https://github.com/ollama/ollama/issues/2218 | 2,102,941,771 | I_kwDOJ0Z1Ps59WFhL | 2,218 | :link: Please add HF (HuggingFace) model link to `duckdb-nsql` :duck: | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [] | closed | false | null | [] | null | 3 | 2024-01-26T21:40:29 | 2024-01-27T09:26:19 | 2024-01-27T06:25:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :grey_question: About
Recently, [`duckdb-nsql`](https://ollama.ai/library/duckdb-nsql) has been added to the `ollama` library:
- https://github.com/ollama/ollama/issues/2193

**:point_right: ... but the page is lac... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2218/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6295 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6295/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6295/comments | https://api.github.com/repos/ollama/ollama/issues/6295/events | https://github.com/ollama/ollama/issues/6295 | 2,458,840,334 | I_kwDOJ0Z1Ps6Sju0O | 6,295 | Ability to preload embedding model | {
"login": "comunidadio",
"id": 10286013,
"node_id": "MDQ6VXNlcjEwMjg2MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/10286013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/comunidadio",
"html_url": "https://github.com/comunidadio",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.git... | null | 2 | 2024-08-10T01:30:27 | 2024-08-13T17:19:57 | 2024-08-13T17:19:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The "empty request" trick to preload a model does not currently work for embedding models.
Source: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-preload-a-model-into-ollama-to-get-faster-response-times and #2431
```
$ curl http://localhost:11434/api/embed -d '{"model": "all-minilm:latest"}'
... | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6295/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4748 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4748/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4748/comments | https://api.github.com/repos/ollama/ollama/issues/4748/events | https://github.com/ollama/ollama/issues/4748 | 2,327,501,621 | I_kwDOJ0Z1Ps6Kuts1 | 4,748 | Custom-llama issue | {
"login": "Ascariota",
"id": 25208125,
"node_id": "MDQ6VXNlcjI1MjA4MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/25208125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ascariota",
"html_url": "https://github.com/Ascariota",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-05-31T10:29:12 | 2024-05-31T10:29:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I have a problem, or maybe I've misunderstood something: if I put several SYSTEM instructions in my custom-llama3 Modelfile, only the last one is taken.
How can I give it more information?
For example, I would like:
SYSTEM You are a helpful AI assistant named Droid
but I would also like it to know the locatio... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4748/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6977 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6977/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6977/comments | https://api.github.com/repos/ollama/ollama/issues/6977/events | https://github.com/ollama/ollama/issues/6977 | 2,549,983,967 | I_kwDOJ0Z1Ps6X_arf | 6,977 | To configure Ollama to run multiple models simultaneously | {
"login": "DavidAlpha007",
"id": 143383189,
"node_id": "U_kgDOCIvalQ",
"avatar_url": "https://avatars.githubusercontent.com/u/143383189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidAlpha007",
"html_url": "https://github.com/DavidAlpha007",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-09-26T09:08:05 | 2024-09-26T15:46:40 | 2024-09-26T15:46:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can the design of Ollama support calling multiple models simultaneously? For example, can it be used in evaluation scenarios? Thanks for your support. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6977/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1102 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1102/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1102/comments | https://api.github.com/repos/ollama/ollama/issues/1102/events | https://github.com/ollama/ollama/issues/1102 | 1,989,549,140 | I_kwDOJ0Z1Ps52lhxU | 1,102 | Ollama on FreeBSD | {
"login": "eng-alameedi",
"id": 73557986,
"node_id": "MDQ6VXNlcjczNTU3OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/73557986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eng-alameedi",
"html_url": "https://github.com/eng-alameedi",
"followers_url": "https://api.github.c... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 46 | 2023-11-12T20:07:58 | 2024-11-08T22:12:37 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello there:
Is there any chance of getting Ollama working on FreeBSD, please?
"url": "https://api.github.com/repos/ollama/ollama/issues/1102/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1102/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2460 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2460/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2460/comments | https://api.github.com/repos/ollama/ollama/issues/2460/events | https://github.com/ollama/ollama/pull/2460 | 2,129,533,959 | PR_kwDOJ0Z1Ps5mmnoO | 2,460 | Refactor chat prompt templating | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-02-12T07:03:05 | 2024-02-12T23:06:58 | 2024-02-12T23:06:57 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2460",
"html_url": "https://github.com/ollama/ollama/pull/2460",
"diff_url": "https://github.com/ollama/ollama/pull/2460.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2460.patch",
"merged_at": "2024-02-12T23:06:57"
This refactors the chat prompt processing to be a little easier to follow. It also fully deprecates `.First` in favor of the chat endpoint.
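(For context, a rough sketch of the old `.First`-style templating being moved away from. Modelfile TEMPLATEs use Go template syntax; the struct below is only a stand-in for the real template variables.)
```go
package main

import (
	"os"
	"text/template"
)

type promptVars struct {
	First  bool // set only on the first turn; this is the variable being deprecated
	System string
	Prompt string
}

func main() {
	// Emit the system prompt only on the first generation, then the user turn.
	tmpl := template.Must(template.New("prompt").Parse(
		"{{ if .First }}{{ .System }}\n{{ end }}User: {{ .Prompt }}\nAssistant: "))
	if err := tmpl.Execute(os.Stdout, promptVars{First: true, System: "You are helpful.", Prompt: "Hi"}); err != nil {
		panic(err)
	}
}
```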
Fixes https://github.com/ollama/ollama/issues/2443
Fixes https://github.com/ollama/ollama/issues/2438 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2460/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7391/comments | https://api.github.com/repos/ollama/ollama/issues/7391/events | https://github.com/ollama/ollama/issues/7391 | 2,617,261,796 | I_kwDOJ0Z1Ps6cAD7k | 7,391 | ollama -v return 2version one is 0.0.0 the other is client version 0.3.14 | {
"login": "FanGShiYuu",
"id": 88468647,
"node_id": "MDQ6VXNlcjg4NDY4NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/88468647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FanGShiYuu",
"html_url": "https://github.com/FanGShiYuu",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 4 | 2024-10-28T04:25:23 | 2024-11-04T17:59:44 | 2024-11-04T17:59:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am using Ubuntu 20.04 and installed Ollama via curl -fsSL https://ollama.com/install.sh | sh; when I run ollama -v it returns "ollama version is 0.0.0"
Warning: client version is 0.3.14
By the way, when using Ollama I notice my GPU is not used and responses are very slow.
### OS
Linux
### GPU
Nvidia
##... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2959 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2959/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2959/comments | https://api.github.com/repos/ollama/ollama/issues/2959/events | https://github.com/ollama/ollama/pull/2959 | 2,172,301,959 | PR_kwDOJ0Z1Ps5o4eiY | 2,959 | fix json encoder | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-06T19:48:35 | 2024-05-09T22:18:42 | 2024-03-06T21:04:14 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2959",
"html_url": "https://github.com/ollama/ollama/pull/2959",
"diff_url": "https://github.com/ollama/ollama/pull/2959.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2959.patch",
"merged_at": null
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2959/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8201 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8201/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8201/comments | https://api.github.com/repos/ollama/ollama/issues/8201/events | https://github.com/ollama/ollama/issues/8201 | 2,754,242,988 | I_kwDOJ0Z1Ps6kKmms | 8,201 | Ollama | {
"login": "Sandro127",
"id": 149949677,
"node_id": "U_kgDOCPAM7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/149949677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sandro127",
"html_url": "https://github.com/Sandro127",
"followers_url": "https://api.github.com/users/Sandro... | [] | closed | false | null | [] | null | 0 | 2024-12-21T16:57:55 | 2024-12-21T16:58:10 | 2024-12-21T16:58:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "Sandro127",
"id": 149949677,
"node_id": "U_kgDOCPAM7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/149949677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sandro127",
"html_url": "https://github.com/Sandro127",
"followers_url": "https://api.github.com/users/Sandro... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8201/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/166 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/166/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/166/comments | https://api.github.com/repos/ollama/ollama/issues/166/events | https://github.com/ollama/ollama/pull/166 | 1,816,357,300 | PR_kwDOJ0Z1Ps5WIKOs | 166 | Note that CGO must be enabled in dev docs | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2023-07-21T20:36:39 | 2023-07-21T20:48:17 | 2023-07-21T20:48:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/166",
"html_url": "https://github.com/ollama/ollama/pull/166",
"diff_url": "https://github.com/ollama/ollama/pull/166.diff",
"patch_url": "https://github.com/ollama/ollama/pull/166.patch",
"merged_at": "2023-07-21T20:48:10"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/166/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8310 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8310/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8310/comments | https://api.github.com/repos/ollama/ollama/issues/8310/events | https://github.com/ollama/ollama/issues/8310 | 2,769,334,462 | I_kwDOJ0Z1Ps6lELC- | 8,310 | llama3.2-vision doesn't utilize my GPU. | {
"login": "blueApple12",
"id": 89522107,
"node_id": "MDQ6VXNlcjg5NTIyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/89522107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blueApple12",
"html_url": "https://github.com/blueApple12",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 32 | 2025-01-05T15:51:49 | 2025-01-17T18:30:18 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I bought a new PC with a 4070 Super to do some AI tasks using Ollama, but when I tried to run llama3.2-vision it just didn't utilize my GPU, only my CPU. llama3.2 does utilize my GPU, so why is that? Thank you.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8310/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8310/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/175 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/175/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/175/comments | https://api.github.com/repos/ollama/ollama/issues/175/events | https://github.com/ollama/ollama/pull/175 | 1,816,776,380 | PR_kwDOJ0Z1Ps5WJfEd | 175 | Update .gitignore | {
"login": "jk1jk",
"id": 140257749,
"node_id": "U_kgDOCFwp1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/140257749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jk1jk",
"html_url": "https://github.com/jk1jk",
"followers_url": "https://api.github.com/users/jk1jk/followers",
... | [] | closed | false | null | [] | null | 0 | 2023-07-22T14:03:26 | 2023-07-22T16:40:38 | 2023-07-22T16:40:38 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/175",
"html_url": "https://github.com/ollama/ollama/pull/175",
"diff_url": "https://github.com/ollama/ollama/pull/175.diff",
"patch_url": "https://github.com/ollama/ollama/pull/175.patch",
"merged_at": "2023-07-22T16:40:38"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/175/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2403 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2403/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2403/comments | https://api.github.com/repos/ollama/ollama/issues/2403/events | https://github.com/ollama/ollama/pull/2403 | 2,124,200,380 | PR_kwDOJ0Z1Ps5mUxPw | 2,403 | Ensure the libraries are present | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 1 | 2024-02-08T01:28:22 | 2024-02-08T01:55:33 | 2024-02-08T01:55:31 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2403",
"html_url": "https://github.com/ollama/ollama/pull/2403",
"diff_url": "https://github.com/ollama/ollama/pull/2403.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2403.patch",
"merged_at": "2024-02-08T01:55:31"
} | When we store our libraries in a temp dir, a reaper might clean them when we are idle, so make sure to check for them before we reload. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2403/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3579 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3579/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3579/comments | https://api.github.com/repos/ollama/ollama/issues/3579/events | https://github.com/ollama/ollama/pull/3579 | 2,236,185,189 | PR_kwDOJ0Z1Ps5sRyQc | 3,579 | fix ci | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-04-10T18:27:10 | 2024-04-10T18:37:02 | 2024-04-10T18:37:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3579",
"html_url": "https://github.com/ollama/ollama/pull/3579",
"diff_url": "https://github.com/ollama/ollama/pull/3579.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3579.patch",
"merged_at": "2024-04-10T18:37:01"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3579/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/675 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/675/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/675/comments | https://api.github.com/repos/ollama/ollama/issues/675/events | https://github.com/ollama/ollama/issues/675 | 1,922,472,936 | I_kwDOJ0Z1Ps5ylpvo | 675 | api improvements | {
"login": "jtoy",
"id": 14783,
"node_id": "MDQ6VXNlcjE0Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/14783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtoy",
"html_url": "https://github.com/jtoy",
"followers_url": "https://api.github.com/users/jtoy/followers",
"follo... | [] | closed | false | null | [] | null | 7 | 2023-10-02T18:59:10 | 2024-01-10T13:14:47 | 2023-10-05T16:38:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | its a stream of objects that are separated with a newline. often times new lines are returned in the response, so that breaks just splitting on new lines.
I think the split should be on something else.
Also it seems like there should be an api endpoint that just returns the whole response in a string.
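(For what it's worth, a minimal Go sketch of consuming the stream, under the assumption that each streamed line is one complete JSON object and that newlines inside the "response" value arrive JSON-escaped, which is what makes a plain line scanner safe; the model name is only a placeholder.)
```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	payload := bytes.NewBufferString(`{"model": "llama2", "prompt": "Hi"}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", payload)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var full strings.Builder
	scanner := bufio.NewScanner(resp.Body) // splits only on the newlines between JSON objects
	for scanner.Scan() {
		var chunk struct {
			Response string `json:"response"`
			Done     bool   `json:"done"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			panic(err)
		}
		full.WriteString(chunk.Response) // any escaped "\n" decodes to a real newline here
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
	fmt.Println(full.String())
}
```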
thoughts? | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/675/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/675/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3149 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3149/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3149/comments | https://api.github.com/repos/ollama/ollama/issues/3149/events | https://github.com/ollama/ollama/pull/3149 | 2,187,149,183 | PR_kwDOJ0Z1Ps5prGX9 | 3,149 | fix: clip memory leak | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-03-14T19:47:41 | 2024-03-14T20:34:16 | 2024-03-14T20:34:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3149",
"html_url": "https://github.com/ollama/ollama/pull/3149",
"diff_url": "https://github.com/ollama/ollama/pull/3149.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3149.patch",
"merged_at": "2024-03-14T20:34:15"
This change patches llama.cpp and fixes two bugs:
1. llama_server_context never calls clip_free
2. clip_free does not fully free its context | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3149/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1212/comments | https://api.github.com/repos/ollama/ollama/issues/1212/events | https://github.com/ollama/ollama/pull/1212 | 2,003,065,238 | PR_kwDOJ0Z1Ps5f9NN9 | 1,212 | enable metal for fp32, q5_0, q5_1 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-11-20T21:48:29 | 2023-11-20T21:56:41 | 2023-11-20T21:56:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1212",
"html_url": "https://github.com/ollama/ollama/pull/1212",
"diff_url": "https://github.com/ollama/ollama/pull/1212.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1212.patch",
"merged_at": "2023-11-20T21:56:40"
A recent llama.cpp update added Metal kernels for fp32, q5_0, and q5_1.
resolves #1200 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1212/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1212/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8285 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8285/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8285/comments | https://api.github.com/repos/ollama/ollama/issues/8285/events | https://github.com/ollama/ollama/issues/8285 | 2,765,781,740 | I_kwDOJ0Z1Ps6k2nrs | 8,285 | GPU runs at maximum load with 2 models | {
"login": "RomanDrechsel",
"id": 6135586,
"node_id": "MDQ6VXNlcjYxMzU1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6135586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RomanDrechsel",
"html_url": "https://github.com/RomanDrechsel",
"followers_url": "https://api.github.... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 21 | 2025-01-02T10:27:48 | 2025-01-24T21:56:15 | 2025-01-11T06:51:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I use Ollama as the provider for the Continue extension for VS Code, for tab autocompletion.
Since the last update, my GPU runs at maximum load as soon as 2 models are running at the same time.
Even if they are only very small models (e.g. nomic-embed-text for emb... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8285/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8285/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5397 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5397/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5397/comments | https://api.github.com/repos/ollama/ollama/issues/5397/events | https://github.com/ollama/ollama/issues/5397 | 2,382,728,555 | I_kwDOJ0Z1Ps6OBY1r | 5,397 | V0.1.48 The model is loaded into the GPU Memory but runs on the CPU | {
"login": "wxtt522",
"id": 28422636,
"node_id": "MDQ6VXNlcjI4NDIyNjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/28422636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wxtt522",
"html_url": "https://github.com/wxtt522",
"followers_url": "https://api.github.com/users/wxtt52... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-07-01T03:38:43 | 2024-07-03T07:26:48 | 2024-07-03T07:26:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama run gemma2:27b

The same goes for loading other models. It was normal in the previous version. I did not change any environment variables.
### OS
Windows
### GPU
Nvidia
### CPU
Int... | {
"login": "wxtt522",
"id": 28422636,
"node_id": "MDQ6VXNlcjI4NDIyNjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/28422636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wxtt522",
"html_url": "https://github.com/wxtt522",
"followers_url": "https://api.github.com/users/wxtt52... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5397/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5397/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/656 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/656/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/656/comments | https://api.github.com/repos/ollama/ollama/issues/656/events | https://github.com/ollama/ollama/issues/656 | 1,920,166,650 | I_kwDOJ0Z1Ps5yc2r6 | 656 | CLI run output not standard output | {
"login": "reustle",
"id": 304560,
"node_id": "MDQ6VXNlcjMwNDU2MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/304560?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reustle",
"html_url": "https://github.com/reustle",
"followers_url": "https://api.github.com/users/reustle/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2023-09-30T08:04:16 | 2023-10-02T18:52:16 | 2023-10-02T18:52:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, I've been on this for quite some time now, and I'm sorry if I'm misinformed.
To me, it seems like even when I use the command line argument style input such as `ollama run mistral "Here is my prompt"` (as mentioned here https://github.com/jmorganca/ollama#pass-in-prompt-as-arguments ), the output isn't clean... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/656/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1986 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1986/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1986/comments | https://api.github.com/repos/ollama/ollama/issues/1986/events | https://github.com/ollama/ollama/issues/1986 | 2,080,614,336 | I_kwDOJ0Z1Ps58A6fA | 1,986 | Ollama Utilizing Only CPU Instead of GPU on MacBook Pro M1 Pro | {
"login": "vidvudsc",
"id": 77242455,
"node_id": "MDQ6VXNlcjc3MjQyNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/77242455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vidvudsc",
"html_url": "https://github.com/vidvudsc",
"followers_url": "https://api.github.com/users/vid... | [] | closed | false | null | [] | null | 9 | 2024-01-14T07:18:33 | 2024-06-29T17:51:50 | 2024-01-14T19:14:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Description
I've encountered an issue where Ollama, when running any LLM, utilizes only the CPU instead of the GPU on my MacBook Pro with an M1 Pro chip. This results in less efficient model performance than expected.
Environment
MacBook Pro with M1 Pro chip
MacOS version: Sonoma 14.2.1
Ollama version: 1.20
... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1986/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/1986/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/36 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/36/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/36/comments | https://api.github.com/repos/ollama/ollama/issues/36/events | https://github.com/ollama/ollama/issues/36 | 1,786,490,994 | I_kwDOJ0Z1Ps5qe7By | 36 | Fetch `q4_k` models from hugging face | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-07-03T16:25:37 | 2023-07-08T03:26:50 | 2023-07-08T03:26:50 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On macOS, Metal only supports 4-bit and 16-bit quantization | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/36/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/36/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6173 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6173/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6173/comments | https://api.github.com/repos/ollama/ollama/issues/6173/events | https://github.com/ollama/ollama/issues/6173 | 2,447,913,094 | I_kwDOJ0Z1Ps6R6DCG | 6,173 | Using ollama version 0.3.3, downloading all models will result in errors. | {
"login": "ucjmhfeng",
"id": 65010234,
"node_id": "MDQ6VXNlcjY1MDEwMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/65010234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucjmhfeng",
"html_url": "https://github.com/ucjmhfeng",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 7 | 2024-08-05T08:17:10 | 2024-08-30T12:32:54 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma2/manifests/2b": net/http: TLS handshake timeout.
Before version 0.3.0, there were no similar issues. Starting from the update to 0.3.1, I tried many methods, but none of them worked, including using V... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6173/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5706 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5706/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5706/comments | https://api.github.com/repos/ollama/ollama/issues/5706/events | https://github.com/ollama/ollama/issues/5706 | 2,409,283,935 | I_kwDOJ0Z1Ps6PmsFf | 5,706 | Multiple windows instances with different ports | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-07-15T17:44:03 | 2024-07-16T02:51:39 | null | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When you set an alternate port for OLLAMA_HOST, the CLI will spawn a new app, creating multiple tray instances with no way to tell which one represents which port.
### OS
Windows
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5706/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4965 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4965/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4965/comments | https://api.github.com/repos/ollama/ollama/issues/4965/events | https://github.com/ollama/ollama/pull/4965 | 2,344,553,985 | PR_kwDOJ0Z1Ps5yAx3F | 4,965 | fix: skip removing layers that no longer exist | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-06-10T18:18:33 | 2024-06-10T18:40:04 | 2024-06-10T18:40:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4965",
"html_url": "https://github.com/ollama/ollama/pull/4965",
"diff_url": "https://github.com/ollama/ollama/pull/4965.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4965.patch",
"merged_at": "2024-06-10T18:40:03"
Some models, such as `wizardcoder:34b-python`, incorrectly include the config layer as an item in `layers`. This causes `RemoveLayers` to try to remove the same layer more than once, failing the second time since it's already removed.
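(A minimal sketch of the defensive fix, using a hypothetical layer type as a stand-in for the real one: deduplicate by digest so a blob listed twice is only removed once. The offending manifest excerpt follows below.)
```go
package main

import "fmt"

type layer struct{ Digest string }

func removeLayers(layers []layer) {
	seen := make(map[string]bool)
	for _, l := range layers {
		if seen[l.Digest] {
			continue // second reference to an already-removed blob; skip it
		}
		seen[l.Digest] = true
		fmt.Println("removing", l.Digest) // stand-in for the real blob delete
	}
}

func main() {
	// The config blob appears both as the config and inside "layers".
	removeLayers([]layer{{"sha256:aaa"}, {"sha256:bbb"}, {"sha256:aaa"}})
}
```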
```json
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distributi... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4965/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8491 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8491/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8491/comments | https://api.github.com/repos/ollama/ollama/issues/8491/events | https://github.com/ollama/ollama/issues/8491 | 2,797,924,474 | I_kwDOJ0Z1Ps6mxPB6 | 8,491 | MacApp fails to build when building from source | {
"login": "devlux76",
"id": 86517969,
"node_id": "MDQ6VXNlcjg2NTE3OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/86517969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devlux76",
"html_url": "https://github.com/devlux76",
"followers_url": "https://api.github.com/users/dev... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2025-01-20T00:32:59 | 2025-01-20T00:33:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I cloned the repo and tried to build the macapp, but the build fails: it can't find webpack.main.config.
There's a webpack.main.config.ts file, but that's not the file referenced. I tried to fix it myself and fell down a rabbit hole.
I'm just bringing this to the attention of whoever is maintaining i...
"url": "https://api.github.com/repos/ollama/ollama/issues/8491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8491/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3993 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3993/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3993/comments | https://api.github.com/repos/ollama/ollama/issues/3993/events | https://github.com/ollama/ollama/issues/3993 | 2,267,373,661 | I_kwDOJ0Z1Ps6HJWBd | 3,993 | Add support for EMO-2B | {
"login": "OE-LUCIFER",
"id": 158988478,
"node_id": "U_kgDOCXn4vg",
"avatar_url": "https://avatars.githubusercontent.com/u/158988478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OE-LUCIFER",
"html_url": "https://github.com/OE-LUCIFER",
"followers_url": "https://api.github.com/users/OE-... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-04-28T06:21:00 | 2024-04-28T06:21:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Gguf model https://huggingface.co/Abhaykoul/EMO-2B-GGUF
Full model https://huggingface.co/OEvortex/EMO-2B | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3993/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1376 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1376/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1376/comments | https://api.github.com/repos/ollama/ollama/issues/1376/events | https://github.com/ollama/ollama/pull/1376 | 2,024,549,360 | PR_kwDOJ0Z1Ps5hF2eX | 1,376 | install: fix rocky kernel packages | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-12-04T19:20:27 | 2023-12-04T22:23:44 | 2023-12-04T22:23:43 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1376",
"html_url": "https://github.com/ollama/ollama/pull/1376",
"diff_url": "https://github.com/ollama/ollama/pull/1376.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1376.patch",
"merged_at": "2023-12-04T22:23:43"
Package names for rocky-linux are slightly different. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1376/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4537 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4537/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4537/comments | https://api.github.com/repos/ollama/ollama/issues/4537/events | https://github.com/ollama/ollama/issues/4537 | 2,305,927,857 | I_kwDOJ0Z1Ps6Jcaqx | 4,537 | How can I also package a model into the ollama Docker image? | {
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/ia... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 0 | 2024-05-20T12:48:17 | 2024-05-20T14:48:27 | 2024-05-20T14:48:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How can I also package the qwen:32b model into the ollama Docker image? | {
"login": "iaoxuesheng",
"id": 94165844,
"node_id": "U_kgDOBZzbVA",
"avatar_url": "https://avatars.githubusercontent.com/u/94165844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iaoxuesheng",
"html_url": "https://github.com/iaoxuesheng",
"followers_url": "https://api.github.com/users/ia... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4537/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6042 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6042/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6042/comments | https://api.github.com/repos/ollama/ollama/issues/6042/events | https://github.com/ollama/ollama/issues/6042 | 2,434,948,575 | I_kwDOJ0Z1Ps6RIl3f | 6,042 | strange tool response | {
"login": "asyncfncom",
"id": 136445484,
"node_id": "U_kgDOCCH-LA",
"avatar_url": "https://avatars.githubusercontent.com/u/136445484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asyncfncom",
"html_url": "https://github.com/asyncfncom",
"followers_url": "https://api.github.com/users/asy... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-07-29T09:42:35 | 2024-08-15T21:44:53 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The prompt was "call fibonacci function to determine 7 element of the sequence".
I wonder if there should be 2 tool calls.
```
{
"model": "llama3.1:8b",
"created_at": "2024-07-29T09:32:02.5425761Z",
"message": {
"role": "assistant",
"content": "",
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6042/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6042/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7602 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7602/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7602/comments | https://api.github.com/repos/ollama/ollama/issues/7602/events | https://github.com/ollama/ollama/issues/7602 | 2,647,535,085 | I_kwDOJ0Z1Ps6dzi3t | 7,602 | Ollama ps to report actual number of layers instead of percentage. | {
"login": "chigkim",
"id": 22120994,
"node_id": "MDQ6VXNlcjIyMTIwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/22120994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigkim",
"html_url": "https://github.com/chigkim",
"followers_url": "https://api.github.com/users/chigki... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-11-10T19:02:27 | 2024-11-10T19:02:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can Ollama report how many layers out of the total are offloaded to the CPU, instead of a percentage?
I think this would be more useful than just a percentage when setting the num_gpu parameter, and it would also show how many layers a model has. (See the example below.)
Thanks! | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7602/timeline | null | null | false |
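In the meantime, the absolute layer count is already visible in the server log, and `num_gpu` can be set per request. A minimal sketch, assuming a local server on the default port and `llama3.1:8b` as a stand-in model; the exact log wording varies by version.

```
# Ask for a specific number of offloaded layers
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "hello",
  "options": { "num_gpu": 25 }
}'

# The server log prints the actual split, e.g.:
#   llm_load_tensors: offloaded 25/33 layers to GPU
```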
https://api.github.com/repos/ollama/ollama/issues/5617 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5617/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5617/comments | https://api.github.com/repos/ollama/ollama/issues/5617/events | https://github.com/ollama/ollama/pull/5617 | 2,401,858,365 | PR_kwDOJ0Z1Ps51Bp_X | 5,617 | OpenAI: Update Docs to Include Tools | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | closed | false | null | [] | null | 0 | 2024-07-10T22:39:15 | 2024-07-25T22:34:07 | 2024-07-25T22:34:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5617",
"html_url": "https://github.com/ollama/ollama/pull/5617",
"diff_url": "https://github.com/ollama/ollama/pull/5617.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5617.patch",
"merged_at": "2024-07-25T22:34:06"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5617/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2449 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2449/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2449/comments | https://api.github.com/repos/ollama/ollama/issues/2449/events | https://github.com/ollama/ollama/issues/2449 | 2,129,132,876 | I_kwDOJ0Z1Ps5-5_1M | 2,449 | Log request/responses payload | {
"login": "jmformenti",
"id": 13070879,
"node_id": "MDQ6VXNlcjEzMDcwODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/13070879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmformenti",
"html_url": "https://github.com/jmformenti",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-02-11T19:26:43 | 2024-10-01T12:28:46 | 2024-05-11T00:36:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In order to debug low-level details during development, it would be very useful to be able to see the payload of requests and responses.
Is there a way to enable this? (See the sketch below.) | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2449/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/2449/timeline | null | completed | false |
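A sketch of the usual workaround: the `OLLAMA_DEBUG` environment variable raises the server's log verbosity (how much of the payload it prints is version-dependent), and curl can dump both sides of the exchange on the client.

```
# Verbose server logging
OLLAMA_DEBUG=1 ollama serve

# Dump the full request and response from the client side
curl --trace-ascii - http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "hi", "stream": false}'
```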
https://api.github.com/repos/ollama/ollama/issues/2374 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2374/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2374/comments | https://api.github.com/repos/ollama/ollama/issues/2374/events | https://github.com/ollama/ollama/pull/2374 | 2,121,337,588 | PR_kwDOJ0Z1Ps5mLF16 | 2,374 | disable rocm builds | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-02-06T17:29:51 | 2024-02-06T17:41:04 | 2024-02-06T17:41:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2374",
"html_url": "https://github.com/ollama/ollama/pull/2374",
"diff_url": "https://github.com/ollama/ollama/pull/2374.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2374.patch",
"merged_at": "2024-02-06T17:41:03"
} | ROCm builds are failing because of disk space issues. Disable them temporarily until larger runners are available.
resolves #2373 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2374/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1458 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1458/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1458/comments | https://api.github.com/repos/ollama/ollama/issues/1458/events | https://github.com/ollama/ollama/issues/1458 | 2,034,704,639 | I_kwDOJ0Z1Ps55RyD_ | 1,458 | Ollama hung after 30 minute of use | {
"login": "lfoppiano",
"id": 15426,
"node_id": "MDQ6VXNlcjE1NDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/15426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lfoppiano",
"html_url": "https://github.com/lfoppiano",
"followers_url": "https://api.github.com/users/lfoppiano/... | [] | closed | false | null | [] | null | 22 | 2023-12-11T02:28:13 | 2024-05-05T01:11:36 | 2024-02-20T01:20:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm running Ollama on my M1 Mac and I'm trying to use the 7b models for processing batches of questions/answers.
I noticed that after a while ollama just hangs and the process stays there forever.
Is there a way to know what's going on?
I did not find a way to get to the logs. (See the log-location sketch below.)
Thank you in advance | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1458/timeline | null | completed | false |
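On the log question: where the server log lives depends on the install method. A sketch for the two common setups; paths may differ across versions.

```
# macOS app
tail -f ~/.ollama/logs/server.log

# Linux, running as a systemd service
journalctl -u ollama -f
```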
https://api.github.com/repos/ollama/ollama/issues/4255 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4255/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4255/comments | https://api.github.com/repos/ollama/ollama/issues/4255/events | https://github.com/ollama/ollama/issues/4255 | 2,285,208,246 | I_kwDOJ0Z1Ps6INYK2 | 4,255 | max retries exceeded: http status 502 Bad Gateway while pushing a model | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-05-08T10:05:11 | 2024-05-10T12:17:36 | 2024-05-10T12:17:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have spent nearly a whole day trying to push, but it keeps failing. Is there any way to configure the retry timing? Please remove the limit on retries, or make pushing resumable.
taozhiyu@603e5f4a42f1 Q8 % ollama push taozhiyuai/openbiollm-llama-3-70b:q8_0
retrieving manifest
retrieving mani... | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4255/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2022/comments | https://api.github.com/repos/ollama/ollama/issues/2022/events | https://github.com/ollama/ollama/issues/2022 | 2,084,797,403 | I_kwDOJ0Z1Ps58Q3vb | 2,022 | List available models | {
"login": "ParisNeo",
"id": 827993,
"node_id": "MDQ6VXNlcjgyNzk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/827993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParisNeo",
"html_url": "https://github.com/ParisNeo",
"followers_url": "https://api.github.com/users/ParisNe... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": ... | closed | false | null | [] | null | 16 | 2024-01-16T20:14:24 | 2024-11-21T17:26:22 | 2024-11-21T17:26:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi. The API allows me to list the local models. Is there a way to list all available models (those we can find on the Ollama website)?
I need that for the models zoo, to make it easy for users of lollms with the ollama backend to install the models. (The existing local-only endpoint is sketched below.)
I prefer this to having to scrape the website to get the lat... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2022/reactions",
"total_count": 23,
"+1": 23,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2022/timeline | null | completed | false |
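For reference, the documented API covers local models only; listing the ollama.com catalog, which is what this request asks for, had no official endpoint at the time. A minimal sketch of the local listing, assuming a server on the default port:

```
# Lists models already pulled locally, not the registry catalog
curl http://localhost:11434/api/tags
```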
https://api.github.com/repos/ollama/ollama/issues/6284 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6284/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6284/comments | https://api.github.com/repos/ollama/ollama/issues/6284/events | https://github.com/ollama/ollama/issues/6284 | 2,457,788,664 | I_kwDOJ0Z1Ps6SfuD4 | 6,284 | Intel GPU in Docker container crashes | {
"login": "Minionflo",
"id": 62773986,
"node_id": "MDQ6VXNlcjYyNzczOTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/62773986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Minionflo",
"html_url": "https://github.com/Minionflo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677491450,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJu-g... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-08-09T12:14:56 | 2024-08-09T19:14:44 | 2024-08-09T19:14:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The Docker container crashes with the error `panic: runtime error: invalid memory address or nil pointer dereference`
Docker Compose File: https://bin.minionflo.net/p/E9gFhE.yaml
Log: https://bin.minionflo.net/p/QyrT8Z.txt
### OS
Docker on Linux
### GPU
Intel
### CPU
AMD
### Ollama version
0.... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6284/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6432 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6432/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6432/comments | https://api.github.com/repos/ollama/ollama/issues/6432/events | https://github.com/ollama/ollama/pull/6432 | 2,474,477,204 | PR_kwDOJ0Z1Ps54y56d | 6,432 | Split rocm back out of bundle | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-20T00:12:54 | 2024-08-20T14:26:41 | 2024-08-20T14:26:38 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6432",
"html_url": "https://github.com/ollama/ollama/pull/6432",
"diff_url": "https://github.com/ollama/ollama/pull/6432.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6432.patch",
"merged_at": "2024-08-20T14:26:38"
} | We're [over budget for github's maximum release artifact size](https://github.com/ollama/ollama/actions/runs/10461795539/job/28973022210) with rocm + 2 cuda versions. This splits rocm back out as a discrete artifact, but keeps the layout so it can be extracted into the same location as the main bundle. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6432/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1346 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1346/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1346/comments | https://api.github.com/repos/ollama/ollama/issues/1346/events | https://github.com/ollama/ollama/issues/1346 | 2,021,266,981 | I_kwDOJ0Z1Ps54ehYl | 1,346 | Set conversation or chat history/context in CLI | {
"login": "Maharshi-Pandya",
"id": 53078775,
"node_id": "MDQ6VXNlcjUzMDc4Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/53078775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maharshi-Pandya",
"html_url": "https://github.com/Maharshi-Pandya",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 1 | 2023-12-01T17:03:50 | 2023-12-27T15:09:54 | 2023-12-27T15:09:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thank you for making this!
I tried the `/set history` command within the CLI and expected it to work.
I would like to use the CLI as a chatbot with access to the conversation history (a window of messages, if not the whole thing). (See the example below.)
What is the process to set the conversation history as context in `Openhermes-mistral` s... | {
"login": "Maharshi-Pandya",
"id": 53078775,
"node_id": "MDQ6VXNlcjUzMDc4Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/53078775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maharshi-Pandya",
"html_url": "https://github.com/Maharshi-Pandya",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1346/timeline | null | completed | false |
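The CLI keeps history automatically within a session; outside the CLI, prior turns are passed explicitly to `/api/chat`. A minimal sketch, with `openhermes` as a stand-in model name:

```
curl http://localhost:11434/api/chat -d '{
  "model": "openhermes",
  "messages": [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
    {"role": "user", "content": "What is my name?"}
  ]
}'
```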
https://api.github.com/repos/ollama/ollama/issues/4458 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4458/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4458/comments | https://api.github.com/repos/ollama/ollama/issues/4458/events | https://github.com/ollama/ollama/issues/4458 | 2,298,859,750 | I_kwDOJ0Z1Ps6JBdDm | 4,458 | Confirm GPU usage command | {
"login": "puddlejumper90",
"id": 55165215,
"node_id": "MDQ6VXNlcjU1MTY1MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/55165215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puddlejumper90",
"html_url": "https://github.com/puddlejumper90",
"followers_url": "https://api.gi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-05-15T21:16:25 | 2024-05-16T21:11:45 | 2024-05-15T22:53:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Would like to have a way to confirm whether a GPU is actually being utilized. Maybe some kind of command or option when running a given model to test/log individual machine performance. (See the sketch below.) | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4458/timeline | null | completed | false |
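`ollama ps`, added around the time this issue was closed, reports exactly this split. A sketch; the sample output is illustrative rather than captured from a real run.

```
ollama ps
# NAME             ID              SIZE      PROCESSOR    UNTIL
# llama3:latest    365c0bd3c000    5.4 GB    100% GPU     4 minutes from now
```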
https://api.github.com/repos/ollama/ollama/issues/2366 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2366/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2366/comments | https://api.github.com/repos/ollama/ollama/issues/2366/events | https://github.com/ollama/ollama/issues/2366 | 2,119,725,248 | I_kwDOJ0Z1Ps5-WHDA | 2,366 | Bump llama.cpp commit to 6b91b1e which includes Intel GPU support (iGPU, Arc, Max, Flex) | {
"login": "0x33taji",
"id": 148982823,
"node_id": "U_kgDOCOFMJw",
"avatar_url": "https://avatars.githubusercontent.com/u/148982823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0x33taji",
"html_url": "https://github.com/0x33taji",
"followers_url": "https://api.github.com/users/0x33taji/... | [] | closed | false | null | [] | null | 2 | 2024-02-06T00:44:53 | 2024-02-13T21:52:10 | 2024-02-13T21:52:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | llama.cpp has added support for Intel GPUs.
commit ID: [6b91b1e0a92ac2e4e269eec6361ca53a61ced6c6](https://github.com/ggerganov/llama.cpp/commit/6b91b1e0a92ac2e4e269eec6361ca53a61ced6c6)
*Task*
1. Bump llama.cpp commit if feasible
2. Then update Dockerfile with Intel GPU support for one-click deployment or as... | {
"login": "0x33taji",
"id": 148982823,
"node_id": "U_kgDOCOFMJw",
"avatar_url": "https://avatars.githubusercontent.com/u/148982823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0x33taji",
"html_url": "https://github.com/0x33taji",
"followers_url": "https://api.github.com/users/0x33taji/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2366/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2653/comments | https://api.github.com/repos/ollama/ollama/issues/2653/events | https://github.com/ollama/ollama/issues/2653 | 2,147,711,815 | I_kwDOJ0Z1Ps6AA3tH | 2,653 | Ollama serve fails silently when an input is too long | {
"login": "logancyang",
"id": 4860545,
"node_id": "MDQ6VXNlcjQ4NjA1NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4860545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logancyang",
"html_url": "https://github.com/logancyang",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 6 | 2024-02-21T21:05:18 | 2024-03-12T02:02:12 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I use `ollama serve` and provide a context of ~30k tokens with a mistral model that has a max context window of 32768, the server doesn't show any error and proceeds to return as usual. That gave me the impression that it successfully took in the entire context.
But after digging a bit deeper, I can see that it isn't. (See the num_ctx example below.)
... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2653/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2653/timeline | null | null | false |
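The silent behavior comes from the default context window: `num_ctx` defaults to 2048 tokens, and longer prompts are truncated without an API error. A minimal sketch of the per-request override; the prompt placeholder is illustrative.

```
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "<about 30k tokens of context here>",
  "options": { "num_ctx": 32768 }
}'
```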