url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/3891 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3891/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3891/comments | https://api.github.com/repos/ollama/ollama/issues/3891/events | https://github.com/ollama/ollama/issues/3891 | 2,262,214,074 | I_kwDOJ0Z1Ps6G1qW6 | 3,891 | not clear what the options are for OLLAMA_LLM_LIBRARY | {
"login": "FlorinAndrei",
"id": 901867,
"node_id": "MDQ6VXNlcjkwMTg2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/901867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FlorinAndrei",
"html_url": "https://github.com/FlorinAndrei",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 1 | 2024-04-24T21:24:59 | 2024-05-01T23:02:27 | 2024-05-01T23:02:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This document https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md says:
```
You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection, so for example, if you have a CUDA card, but want to force the CPU LLM library with AVX2 vector support, us... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3891/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7793 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7793/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7793/comments | https://api.github.com/repos/ollama/ollama/issues/7793/events | https://github.com/ollama/ollama/issues/7793 | 2,682,562,136 | I_kwDOJ0Z1Ps6f5KZY | 7,793 | LLM(vision) GGUF Recommendation: Is there any LLM(vision) with great performance in GGUF format? | {
"login": "bohaocheung",
"id": 106144344,
"node_id": "U_kgDOBlOiWA",
"avatar_url": "https://avatars.githubusercontent.com/u/106144344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bohaocheung",
"html_url": "https://github.com/bohaocheung",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-11-22T09:30:55 | 2024-11-26T17:38:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Disappointing Performance
It's really strange: I have tried many **LLMs with vision** in `GGUF` format listed on the official website, such as `Llama3.2-vision`, `llava`, `llava-llama3`, `llava-phi3`. However, all of their performance is disappointing in the **vision** aspect; even a simple task like recognizing ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7793/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3770 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3770/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3770/comments | https://api.github.com/repos/ollama/ollama/issues/3770/events | https://github.com/ollama/ollama/pull/3770 | 2,254,361,254 | PR_kwDOJ0Z1Ps5tPV0T | 3,770 | Allow User-Defined GPU Selection for Ollama | {
"login": "chornox",
"id": 1256609,
"node_id": "MDQ6VXNlcjEyNTY2MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1256609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chornox",
"html_url": "https://github.com/chornox",
"followers_url": "https://api.github.com/users/chornox/... | [] | closed | false | null | [] | null | 1 | 2024-04-20T04:03:57 | 2024-04-24T06:28:20 | 2024-04-24T06:27:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3770",
"html_url": "https://github.com/ollama/ollama/pull/3770",
"diff_url": "https://github.com/ollama/ollama/pull/3770.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3770.patch",
"merged_at": null
} | Currently, Ollama defaults to using NVIDIA GPUs. This PR introduces the ability for users to choose their preferred GPU by leveraging the existing `CUDA_VISIBLE_DEVICES` environment variable.
By setting `CUDA_VISIBLE_DEVICES` to "-1" (an invalid value), users can ensure Ollama respects their GPU preference, regardle... | {
"login": "chornox",
"id": 1256609,
"node_id": "MDQ6VXNlcjEyNTY2MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1256609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chornox",
"html_url": "https://github.com/chornox",
"followers_url": "https://api.github.com/users/chornox/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3770/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6642 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6642/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6642/comments | https://api.github.com/repos/ollama/ollama/issues/6642/events | https://github.com/ollama/ollama/pull/6642 | 2,506,161,987 | PR_kwDOJ0Z1Ps56cHdy | 6,642 | llm: use json.hpp from common | {
"login": "iscy",
"id": 294710,
"node_id": "MDQ6VXNlcjI5NDcxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/294710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iscy",
"html_url": "https://github.com/iscy",
"followers_url": "https://api.github.com/users/iscy/followers",
... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-09-04T19:54:36 | 2024-09-04T23:34:42 | 2024-09-04T23:34:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6642",
"html_url": "https://github.com/ollama/ollama/pull/6642",
"diff_url": "https://github.com/ollama/ollama/pull/6642.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6642.patch",
"merged_at": "2024-09-04T23:34:42"
} | The version of json.hpp from the 'common' module was no longer the same as the one within the 'ext_server' module. The discrepancy can cause linking errors depending on the functions used. This patch removes the old version in favor of the one found in the common module. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6642/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8681 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8681/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8681/comments | https://api.github.com/repos/ollama/ollama/issues/8681/events | https://github.com/ollama/ollama/pull/8681 | 2,819,666,702 | PR_kwDOJ0Z1Ps6JcEt2 | 8,681 | Remove hard-coded GIN mode | {
"login": "yoonsio",
"id": 24367477,
"node_id": "MDQ6VXNlcjI0MzY3NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/24367477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoonsio",
"html_url": "https://github.com/yoonsio",
"followers_url": "https://api.github.com/users/yoonsi... | [] | open | false | null | [] | null | 0 | 2025-01-30T01:02:26 | 2025-01-30T01:07:02 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8681",
"html_url": "https://github.com/ollama/ollama/pull/8681",
"diff_url": "https://github.com/ollama/ollama/pull/8681.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8681.patch",
"merged_at": null
} | ## Context
https://github.com/ollama/ollama/issues/8682: Gin mode is hard-coded to `gin.DebugMode` and the server displays this log on startup.
```
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
```
## Changes
This PR removes hard-coded `gin.DebugMode` from the source code... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8681/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6783 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6783/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6783/comments | https://api.github.com/repos/ollama/ollama/issues/6783/events | https://github.com/ollama/ollama/issues/6783 | 2,523,800,337 | I_kwDOJ0Z1Ps6WbiMR | 6,783 | Ollama run says "A model with that name already exists" but really its a casing issue? | {
"login": "Sourdface",
"id": 130875793,
"node_id": "U_kgDOB80BkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130875793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sourdface",
"html_url": "https://github.com/Sourdface",
"followers_url": "https://api.github.com/users/Sourdf... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-09-13T03:28:21 | 2024-09-13T03:28:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I don't know how to explain this exactly, but when I try to run `ollama run Llama3.1` I get the confusing error:
```
Error: a model with that name already exists
```
And it *does* exist, but the issue is that the casing is different (it's `llama3.1`, not `Llama3.1`). This is evidently co... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6783/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7677 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7677/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7677/comments | https://api.github.com/repos/ollama/ollama/issues/7677/events | https://github.com/ollama/ollama/issues/7677 | 2,660,688,240 | I_kwDOJ0Z1Ps6eluFw | 7,677 | Enable image embeddings for vision models | {
"login": "kevin-pw",
"id": 140451262,
"node_id": "U_kgDOCF8dvg",
"avatar_url": "https://avatars.githubusercontent.com/u/140451262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin-pw",
"html_url": "https://github.com/kevin-pw",
"followers_url": "https://api.github.com/users/kevin-pw/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-15T04:37:24 | 2024-11-15T17:10:49 | 2024-11-15T17:09:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would love to be able to create embeddings for images with vision models like `llama3.2-vision`.
Creating image and text embeddings with a vision-capable model should allow creating image search and image categorization applications.
If my understanding of the shared semantic vector space of image models is cor... | {
"login": "kevin-pw",
"id": 140451262,
"node_id": "U_kgDOCF8dvg",
"avatar_url": "https://avatars.githubusercontent.com/u/140451262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin-pw",
"html_url": "https://github.com/kevin-pw",
"followers_url": "https://api.github.com/users/kevin-pw/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7677/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7677/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/6526 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6526/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6526/comments | https://api.github.com/repos/ollama/ollama/issues/6526/events | https://github.com/ollama/ollama/issues/6526 | 2,488,730,181 | I_kwDOJ0Z1Ps6UVwJF | 6,526 | database modify capability | {
"login": "nRanzo",
"id": 104451140,
"node_id": "U_kgDOBjnMRA",
"avatar_url": "https://avatars.githubusercontent.com/u/104451140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nRanzo",
"html_url": "https://github.com/nRanzo",
"followers_url": "https://api.github.com/users/nRanzo/follower... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-08-27T09:05:35 | 2024-09-12T01:33:31 | 2024-09-12T01:33:30 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be interesting to have the option to provide Ollama with a folder containing data and ask it to extrapolate a database with the required fields into an .md file | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6526/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4051 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4051/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4051/comments | https://api.github.com/repos/ollama/ollama/issues/4051/events | https://github.com/ollama/ollama/issues/4051 | 2,271,433,081 | I_kwDOJ0Z1Ps6HY1F5 | 4,051 | Enable Flash Attention on GGML/GGUF (feature now merged into llama.cpp) | {
"login": "sammcj",
"id": 862951,
"node_id": "MDQ6VXNlcjg2Mjk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/862951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sammcj",
"html_url": "https://github.com/sammcj",
"followers_url": "https://api.github.com/users/sammcj/follow... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 21 | 2024-04-30T13:06:47 | 2024-07-18T14:46:21 | 2024-05-20T20:36:04 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Flash Attention has landed in llama.cpp (https://github.com/ggerganov/llama.cpp/pull/5021).
The tl;dr: simply pass the -fa flag to llama.cpp’s server.
- Can we please have an Ollama server env var to pass this flag to the underlying llama.cpp server?
Also, a related idea: perhaps there could be a way to p...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4051/reactions",
"total_count": 30,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4051/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5329 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5329/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5329/comments | https://api.github.com/repos/ollama/ollama/issues/5329/events | https://github.com/ollama/ollama/issues/5329 | 2,378,420,087 | I_kwDOJ0Z1Ps6Nw893 | 5,329 | clip models fail to load with unicode characters in OLLAMA_MODELS path on windows | {
"login": "Derix76",
"id": 174033173,
"node_id": "U_kgDOCl-JFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174033173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Derix76",
"html_url": "https://github.com/Derix76",
"followers_url": "https://api.github.com/users/Derix76/foll... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 9 | 2024-06-27T15:07:04 | 2024-07-05T15:16:59 | 2024-07-05T15:16:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried to start llava:1.6 (or any similar llava-based model) and the llama server terminated.
The llama3 model or other non-llava models work just fine.
GPU is: NVIDIA GeForce RTX 4060" total="8.0 GiB" available="6.9 GiB"
CPU is: AMD Ryzen 7 4700G with Radeon GPU (ignored by Ollama) ("uns... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5329/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5329/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8244 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8244/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8244/comments | https://api.github.com/repos/ollama/ollama/issues/8244/events | https://github.com/ollama/ollama/issues/8244 | 2,759,178,297 | I_kwDOJ0Z1Ps6kdbg5 | 8,244 | Ollama GPU/CPU | {
"login": "mcodexyz",
"id": 25278019,
"node_id": "MDQ6VXNlcjI1Mjc4MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/25278019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcodexyz",
"html_url": "https://github.com/mcodexyz",
"followers_url": "https://api.github.com/users/mco... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-26T02:01:26 | 2024-12-27T01:05:53 | 2024-12-27T01:05:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
While trying to run the Qwen 2.5 7b 8Q model, I noticed differences in performance between llama.cpp and Ollama. In llama.cpp, the model runs entirely on the RTX 2060 Super graphics card, which is the desired behavior. However, in the case of Ollama, although the VRAM usage is significant (7220M... | {
"login": "mcodexyz",
"id": 25278019,
"node_id": "MDQ6VXNlcjI1Mjc4MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/25278019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcodexyz",
"html_url": "https://github.com/mcodexyz",
"followers_url": "https://api.github.com/users/mco... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8244/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2589 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2589/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2589/comments | https://api.github.com/repos/ollama/ollama/issues/2589/events | https://github.com/ollama/ollama/issues/2589 | 2,141,938,590 | I_kwDOJ0Z1Ps5_q2Oe | 2,589 | Windows ARM support | {
"login": "PeronGH",
"id": 18367871,
"node_id": "MDQ6VXNlcjE4MzY3ODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/18367871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeronGH",
"html_url": "https://github.com/PeronGH",
"followers_url": "https://api.github.com/users/PeronG... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-02-19T09:41:03 | 2024-09-20T20:09:39 | 2024-09-20T20:09:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried to run it on a Windows on ARM device and the installer refused to execute.

Is there any plan for the native Windows on ARM support? Or is it possible to remove the architecture checking and make the x86 version... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2589/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4795 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4795/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4795/comments | https://api.github.com/repos/ollama/ollama/issues/4795/events | https://github.com/ollama/ollama/issues/4795 | 2,330,494,055 | I_kwDOJ0Z1Ps6K6IRn | 4,795 | Error: llama runner process has terminated: exit status 0xc000001d | {
"login": "Ecthellin203",
"id": 94040890,
"node_id": "U_kgDOBZrzOg",
"avatar_url": "https://avatars.githubusercontent.com/u/94040890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ecthellin203",
"html_url": "https://github.com/Ecthellin203",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-03T08:22:08 | 2024-07-03T23:25:39 | 2024-07-03T23:25:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
run llama3:8b
Error: llama runner process has terminated: exit status 0xc000001d
```
2024/06/03 15:40:13 routes.go:1007: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4795/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8343 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8343/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8343/comments | https://api.github.com/repos/ollama/ollama/issues/8343/events | https://github.com/ollama/ollama/pull/8343 | 2,773,995,059 | PR_kwDOJ0Z1Ps6G__yQ | 8,343 | OpenAI: accept additional headers to fix CORS error #8342 | {
"login": "isamu",
"id": 231763,
"node_id": "MDQ6VXNlcjIzMTc2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/231763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isamu",
"html_url": "https://github.com/isamu",
"followers_url": "https://api.github.com/users/isamu/followers"... | [] | closed | false | null | [] | null | 1 | 2025-01-08T00:40:37 | 2025-01-08T19:28:24 | 2025-01-08T19:28:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8343",
"html_url": "https://github.com/ollama/ollama/pull/8343",
"diff_url": "https://github.com/ollama/ollama/pull/8343.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8343.patch",
"merged_at": "2025-01-08T19:28:11"
} | Related to #8342, I added some optional headers for the openai npm package.
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8343/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8343/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2585 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2585/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2585/comments | https://api.github.com/repos/ollama/ollama/issues/2585/events | https://github.com/ollama/ollama/pull/2585 | 2,141,375,423 | PR_kwDOJ0Z1Ps5nPC1L | 2,585 | Fix cuda leaks | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-02-19T02:37:52 | 2024-02-19T20:48:04 | 2024-02-19T20:48:00 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2585",
"html_url": "https://github.com/ollama/ollama/pull/2585",
"diff_url": "https://github.com/ollama/ollama/pull/2585.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2585.patch",
"merged_at": "2024-02-19T20:48:00"
} | This should resolve the problem where we don't fully unload from the GPU when we go idle.
Fixes #1848
This carries the upstream PR https://github.com/ggerganov/llama.cpp/pull/5576 as a patch until that's reviewed/merged.
This also updates the shutdown patch to match what was [merged upstream](https://github.c... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2585/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5843 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5843/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5843/comments | https://api.github.com/repos/ollama/ollama/issues/5843/events | https://github.com/ollama/ollama/issues/5843 | 2,422,076,442 | I_kwDOJ0Z1Ps6QXfQa | 5,843 | How to offload all layers to GPU? | {
"login": "RakshitAralimatti",
"id": 170917018,
"node_id": "U_kgDOCi_8mg",
"avatar_url": "https://avatars.githubusercontent.com/u/170917018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RakshitAralimatti",
"html_url": "https://github.com/RakshitAralimatti",
"followers_url": "https://api... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 18 | 2024-07-22T06:44:13 | 2024-11-17T20:06:11 | 2024-07-24T20:38:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, when I am running gemma2 (using Ollama serve) on my device, only 27 layers are offloaded to the GPU by default, but I want to offload all 43 layers to the GPU.
Does anyone know how I can do that? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5843/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4539 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4539/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4539/comments | https://api.github.com/repos/ollama/ollama/issues/4539/events | https://github.com/ollama/ollama/issues/4539 | 2,306,061,251 | I_kwDOJ0Z1Ps6Jc7PD | 4,539 | Qwen model description not updated for 110B | {
"login": "yuchenwei28",
"id": 141537882,
"node_id": "U_kgDOCG-yWg",
"avatar_url": "https://avatars.githubusercontent.com/u/141537882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenwei28",
"html_url": "https://github.com/yuchenwei28",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-05-20T13:56:16 | 2024-05-20T13:56:16 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The Qwen model description has not been updated for 110B.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4539/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1551 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1551/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1551/comments | https://api.github.com/repos/ollama/ollama/issues/1551/events | https://github.com/ollama/ollama/issues/1551 | 2,044,219,935 | I_kwDOJ0Z1Ps552FIf | 1,551 | Analyse this document. | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/ipla... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2023-12-15T18:54:28 | 2024-05-10T00:27:00 | 2024-05-10T00:27:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Thinking of an enhancement. With llava, you could ask what a picture is about and give the file location.
I wonder if it would be useful or worthwhile to analyse a document by giving it the file location.
Downsides: no RAG, so info can't be easily stored.
Upsides: would be super useful and can be used as a reference, ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1551/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1551/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7653/comments | https://api.github.com/repos/ollama/ollama/issues/7653/events | https://github.com/ollama/ollama/issues/7653 | 2,656,195,761 | I_kwDOJ0Z1Ps6eUlSx | 7,653 | Validation of Keys and Subkeys in Ollama API JSON Objects | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 1 | 2024-11-13T17:11:07 | 2024-12-03T10:32:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | **Problem:**
When interacting with the Ollama API, developers may inadvertently pass incorrect keys or subkeys in their requests (e.g., due to typos or misunderstanding of the expected structure). Currently, the API does not provide feedback when this occurs, which can lead to silent failures where options are ignored... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7653/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/7653/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3961 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3961/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3961/comments | https://api.github.com/repos/ollama/ollama/issues/3961/events | https://github.com/ollama/ollama/issues/3961 | 2,266,535,771 | I_kwDOJ0Z1Ps6HGJdb | 3,961 | setting OLLAMA_HOST to 0.0.0.0 could make the API to listen on the port using IPv6 only | {
"login": "TadayukiOkada",
"id": 51673480,
"node_id": "MDQ6VXNlcjUxNjczNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/51673480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TadayukiOkada",
"html_url": "https://github.com/TadayukiOkada",
"followers_url": "https://api.githu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-26T21:35:48 | 2025-01-30T00:03:34 | 2024-04-26T23:43:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Edit 2: sorry, if you set BindIPv6Only, 0.0.0.0:11434 should use v4, so this shouldn't be a problem.
Edit: by default, it seems it'll listen on both v4 and v6. If you set BindIPv6Only in systemd.socket, or /proc/sys/net/ipv6/bindv6only is set to 1, it may not listen on v4.
Ollama is only li... | {
"login": "TadayukiOkada",
"id": 51673480,
"node_id": "MDQ6VXNlcjUxNjczNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/51673480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TadayukiOkada",
"html_url": "https://github.com/TadayukiOkada",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3961/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3831 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3831/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3831/comments | https://api.github.com/repos/ollama/ollama/issues/3831/events | https://github.com/ollama/ollama/issues/3831 | 2,257,276,019 | I_kwDOJ0Z1Ps6Gi0xz | 3,831 | Upsert to Vector Store Error: 404 | {
"login": "thedavc",
"id": 28845125,
"node_id": "MDQ6VXNlcjI4ODQ1MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/28845125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thedavc",
"html_url": "https://github.com/thedavc",
"followers_url": "https://api.github.com/users/thedav... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-04-22T19:10:19 | 2024-09-05T20:08:22 | 2024-05-09T22:34:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm running into a 404 error when upserting into the Flowise Vector Store. The system does not seem to register the call. In the server logs, I can see that the chat API is working as expected but not the embed API.
{"function":"print_timings","level":"INFO","line":290,"msg":"generation eval time ... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3831/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5572 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5572/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5572/comments | https://api.github.com/repos/ollama/ollama/issues/5572/events | https://github.com/ollama/ollama/pull/5572 | 2,398,302,645 | PR_kwDOJ0Z1Ps501stW | 5,572 | Create SECURITY.md | {
"login": "Senipostol",
"id": 168364989,
"node_id": "U_kgDOCgkLvQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168364989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Senipostol",
"html_url": "https://github.com/Senipostol",
"followers_url": "https://api.github.com/users/Sen... | [] | closed | false | null | [] | null | 1 | 2024-07-09T13:54:50 | 2024-08-17T02:33:09 | 2024-08-14T16:55:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5572",
"html_url": "https://github.com/ollama/ollama/pull/5572",
"diff_url": "https://github.com/ollama/ollama/pull/5572.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5572.patch",
"merged_at": null
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5572/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/977 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/977/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/977/comments | https://api.github.com/repos/ollama/ollama/issues/977/events | https://github.com/ollama/ollama/issues/977 | 1,975,159,125 | I_kwDOJ0Z1Ps51uolV | 977 | connect: can't assign requested address & $HOME variable not defined | {
"login": "tyhallcsu",
"id": 16804423,
"node_id": "MDQ6VXNlcjE2ODA0NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16804423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyhallcsu",
"html_url": "https://github.com/tyhallcsu",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 5 | 2023-11-02T22:46:05 | 2023-11-03T07:25:05 | 2023-11-03T07:25:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
% ollama run llama2
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connect: can't assign requested address
```
`/opt/homebrew/var/log/ollama.log`
Log file states:
```
Error: $HOME is not defined
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connect: can't assign reques... | {
"login": "tyhallcsu",
"id": 16804423,
"node_id": "MDQ6VXNlcjE2ODA0NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/16804423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyhallcsu",
"html_url": "https://github.com/tyhallcsu",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/977/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/587 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/587/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/587/comments | https://api.github.com/repos/ollama/ollama/issues/587/events | https://github.com/ollama/ollama/issues/587 | 1,911,634,901 | I_kwDOJ0Z1Ps5x8TvV | 587 | Clicking "restart to update Ollama" may not restart the Mac app | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api... | null | 0 | 2023-09-25T14:27:17 | 2023-09-28T19:29:19 | 2023-09-28T19:29:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/jmorganca/ollama/assets/5853428/ad263742-275a-4dca-a7f9-f7ea8b6408f7
Clicking the "restart to update Ollama" option to get a new version of the Ollama app did not close and update the desktop Mac app. Looking in the server logs, there are no errors or relevant information displayed.
Ollama is still respo... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/587/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/643 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/643/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/643/comments | https://api.github.com/repos/ollama/ollama/issues/643/events | https://github.com/ollama/ollama/issues/643 | 1,918,611,605 | I_kwDOJ0Z1Ps5yW7CV | 643 | Docs request: quantizations used for Llama models | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-09-29T05:16:20 | 2023-09-30T08:10:10 | 2023-09-30T04:57:03 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It's nice that https://ollama.ai/library/llama2 links the model sources to TheBloke.
Can we add what quantization is used? That way there's more traceability as to what model is being run/downloaded.
---
Update: I can see from the aliases that it's Q4_0
, or so so that you can simply add it those tools to any model that supports such a tool, and a way to integrate custom tools, maybe someone wants to integrate their SD3 workflow, but it should be provid... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6339/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6339/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2140 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2140/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2140/comments | https://api.github.com/repos/ollama/ollama/issues/2140/events | https://github.com/ollama/ollama/issues/2140 | 2,094,532,599 | I_kwDOJ0Z1Ps582Af3 | 2,140 | Embedding api returns null (sometimes) | {
"login": "Gal-Lahat",
"id": 73216615,
"node_id": "MDQ6VXNlcjczMjE2NjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/73216615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gal-Lahat",
"html_url": "https://github.com/Gal-Lahat",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-01-22T18:48:16 | 2024-03-13T23:02:37 | 2024-03-13T23:02:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is my code (C# .NET):
```cs
string url = "http://localhost:11434/api/embeddings";
string json = "{ \"model\": \"llama2:text\",\"prompt\": \"" + jsonSafeText + "\" }";
// get the response field from the json response
HttpClient client = new HttpClient();
var response = client.PostAsync(url, new StringCont... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2140/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2140/timeline | null | completed | false |
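A plausible contributor to intermittent nulls in reports like the C# snippet in this record is hand-concatenated JSON: a prompt containing quotes or newlines yields an invalid request body. A minimal sketch (Go rather than the reporter's C#; model name and prompt are examples) that lets a JSON encoder do the escaping:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Let the encoder escape quotes/newlines instead of building JSON by hand.
	body, err := json.Marshal(map[string]string{
		"model":  "llama2:text",
		"prompt": "text with \"quotes\" and\nnewlines",
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/embeddings",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The embeddings endpoint responds with {"embedding": [...]}.
	var out struct {
		Embedding []float64 `json:"embedding"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("embedding length:", len(out.Embedding))
}
```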
https://api.github.com/repos/ollama/ollama/issues/7097 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7097/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7097/comments | https://api.github.com/repos/ollama/ollama/issues/7097/events | https://github.com/ollama/ollama/pull/7097 | 2,565,214,977 | PR_kwDOJ0Z1Ps59kAsi | 7,097 | feat: configure auto startup in macos | {
"login": "hichemfantar",
"id": 34947993,
"node_id": "MDQ6VXNlcjM0OTQ3OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/34947993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hichemfantar",
"html_url": "https://github.com/hichemfantar",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 4 | 2024-10-04T00:49:51 | 2025-01-27T02:57:05 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7097",
"html_url": "https://github.com/ollama/ollama/pull/7097",
"diff_url": "https://github.com/ollama/ollama/pull/7097.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7097.patch",
"merged_at": null
} | This pull request adds a new function called `toggleAutoStartup` and a corresponding menu item to the application. The function allows the user to enable or disable auto startup of the application. When the function is called, it updates the application's login item settings and displays a notification to indicate whet... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7097/reactions",
"total_count": 13,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7097/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1734 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1734/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1734/comments | https://api.github.com/repos/ollama/ollama/issues/1734/events | https://github.com/ollama/ollama/issues/1734 | 2,058,552,441 | I_kwDOJ0Z1Ps56swR5 | 1,734 | Ollama - Llava Model Unable to detect image uploaded (WSL2 on Windows10) | {
"login": "m4ttgit",
"id": 27547776,
"node_id": "MDQ6VXNlcjI3NTQ3Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/27547776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m4ttgit",
"html_url": "https://github.com/m4ttgit",
"followers_url": "https://api.github.com/users/m4ttgi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2023-12-28T15:15:52 | 2024-04-27T15:57:07 | 2023-12-29T14:00:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Trying to run the Llava model using WSL2 on Windows 10. Ollama version is 0.1.16.
Got this error message.

How do I fix this?
Thanks | {
"login": "m4ttgit",
"id": 27547776,
"node_id": "MDQ6VXNlcjI3NTQ3Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/27547776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m4ttgit",
"html_url": "https://github.com/m4ttgit",
"followers_url": "https://api.github.com/users/m4ttgi... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1734/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8525 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8525/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8525/comments | https://api.github.com/repos/ollama/ollama/issues/8525/events | https://github.com/ollama/ollama/issues/8525 | 2,803,116,322 | I_kwDOJ0Z1Ps6nFCki | 8,525 | Ollama Linux Service vs. Ollama Serve (Changing Ports) | {
"login": "slyyyle",
"id": 78447050,
"node_id": "MDQ6VXNlcjc4NDQ3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/78447050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyyyle",
"html_url": "https://github.com/slyyyle",
"followers_url": "https://api.github.com/users/slyyyl... | [] | closed | false | null | [] | null | 3 | 2025-01-22T00:52:50 | 2025-01-24T09:27:25 | 2025-01-24T09:27:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Default Setup:
Following the installation guide, Ollama works without issues when hosted on the default port.
Changing Address to 0.0.0.0:
I was able to successfully change the address to 0.0.0.0, which works fine. However, when trying to change the port, I encountered issues.
Modifying the Service File:
When I modif... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8525/timeline | null | completed | false |
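For port changes like the one described in this record, a minimal sketch of the usual systemd approach; the drop-in path and port value are assumptions, not taken from the report:

```sh
# Hypothetical drop-in at /etc/systemd/system/ollama.service.d/override.conf:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11435"
# Reload units and restart so the new bind address takes effect:
sudo systemctl daemon-reload
sudo systemctl restart ollama
# The CLI defaults to 127.0.0.1:11434, so point it at the new port too:
OLLAMA_HOST=127.0.0.1:11435 ollama list
```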
https://api.github.com/repos/ollama/ollama/issues/67 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/67/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/67/comments | https://api.github.com/repos/ollama/ollama/issues/67/events | https://github.com/ollama/ollama/pull/67 | 1,799,442,678 | PR_kwDOJ0Z1Ps5VOs74 | 67 | app: write logs to ~/.ollama/logs | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [] | closed | false | null | [] | null | 0 | 2023-07-11T17:42:13 | 2023-07-11T18:45:53 | 2023-07-11T18:45:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/67",
"html_url": "https://github.com/ollama/ollama/pull/67",
"diff_url": "https://github.com/ollama/ollama/pull/67.diff",
"patch_url": "https://github.com/ollama/ollama/pull/67.patch",
"merged_at": "2023-07-11T18:45:21"
} | null | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/67/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/67/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3842 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3842/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3842/comments | https://api.github.com/repos/ollama/ollama/issues/3842/events | https://github.com/ollama/ollama/issues/3842 | 2,258,798,122 | I_kwDOJ0Z1Ps6GooYq | 3,842 | mixtao-7bx2-moe-v8.1 cannot work | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2024-04-23T12:49:43 | 2024-04-23T14:22:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I just changed my computer and installed Ollama, and found that this model is not working.
https://ollama.com/eramax/mixtao-7bx2-moe-v8.1
```
llama_model_loader: loaded meta data with 25 key-value pairs and 419 tensors from C:\Users\eramax\.ollama\models\blobs\sha256-1e360f0f98ef2687a01f8775aee6... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3842/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2610 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2610/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2610/comments | https://api.github.com/repos/ollama/ollama/issues/2610/events | https://github.com/ollama/ollama/issues/2610 | 2,144,009,930 | I_kwDOJ0Z1Ps5_yv7K | 2,610 | Return citations for given answers | {
"login": "SteffenBrinckmann",
"id": 39419674,
"node_id": "MDQ6VXNlcjM5NDE5Njc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39419674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SteffenBrinckmann",
"html_url": "https://github.com/SteffenBrinckmann",
"followers_url": "https... | [] | closed | false | null | [] | null | 1 | 2024-02-20T10:08:37 | 2024-02-27T07:53:19 | 2024-02-27T07:53:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hey,
would it be possible to return citations too, just like Perplexity does?
Best, Steffen | {
"login": "SteffenBrinckmann",
"id": 39419674,
"node_id": "MDQ6VXNlcjM5NDE5Njc0",
"avatar_url": "https://avatars.githubusercontent.com/u/39419674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SteffenBrinckmann",
"html_url": "https://github.com/SteffenBrinckmann",
"followers_url": "https... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2610/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/394 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/394/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/394/comments | https://api.github.com/repos/ollama/ollama/issues/394/events | https://github.com/ollama/ollama/issues/394 | 1,860,928,542 | I_kwDOJ0Z1Ps5u64Qe | 394 | Ollama on VMware Photon OS | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasot... | [] | closed | false | null | [] | null | 1 | 2023-08-22T08:37:21 | 2023-08-22T23:55:07 | 2023-08-22T23:55:07 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I'm tinkering with Ollama on VMware Photon OS.
The langchain example works, but the langchain-document example does not.
This is OK:
```
tdnf update -y
tdnf install -y git go build-essential
git clone https://github.com/jmorganca/ollama
cd ollama
go build .
tdnf install -y python3-pip
pip3 instal... | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/394/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7799 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7799/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7799/comments | https://api.github.com/repos/ollama/ollama/issues/7799/events | https://github.com/ollama/ollama/issues/7799 | 2,683,643,510 | I_kwDOJ0Z1Ps6f9SZ2 | 7,799 | langchain_ollama tool_calls is None | {
"login": "UICJohn",
"id": 4167985,
"node_id": "MDQ6VXNlcjQxNjc5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4167985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UICJohn",
"html_url": "https://github.com/UICJohn",
"followers_url": "https://api.github.com/users/UICJohn/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 6 | 2024-11-22T15:41:07 | 2024-11-23T13:52:11 | 2024-11-23T01:17:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
` File "/workspaces/vivichains-base/.venv/lib/python3.11/site-packages/langchain_ollama/chat_models.py", line 732, in _agenerate
final_chunk = await self._achat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/vivichains-base/.venv... | {
"login": "UICJohn",
"id": 4167985,
"node_id": "MDQ6VXNlcjQxNjc5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4167985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UICJohn",
"html_url": "https://github.com/UICJohn",
"followers_url": "https://api.github.com/users/UICJohn/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7799/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7799/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/743 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/743/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/743/comments | https://api.github.com/repos/ollama/ollama/issues/743/events | https://github.com/ollama/ollama/pull/743 | 1,933,620,549 | PR_kwDOJ0Z1Ps5cShZI | 743 | handle upstream proxies | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-09T18:51:18 | 2023-10-10T16:59:07 | 2023-10-10T16:59:06 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/743",
"html_url": "https://github.com/ollama/ollama/pull/743",
"diff_url": "https://github.com/ollama/ollama/pull/743.diff",
"patch_url": "https://github.com/ollama/ollama/pull/743.patch",
"merged_at": "2023-10-10T16:59:06"
`http.ProxyFromEnvironment` returns the appropriate `*_PROXY` for the request, e.g. `HTTP_PROXY` for `http://` requests and `HTTPS_PROXY` for `https://` requests.
Resolves #729 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/743/timeline | null | null | true |
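A minimal, self-contained sketch of the mechanism this PR names: `http.ProxyFromEnvironment` picks `HTTPS_PROXY` for `https://` requests, `HTTP_PROXY` for `http://` requests, and honors `NO_PROXY`. The URL below is an example:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Resolve the proxy per request from the environment variables.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
	}
	resp, err := client.Get("https://registry.ollama.ai/v2/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```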
https://api.github.com/repos/ollama/ollama/issues/1968 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1968/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1968/comments | https://api.github.com/repos/ollama/ollama/issues/1968/events | https://github.com/ollama/ollama/pull/1968 | 2,079,757,821 | PR_kwDOJ0Z1Ps5j-YbF | 1,968 | fix: request retry with error | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-12T21:34:32 | 2024-01-16T18:33:51 | 2024-01-16T18:33:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1968",
"html_url": "https://github.com/ollama/ollama/pull/1968",
"diff_url": "https://github.com/ollama/ollama/pull/1968.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1968.patch",
"merged_at": "2024-01-16T18:33:50"
} | This fixes a subtle bug with makeRequestWithRetry where an HTTP status error on a retried request will potentially not return the right error.
When a request is retried on Unauthorized, the second request does not go through the same error handling as the first request. For example, if the second request returns wit... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1968/timeline | null | null | true |
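A sketch of the shape of the fix described in this PR, with hypothetical names (this is not Ollama's actual code): every attempt, including the single retry on 401 Unauthorized, flows through the same status-error handling, so a failing retry still surfaces the right error to the caller:

```go
package main

import (
	"fmt"
	"net/http"
)

func doWithAuthRetry(client *http.Client, newReq func() (*http.Request, error)) (*http.Response, error) {
	for attempt := 0; attempt < 2; attempt++ {
		req, err := newReq()
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err != nil {
			return nil, err
		}
		// Retry exactly once on Unauthorized (e.g. after refreshing a token).
		if resp.StatusCode == http.StatusUnauthorized && attempt == 0 {
			resp.Body.Close()
			continue
		}
		// Shared handling: applies to the first try and the retry alike.
		if resp.StatusCode >= 400 {
			status := resp.Status
			resp.Body.Close()
			return nil, fmt.Errorf("request failed: %s", status)
		}
		return resp, nil
	}
	return nil, fmt.Errorf("unreachable")
}

func main() {
	resp, err := doWithAuthRetry(http.DefaultClient, func() (*http.Request, error) {
		return http.NewRequest("GET", "http://localhost:11434/api/tags", nil)
	})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```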
https://api.github.com/repos/ollama/ollama/issues/8679 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8679/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8679/comments | https://api.github.com/repos/ollama/ollama/issues/8679/events | https://github.com/ollama/ollama/issues/8679 | 2,819,621,617 | I_kwDOJ0Z1Ps6oEALx | 8,679 | AMD RX 6750 GPU not recognized by Ollama on Arch Linux despite HSA_OVERRIDE_GFX_VERSION | {
"login": "Guedxx",
"id": 148347673,
"node_id": "U_kgDOCNebGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/148347673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guedxx",
"html_url": "https://github.com/Guedxx",
"followers_url": "https://api.github.com/users/Guedxx/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-30T00:21:07 | 2025-01-30T00:31:06 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm running Arch Linux with an AMD RX 6750 GPU. Ollama fails to recognize my GPU as compatible, even after setting the Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0" environment variable. I've tried several steps to resolve the issue, but nothing has worked so far.
time=2025-01-29T21:15:25.499-0... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8679/timeline | null | null | false |
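For reports like this one, a couple of hedged checks (assuming the standard systemd install) to confirm the override actually reaches the server process rather than only the interactive shell:

```sh
# Show the environment the service was started with:
systemctl show ollama --property=Environment
# Look for the override being picked up in the server logs:
journalctl -u ollama | grep -i hsa_override
```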
https://api.github.com/repos/ollama/ollama/issues/4286 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4286/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4286/comments | https://api.github.com/repos/ollama/ollama/issues/4286/events | https://github.com/ollama/ollama/issues/4286 | 2,287,791,130 | I_kwDOJ0Z1Ps6IXOwa | 4,286 | can't copy command correctly on ollama.com | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/tao... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | [
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.git... | null | 2 | 2024-05-09T14:16:19 | 2024-05-09T16:34:06 | 2024-05-09T16:18:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As shown in the picture, the full command is not copied.
1. I click the webpage link and enter this page with the default tag
2. press the copy button and paste into the terminal
3. only part of the command is copied
But if I choose another tag and copy the command, it is OK. Then I choose the default tag, which is s... | {
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4286/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1407 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1407/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1407/comments | https://api.github.com/repos/ollama/ollama/issues/1407/events | https://github.com/ollama/ollama/issues/1407 | 2,029,423,183 | I_kwDOJ0Z1Ps549opP | 1,407 | When using chat, no error when param names are wrong | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2023-12-06T21:34:49 | 2024-02-20T01:23:23 | 2024-02-20T01:23:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This works and gives the output I expect:
```
POST http://localhost:11434/api/chat
Content-Type: application/json
{
  "model": "llama2",
  "messages": [
    {
      "role": "user",
      "content": "Why is the sky blue"
    }
  ]
}
```
But this:
```
POST http://localhost:11434/api/chat
Content-T... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1407/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4540 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4540/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4540/comments | https://api.github.com/repos/ollama/ollama/issues/4540/events | https://github.com/ollama/ollama/issues/4540 | 2,306,182,760 | I_kwDOJ0Z1Ps6JdY5o | 4,540 | "ollama is not running" issue after changing the host ip | {
"login": "hknatm",
"id": 132488695,
"node_id": "U_kgDOB-Wd9w",
"avatar_url": "https://avatars.githubusercontent.com/u/132488695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hknatm",
"html_url": "https://github.com/hknatm",
"followers_url": "https://api.github.com/users/hknatm/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-05-20T14:57:18 | 2025-01-06T14:59:07 | 2024-07-03T23:05:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When it first starts, everything is normal. After I change the host address to 192.168.1.10:11434 in /etc/systemd/system/ollama.service by adding an Environment entry, running "ollama pull [model]" throws an error: "Error: could not connect to ollama app, is it running?", but when I curl that IP that I chang... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4540/timeline | null | completed | false |
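A minimal sketch of why `ollama pull` can fail after rebinding the server as described in this record: the CLI still targets the default 127.0.0.1:11434 unless told otherwise. The address and model name below are examples:

```sh
# Tell the client where the rebound server actually listens:
OLLAMA_HOST=192.168.1.10:11434 ollama pull llama2
```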
https://api.github.com/repos/ollama/ollama/issues/1760 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1760/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1760/comments | https://api.github.com/repos/ollama/ollama/issues/1760/events | https://github.com/ollama/ollama/issues/1760 | 2,062,548,464 | I_kwDOJ0Z1Ps567_3w | 1,760 | [WSL1] Ollama is outright ignoring keyboard input | {
"login": "TheSystemGuy1337",
"id": 61162037,
"node_id": "MDQ6VXNlcjYxMTYyMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/61162037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheSystemGuy1337",
"html_url": "https://github.com/TheSystemGuy1337",
"followers_url": "https://... | [] | closed | false | null | [] | null | 9 | 2024-01-02T15:05:48 | 2024-01-04T00:07:14 | 2024-01-02T18:35:58 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It just had to happen. After running ollama, any attempt to type out a message fails, with the program acting like you have not pressed a single key on the keyboard. I am using a Unicomp New Model M, which is an industry-standard ANSI/ASCII QWERTY 108 key keyboard, and this "program" just doesn't want to touch its out... | {
"login": "TheSystemGuy1337",
"id": 61162037,
"node_id": "MDQ6VXNlcjYxMTYyMDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/61162037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheSystemGuy1337",
"html_url": "https://github.com/TheSystemGuy1337",
"followers_url": "https://... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1760/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7570 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7570/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7570/comments | https://api.github.com/repos/ollama/ollama/issues/7570/events | https://github.com/ollama/ollama/issues/7570 | 2,643,220,474 | I_kwDOJ0Z1Ps6djFf6 | 7,570 | How to install Ollama in a distributed manner | {
"login": "smileyboy2019",
"id": 59221294,
"node_id": "MDQ6VXNlcjU5MjIxMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59221294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smileyboy2019",
"html_url": "https://github.com/smileyboy2019",
"followers_url": "https://api.githu... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-08T08:05:43 | 2024-11-17T14:03:43 | 2024-11-17T14:03:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How can I connect two servers with 4090 graphics cards and provide a unified service? | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7570/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6333 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6333/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6333/comments | https://api.github.com/repos/ollama/ollama/issues/6333/events | https://github.com/ollama/ollama/issues/6333 | 2,462,676,268 | I_kwDOJ0Z1Ps6SyXUs | 6,333 | "couldn't remove unused layers: invalid character '\x00' looking for beginning of value" | {
"login": "FellowTraveler",
"id": 339191,
"node_id": "MDQ6VXNlcjMzOTE5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/339191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FellowTraveler",
"html_url": "https://github.com/FellowTraveler",
"followers_url": "https://api.github... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-08-13T08:02:03 | 2024-08-18T00:02:12 | 2024-08-15T19:20:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="934" alt="image" src="https://github.com/user-attachments/assets/937cbcd5-ac5f-4a0a-ba7b-97dc6327efa9">
```
(base) ollama % ollama pull llama3.1:8b-instruct-q8_0
pulling manifest
pulling cc04e85e1f86... 100% ▕█████████████████████████████████████▏ 8.5 GB ... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6333/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7323 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7323/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7323/comments | https://api.github.com/repos/ollama/ollama/issues/7323/events | https://github.com/ollama/ollama/issues/7323 | 2,606,116,550 | I_kwDOJ0Z1Ps6bVi7G | 7,323 | ollama ps reporting "100% GPU" while model is running on CPU only. | {
"login": "Liu-Eroteme",
"id": 129079288,
"node_id": "U_kgDOB7GX-A",
"avatar_url": "https://avatars.githubusercontent.com/u/129079288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liu-Eroteme",
"html_url": "https://github.com/Liu-Eroteme",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 10 | 2024-10-22T17:56:23 | 2025-01-27T09:39:07 | 2024-12-02T14:43:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Running llama 3.1 70b q3km on 2x4090 when there is already a colbert retriever loaded (takes up ~2800MiB VRAM) should work, but doesn't - ollama ps reports that the model is running and using the GPU:
`llama3.1:70b-instruct-q3_K_M 0e97a7709799 40 GB 100% GPU Less than a second ... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7323/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1966 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1966/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1966/comments | https://api.github.com/repos/ollama/ollama/issues/1966/events | https://github.com/ollama/ollama/pull/1966 | 2,079,729,874 | PR_kwDOJ0Z1Ps5j-SUy | 1,966 | improve cuda detection (rel. issue #1704) | {
"login": "fpreiss",
"id": 17441607,
"node_id": "MDQ6VXNlcjE3NDQxNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/17441607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fpreiss",
"html_url": "https://github.com/fpreiss",
"followers_url": "https://api.github.com/users/fpreis... | [] | closed | false | null | [] | null | 0 | 2024-01-12T21:12:54 | 2024-01-15T02:00:11 | 2024-01-15T02:00:11 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1966",
"html_url": "https://github.com/ollama/ollama/pull/1966",
"diff_url": "https://github.com/ollama/ollama/pull/1966.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1966.patch",
"merged_at": "2024-01-15T02:00:11"
} | This pull request supersedes https://github.com/jmorganca/ollama/pull/1880 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1966/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1807 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1807/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1807/comments | https://api.github.com/repos/ollama/ollama/issues/1807/events | https://github.com/ollama/ollama/issues/1807 | 2,067,376,071 | I_kwDOJ0Z1Ps57OafH | 1,807 | [ISSUES] I think it would be interesting to have different templates. | {
"login": "rgaidot",
"id": 5269,
"node_id": "MDQ6VXNlcjUyNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rgaidot",
"html_url": "https://github.com/rgaidot",
"followers_url": "https://api.github.com/users/rgaidot/followers"... | [] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 2 | 2024-01-05T13:43:41 | 2024-03-14T22:43:59 | 2024-03-14T22:43:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I think it would be interesting to have different templates (.github/**/*.md) for various purposes within your repo. Templates can significantly enhance efficiency and clarity in communication, especially when dealing with different aspects of your code/repo. Imagine having specific templates tailored for bug reports, ... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1807/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7856 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7856/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7856/comments | https://api.github.com/repos/ollama/ollama/issues/7856/events | https://github.com/ollama/ollama/issues/7856 | 2,697,641,318 | I_kwDOJ0Z1Ps6gyr1m | 7,856 | Ddos of parsing markdown in frontend & images | {
"login": "remco-pc",
"id": 8077908,
"node_id": "MDQ6VXNlcjgwNzc5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8077908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remco-pc",
"html_url": "https://github.com/remco-pc",
"followers_url": "https://api.github.com/users/remco... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-11-27T08:39:12 | 2024-11-27T08:40:27 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
If the frontend converts markdown strings to supported HTML elements, then on every token the frontend requests the same image over and over again and starts downloading all images on every new token.
So markdown conversion should be done on the backend to avoid DDoS attacks through wrong ja... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7856/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7956 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7956/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7956/comments | https://api.github.com/repos/ollama/ollama/issues/7956/events | https://github.com/ollama/ollama/issues/7956 | 2,721,367,886 | I_kwDOJ0Z1Ps6iNMdO | 7,956 | Low GPU usage on second GPU | {
"login": "frenzybiscuit",
"id": 190028151,
"node_id": "U_kgDOC1OZdw",
"avatar_url": "https://avatars.githubusercontent.com/u/190028151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frenzybiscuit",
"html_url": "https://github.com/frenzybiscuit",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 25 | 2024-12-05T20:50:03 | 2024-12-14T22:30:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am on the 0.5.0 release (which links to 0.4.8-rc0) and using Qwen 2.5 32b Q5 with 32k context and flash attention with q8_0 KV cache.
I have a 3090 and 2080ti.
Ollama is putting 22GB on the 3090 and 5.3GB on the 2080ti.
When running a prompt the 3090 is at 80%-90% GPU usage while the ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7956/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2755 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2755/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2755/comments | https://api.github.com/repos/ollama/ollama/issues/2755/events | https://github.com/ollama/ollama/issues/2755 | 2,153,025,711 | I_kwDOJ0Z1Ps6AVJCv | 2,755 | New Model Request: BioMistral model? | {
"login": "unclecode",
"id": 12494079,
"node_id": "MDQ6VXNlcjEyNDk0MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/12494079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unclecode",
"html_url": "https://github.com/unclecode",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 3 | 2024-02-26T00:37:01 | 2024-05-15T21:05:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I wonder if you have any plans to add BioMistral to the library?
Thanks | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2755/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3858 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3858/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3858/comments | https://api.github.com/repos/ollama/ollama/issues/3858/events | https://github.com/ollama/ollama/pull/3858 | 2,260,007,235 | PR_kwDOJ0Z1Ps5tiSSm | 3,858 | types/model: restrict digest hash part to a minimum of 2 characters | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 0 | 2024-04-24T00:36:11 | 2024-04-24T01:24:18 | 2024-04-24T01:24:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3858",
"html_url": "https://github.com/ollama/ollama/pull/3858",
"diff_url": "https://github.com/ollama/ollama/pull/3858.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3858.patch",
"merged_at": "2024-04-24T01:24:17"
} | This allows users of a valid Digest to know it has a minimum of 2 characters in the hash part for use when sharding.
This is a reasonable restriction, as the hash part is a SHA-256 hash, which is 64 characters long and is the hash in common use. There is no anticipation of using a hash with fewer than 2 characters.
... | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3858/timeline | null | null | true |
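A minimal sketch (in Go; the directory layout is an assumption for illustration, not Ollama's actual storage scheme) of the sharding use case this PR mentions, where the first two hex characters of the hash part pick a subdirectory:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func shardPath(root, digest string) (string, error) {
	// Expect digests of the form "sha256-<hex>"; the hash part of a full
	// SHA-256 digest is 64 characters, so >= 2 is guaranteed in practice.
	hash, ok := strings.CutPrefix(digest, "sha256-")
	if !ok || len(hash) < 2 {
		return "", fmt.Errorf("digest %q has no shardable hash part", digest)
	}
	return filepath.Join(root, hash[:2], hash), nil
}

func main() {
	p, err := shardPath("/var/lib/blobs", "sha256-1e360f0f98ef")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /var/lib/blobs/1e/1e360f0f98ef
}
```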
https://api.github.com/repos/ollama/ollama/issues/7495 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7495/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7495/comments | https://api.github.com/repos/ollama/ollama/issues/7495/events | https://github.com/ollama/ollama/issues/7495 | 2,633,538,249 | I_kwDOJ0Z1Ps6c-JrJ | 7,495 | mac Errors when running | {
"login": "shan23chen",
"id": 44418759,
"node_id": "MDQ6VXNlcjQ0NDE4NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/44418759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shan23chen",
"html_url": "https://github.com/shan23chen",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q... | closed | false | null | [] | null | 5 | 2024-11-04T18:24:24 | 2025-01-13T00:52:19 | 2025-01-13T00:52:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
`ollama run gemma2:2b`
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma2/manifests/2b": write tcp [2601:19b:0:b8a0:915f:c8c:3de4:9c5]:50022->[2606:4700:3034::ac43:b6e5]:443: write: socket is not connected
### OS
macOS
### GPU
Apple
### CPU
... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7495/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1754 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1754/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1754/comments | https://api.github.com/repos/ollama/ollama/issues/1754/events | https://github.com/ollama/ollama/issues/1754 | 2,061,609,837 | I_kwDOJ0Z1Ps564att | 1,754 | How to add custom LLM models from Huggingface | {
"login": "yiouyou",
"id": 14249712,
"node_id": "MDQ6VXNlcjE0MjQ5NzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14249712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yiouyou",
"html_url": "https://github.com/yiouyou",
"followers_url": "https://api.github.com/users/yiouyo... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 5 | 2024-01-01T15:02:24 | 2025-01-28T03:49:18 | 2024-01-02T11:27:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have some fine-tuned models saved on Huggingface. How do I add or convert a custom LLM into an Ollama-compatible version? | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1754/timeline | null | completed | false |
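For questions like this one, a minimal sketch of the documented import path (file and model names are examples): convert the Hugging Face weights to GGUF first, then import them with a Modelfile:

```sh
# Hypothetical file names; the GGUF must be produced first (e.g. with
# llama.cpp's conversion scripts) from the fine-tuned HF checkpoint.
cat > Modelfile <<'EOF'
FROM ./my-finetune.Q4_K_M.gguf
EOF
ollama create my-finetune -f Modelfile
ollama run my-finetune
```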
https://api.github.com/repos/ollama/ollama/issues/4933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4933/comments | https://api.github.com/repos/ollama/ollama/issues/4933/events | https://github.com/ollama/ollama/issues/4933 | 2,341,691,507 | I_kwDOJ0Z1Ps6Lk2Bz | 4,933 | Error: Pull Model Manifest - Timeout | {
"login": "ulhaqi12",
"id": 44068298,
"node_id": "MDQ6VXNlcjQ0MDY4Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44068298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ulhaqi12",
"html_url": "https://github.com/ulhaqi12",
"followers_url": "https://api.github.com/users/ulh... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-08T14:54:03 | 2024-08-11T12:50:51 | 2024-06-18T11:19:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
I am using the latest docker image of Ollama (0.1.40). Here are the contents of my docker-compose file:
```
ollama:
  image: internal-mirror/ollama/ollama
  container_name: ollama
  ports:
    - "11434:11434"
  volumes:
    - ollama:/root/.ollama
  restart: u... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4933/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6176 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6176/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6176/comments | https://api.github.com/repos/ollama/ollama/issues/6176/events | https://github.com/ollama/ollama/issues/6176 | 2,448,260,013 | I_kwDOJ0Z1Ps6R7Xut | 6,176 | System Prompts can not work on the first round. | {
"login": "DirtyKnightForVi",
"id": 116725810,
"node_id": "U_kgDOBvUYMg",
"avatar_url": "https://avatars.githubusercontent.com/u/116725810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DirtyKnightForVi",
"html_url": "https://github.com/DirtyKnightForVi",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 26 | 2024-08-05T10:59:26 | 2024-12-02T20:09:52 | 2024-12-02T20:09:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # What is the issue?
## Description
**Bug Summary:**
System prompts do not take effect on the first round.
**Actual Behavior:**
For a specific task scenario, there might be a special system prompt. However, in the current version (at least starting from 3.10), an additional round of conversation is needed before th... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6176/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6176/timeline | null | completed | false |
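For context on reports like this one, a minimal sketch of a first-round `/api/chat` request that carries the system prompt as an explicit message; the field names follow the public chat API, while the model name and prompt text are examples:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

func main() {
	// The system prompt travels as the first message of the very first turn.
	body, err := json.Marshal(map[string]any{
		"model":  "llama3.1",
		"stream": false,
		"messages": []message{
			{Role: "system", Content: "You answer in exactly one sentence."},
			{Role: "user", Content: "Introduce yourself."},
		},
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/chat",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```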
https://api.github.com/repos/ollama/ollama/issues/2322 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2322/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2322/comments | https://api.github.com/repos/ollama/ollama/issues/2322/events | https://github.com/ollama/ollama/issues/2322 | 2,114,448,109 | I_kwDOJ0Z1Ps5-B-rt | 2,322 | Run Ollama models stored on external disk | {
"login": "B-Gendron",
"id": 95307996,
"node_id": "U_kgDOBa5I3A",
"avatar_url": "https://avatars.githubusercontent.com/u/95307996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/B-Gendron",
"html_url": "https://github.com/B-Gendron",
"followers_url": "https://api.github.com/users/B-Gendro... | [] | closed | false | null | [] | null | 7 | 2024-02-02T09:07:51 | 2024-10-10T18:26:13 | 2024-02-05T19:22:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As I went through the whole documentation, I am still a bit confused about how the models are saved when doing `ollama pull` and how I can use them. For instance, as I don't have much storage on my computer, I would like to pull several models and then save the whole `/.ollama/models/blobs/` directory on an external d...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2322/timeline | null | completed | false |
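The usual answer to this question is the `OLLAMA_MODELS` environment variable, which relocates the whole model store; a minimal sketch, assuming the external disk is mounted at `/mnt/external` (an illustrative path):
```sh
# Relocate the model store before starting the server.
export OLLAMA_MODELS=/mnt/external/ollama-models   # assumed mount point
mkdir -p "$OLLAMA_MODELS"
ollama serve &

# Models pulled from now on land on the external disk.
ollama pull llama3.1
```
If Ollama runs as a systemd service, the same variable would need to be set in the service environment rather than in an interactive shell.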
https://api.github.com/repos/ollama/ollama/issues/5095 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5095/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5095/comments | https://api.github.com/repos/ollama/ollama/issues/5095/events | https://github.com/ollama/ollama/issues/5095 | 2,356,871,776 | I_kwDOJ0Z1Ps6MewJg | 5,095 | add support Alibaba-NLP/gte-Qwen2-7B-instruct | {
"login": "louyongjiu",
"id": 16408477,
"node_id": "MDQ6VXNlcjE2NDA4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16408477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louyongjiu",
"html_url": "https://github.com/louyongjiu",
"followers_url": "https://api.github.com/use... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 6 | 2024-06-17T09:35:32 | 2024-07-09T19:16:35 | 2024-06-27T09:17:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct

| {
"login": "louyongjiu",
"id": 16408477,
"node_id": "MDQ6VXNlcjE2NDA4NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16408477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louyongjiu",
"html_url": "https://github.com/louyongjiu",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5095/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5095/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7279 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7279/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7279/comments | https://api.github.com/repos/ollama/ollama/issues/7279/events | https://github.com/ollama/ollama/issues/7279 | 2,600,692,910 | I_kwDOJ0Z1Ps6bA2yu | 7,279 | Ollama Docker image 0.4.0-rc3-rocm crashes due to missing shared library | {
"login": "ic4-y",
"id": 61844926,
"node_id": "MDQ6VXNlcjYxODQ0OTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/61844926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ic4-y",
"html_url": "https://github.com/ic4-y",
"followers_url": "https://api.github.com/users/ic4-y/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-10-20T17:14:21 | 2024-10-22T19:54:16 | 2024-10-22T19:54:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I just tried out the latest 0.4.0-rc3-rocm docker image and the `ollama_llama_server` crashes with
```ollama-rocm | /usr/lib/ollama/runners/rocm/ollama_llama_server: error while loading shared libraries: libelf.so.1: cannot open shared object file: No such file or directory```
I am running... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7279/timeline | null | completed | false |
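A quick way to confirm exactly which shared libraries the runner is missing, using the binary path quoted in the report, is `ldd` inside the image; a diagnostic sketch, assuming the tag from the report:
```sh
# Lines ending in "not found" are the unresolved dependencies.
docker run --rm --entrypoint ldd ollama/ollama:0.4.0-rc3-rocm \
  /usr/lib/ollama/runners/rocm/ollama_llama_server | grep -i "not found"
```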
https://api.github.com/repos/ollama/ollama/issues/2621 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2621/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2621/comments | https://api.github.com/repos/ollama/ollama/issues/2621/events | https://github.com/ollama/ollama/issues/2621 | 2,145,600,139 | I_kwDOJ0Z1Ps5_40KL | 2,621 | Request to allow installation to a different location | {
"login": "QJAG1024",
"id": 123146382,
"node_id": "U_kgDOB1cQjg",
"avatar_url": "https://avatars.githubusercontent.com/u/123146382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QJAG1024",
"html_url": "https://github.com/QJAG1024",
"followers_url": "https://api.github.com/users/QJAG1024/... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-02-21T01:35:37 | 2024-03-02T04:21:12 | 2024-03-02T04:21:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I noticed that Ollama can only be installed on **volume C**.
Although there is enough space for me to install models, I prefer to install programs on volume D.
And some people don't even have enough space to install models on volume C.
So I think they should have the option to install Ollama to a different ... | {
"login": "QJAG1024",
"id": 123146382,
"node_id": "U_kgDOB1cQjg",
"avatar_url": "https://avatars.githubusercontent.com/u/123146382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QJAG1024",
"html_url": "https://github.com/QJAG1024",
"followers_url": "https://api.github.com/users/QJAG1024/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2621/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2621/timeline | null | completed | false |
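For what it's worth, recent Windows installers do accept an alternate directory, and the model store can be moved independently; a sketch with illustrative paths (worth verifying against the docs for your installer version):
```powershell
# Install the application itself to another drive (documented /DIR flag).
.\OllamaSetup.exe /DIR="D:\Ollama"

# Keep the (much larger) models on another drive as well.
setx OLLAMA_MODELS "D:\OllamaModels"
```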
https://api.github.com/repos/ollama/ollama/issues/8397 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8397/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8397/comments | https://api.github.com/repos/ollama/ollama/issues/8397/events | https://github.com/ollama/ollama/issues/8397 | 2,782,680,124 | I_kwDOJ0Z1Ps6l3FQ8 | 8,397 | [UNK_BYTE_…] Output with gemma-2b-it in Ollama | {
"login": "TsurHerman",
"id": 3405405,
"node_id": "MDQ6VXNlcjM0MDU0MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3405405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TsurHerman",
"html_url": "https://github.com/TsurHerman",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2025-01-12T20:34:12 | 2025-01-16T13:26:20 | 2025-01-16T13:25:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When running `ollama run` with the gemma-2b-it model, the generated text contains [UNK_BYTE_...] markers interleaved with normal text instead of the expected characters.
>
> ollama run Al
> >>> hi
> Hi[UNK_BYTE_0xe29681▁there]there![UNK_BYTE_0xe29681▁👋][UNK_BYTE_0xf09f918b▁👋]▁▁
... | {
"login": "TsurHerman",
"id": 3405405,
"node_id": "MDQ6VXNlcjM0MDU0MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3405405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TsurHerman",
"html_url": "https://github.com/TsurHerman",
"followers_url": "https://api.github.com/users... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8397/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8032/comments | https://api.github.com/repos/ollama/ollama/issues/8032/events | https://github.com/ollama/ollama/pull/8032 | 2,730,947,526 | PR_kwDOJ0Z1Ps6EwDHg | 8,032 | Remove unused runner CpuFeatures | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-12-10T18:57:02 | 2024-12-10T20:59:43 | 2024-12-10T20:59:39 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8032",
"html_url": "https://github.com/ollama/ollama/pull/8032",
"diff_url": "https://github.com/ollama/ollama/pull/8032.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8032.patch",
"merged_at": "2024-12-10T20:59:39"
} | The final implementation of #7499 removed dynamic vector requirements in favor of a simpler [filename-based model](https://github.com/ollama/ollama/blob/main/runners/common.go#L125-L132), and this was leftover logic that is no longer needed. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8032/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2740 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2740/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2740/comments | https://api.github.com/repos/ollama/ollama/issues/2740/events | https://github.com/ollama/ollama/issues/2740 | 2,152,628,134 | I_kwDOJ0Z1Ps6ATn-m | 2,740 | Cannot pass file as suggested in example with windows | {
"login": "mattjoyce",
"id": 278869,
"node_id": "MDQ6VXNlcjI3ODg2OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/278869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattjoyce",
"html_url": "https://github.com/mattjoyce",
"followers_url": "https://api.github.com/users/matt... | [] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-02-25T06:45:42 | 2024-06-17T16:51:47 | 2024-03-12T21:48:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ollama version is 0.1.27
Here's the example provided in the documentation.
> ollama run llama2 "Summarize this file: $(cat README.md)"
Here's what I tried using the Windows version, and the response:
> ollama run phi "summarize this file $(type 5_QGU5D7mLk.md)"
> I'm sorry, but as an AI language model, I can... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2740/timeline | null | completed | false |
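The `$(cat README.md)` form in the docs is a POSIX-shell substitution; `cmd.exe` passes it through literally, which matches the model's confused reply above. A sketch of a PowerShell equivalent that does expand the file contents (the file name is illustrative):
```powershell
# $(...) expands in PowerShell; -Raw returns the file as a single string.
ollama run phi "Summarize this file: $(Get-Content .\notes.md -Raw)"
```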
https://api.github.com/repos/ollama/ollama/issues/5066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5066/comments | https://api.github.com/repos/ollama/ollama/issues/5066/events | https://github.com/ollama/ollama/issues/5066 | 2,355,056,694 | I_kwDOJ0Z1Ps6MX1A2 | 5,066 | AMD 7945HX not showing avx512 | {
"login": "mikealanni",
"id": 25714603,
"node_id": "MDQ6VXNlcjI1NzE0NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25714603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikealanni",
"html_url": "https://github.com/mikealanni",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-15T17:28:07 | 2024-06-18T22:17:17 | 2024-06-18T22:17:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi, I'm curious whether this is a bug, as the logs show that my CPU lacks AVX512 even though it has it. When I start my Ollama Docker container it shows this in the log:
`INFO [main] system info | n_threads=16 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5066/timeline | null | completed | false |
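One likely explanation: the `system_info` line reports which SIMD paths the runner binary was compiled with, not what the CPU supports, so `AVX512 = 0` can appear even on an AVX-512-capable chip like the 7945HX. A diagnostic sketch for checking what the hardware actually exposes, host-side and inside the container:
```sh
# Host side: AVX-512 feature flags the CPU exposes.
grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u

# Same check from inside the image (assumes bash is present in the image).
docker run --rm --entrypoint bash ollama/ollama \
  -c "grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u"
```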
https://api.github.com/repos/ollama/ollama/issues/8171 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8171/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8171/comments | https://api.github.com/repos/ollama/ollama/issues/8171/events | https://github.com/ollama/ollama/pull/8171 | 2,749,930,106 | PR_kwDOJ0Z1Ps6Fwv7v | 8,171 | Update go.sum | {
"login": "Squishedmac",
"id": 88924339,
"node_id": "MDQ6VXNlcjg4OTI0MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/88924339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Squishedmac",
"html_url": "https://github.com/Squishedmac",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2024-12-19T10:50:34 | 2024-12-19T10:51:53 | 2024-12-19T10:51:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8171",
"html_url": "https://github.com/ollama/ollama/pull/8171",
"diff_url": "https://github.com/ollama/ollama/pull/8171.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8171.patch",
"merged_at": null
} | null | {
"login": "Squishedmac",
"id": 88924339,
"node_id": "MDQ6VXNlcjg4OTI0MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/88924339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Squishedmac",
"html_url": "https://github.com/Squishedmac",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8171/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4464 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4464/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4464/comments | https://api.github.com/repos/ollama/ollama/issues/4464/events | https://github.com/ollama/ollama/issues/4464 | 2,299,107,344 | I_kwDOJ0Z1Ps6JCZgQ | 4,464 | Support RX6600 (gfx1032) on windows (gfx override works on linux) | {
"login": "usmandilmeer",
"id": 51738693,
"node_id": "MDQ6VXNlcjUxNzM4Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/51738693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/usmandilmeer",
"html_url": "https://github.com/usmandilmeer",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-05-16T01:10:36 | 2024-08-27T21:13:12 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
Ollama (0.1.32) works great with Zluda using an AMD RX6600 on Windows 10.
But I have downloaded and tested all versions from 0.1.33 to 0.1.38, and Ollama does not work with Zluda.
It gives error "0xc000001d".
So for now I have downgraded and am using 0.1.32 with Zluda.
Is it Zluda's... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4464/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4464/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4413 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4413/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4413/comments | https://api.github.com/repos/ollama/ollama/issues/4413/events | https://github.com/ollama/ollama/pull/4413 | 2,293,952,173 | PR_kwDOJ0Z1Ps5vUgzk | 4,413 | check if name exists before create/pull/copy | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 1 | 2024-05-13T22:28:11 | 2024-05-29T19:06:59 | 2024-05-29T19:06:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4413",
"html_url": "https://github.com/ollama/ollama/pull/4413",
"diff_url": "https://github.com/ollama/ollama/pull/4413.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4413.patch",
"merged_at": "2024-05-29T19:06:58"
} | TODO
- [x] tests | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4413/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1121 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1121/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1121/comments | https://api.github.com/repos/ollama/ollama/issues/1121/events | https://github.com/ollama/ollama/issues/1121 | 1,992,243,519 | I_kwDOJ0Z1Ps52vzk_ | 1,121 | Using FROM command and using Modelfile not clear | {
"login": "kikoferrer",
"id": 135333835,
"node_id": "U_kgDOCBEHyw",
"avatar_url": "https://avatars.githubusercontent.com/u/135333835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kikoferrer",
"html_url": "https://github.com/kikoferrer",
"followers_url": "https://api.github.com/users/kik... | [] | closed | false | null | [] | null | 5 | 2023-11-14T08:36:40 | 2023-11-20T16:04:09 | 2023-11-16T16:02:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | So I installed Ollama using the instructions here. Then I want to use a predownloaded model. This is what I did:
The guide says to create a Modelfile, so I used touch:
`touch Modelfile`
Then add a FROM instruction with the local file path to the model you want to import:
`nano Modelfile
FROM ./path/to/model/model.gguf`
... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1121/timeline | null | completed | false |
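For context, the import flow the guide describes ends with `ollama create`, which reads the Modelfile and registers the model under a name; a minimal end-to-end sketch (path and model name are illustrative):
```sh
# Write a Modelfile pointing at a local GGUF file.
cat > Modelfile <<'EOF'
FROM ./path/to/model/model.gguf
EOF

# Register the model, then run it by name.
ollama create mymodel -f Modelfile
ollama run mymodel
```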
https://api.github.com/repos/ollama/ollama/issues/5116 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5116/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5116/comments | https://api.github.com/repos/ollama/ollama/issues/5116/events | https://github.com/ollama/ollama/issues/5116 | 2,359,886,939 | I_kwDOJ0Z1Ps6MqQRb | 5,116 | ERROR [validate_model_chat_template] deepseek-coder-v2:16b-lite-instruct-q8_0 | {
"login": "ekolawole",
"id": 79321648,
"node_id": "MDQ6VXNlcjc5MzIxNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/79321648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekolawole",
"html_url": "https://github.com/ekolawole",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-18T13:35:19 | 2024-06-19T18:44:06 | 2024-06-19T18:44:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
deepseek-coder-v2:16b-lite-instruct-q8_0:
INFO [main] model loaded | tid="0x1fe414c00" timestamp=1718717321
ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses | tid="... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5116/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8583 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8583/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8583/comments | https://api.github.com/repos/ollama/ollama/issues/8583/events | https://github.com/ollama/ollama/issues/8583 | 2,811,062,722 | I_kwDOJ0Z1Ps6njWnC | 8,583 | Deepseek R1 throwing weird generation DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD | {
"login": "amrrs",
"id": 5347322,
"node_id": "MDQ6VXNlcjUzNDczMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5347322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amrrs",
"html_url": "https://github.com/amrrs",
"followers_url": "https://api.github.com/users/amrrs/follower... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2025-01-25T16:25:14 | 2025-01-27T13:44:28 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I tried to use the full DeepSeek model (the 4-bit quantized one) with `ollama run deepseek-r1:671b`,
but it somehow gives `DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD` as the output.

### OS
Linux
### GPU
AMD
### CPU
_... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8583/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2055 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2055/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2055/comments | https://api.github.com/repos/ollama/ollama/issues/2055/events | https://github.com/ollama/ollama/pull/2055 | 2,088,786,742 | PR_kwDOJ0Z1Ps5kc53j | 2,055 | Refine the linux cuda/rocm developer docs | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-01-18T17:52:23 | 2024-01-18T20:07:34 | 2024-01-18T20:07:31 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2055",
"html_url": "https://github.com/ollama/ollama/pull/2055",
"diff_url": "https://github.com/ollama/ollama/pull/2055.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2055.patch",
"merged_at": "2024-01-18T20:07:31"
} | With the recent improvements in the [gen_linux.sh](https://github.com/jmorganca/ollama/blob/main/llm/generate/gen_linux.sh) script and these doc updates, this should fix #1704 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2055/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/491 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/491/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/491/comments | https://api.github.com/repos/ollama/ollama/issues/491/events | https://github.com/ollama/ollama/pull/491 | 1,886,678,309 | PR_kwDOJ0Z1Ps5Z0wdg | 491 | add autoprune to remove unused layers | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-09-07T23:06:48 | 2023-09-11T18:46:36 | 2023-09-11T18:46:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/491",
"html_url": "https://github.com/ollama/ollama/pull/491",
"diff_url": "https://github.com/ollama/ollama/pull/491.diff",
"patch_url": "https://github.com/ollama/ollama/pull/491.patch",
"merged_at": "2023-09-11T18:46:35"
} | This change will remove any unused layers for models. It runs at server startup, and will also clean up on `pull` or `create` commands which can orphan older layers. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/491/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3436 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3436/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3436/comments | https://api.github.com/repos/ollama/ollama/issues/3436/events | https://github.com/ollama/ollama/pull/3436 | 2,218,025,498 | PR_kwDOJ0Z1Ps5rTcx0 | 3,436 | Update README.md | {
"login": "ParisNeo",
"id": 827993,
"node_id": "MDQ6VXNlcjgyNzk5Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/827993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParisNeo",
"html_url": "https://github.com/ParisNeo",
"followers_url": "https://api.github.com/users/ParisNe... | [] | closed | false | null | [] | null | 0 | 2024-04-01T10:49:28 | 2024-04-01T15:16:31 | 2024-04-01T15:16:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3436",
"html_url": "https://github.com/ollama/ollama/pull/3436",
"diff_url": "https://github.com/ollama/ollama/pull/3436.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3436.patch",
"merged_at": "2024-04-01T15:16:31"
} | Just added lollms-webui to the list of supported webuis.
Lollms is a webui that can perform a large range of tasks, from generating text and chatting with more than 500 agents to generating images, music and videos. Lollms supports multimodality and can use it along with Ollama. It can also offer RAG and summary servi... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3436/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6414 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6414/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6414/comments | https://api.github.com/repos/ollama/ollama/issues/6414/events | https://github.com/ollama/ollama/issues/6414 | 2,472,725,641 | I_kwDOJ0Z1Ps6TYsyJ | 6,414 | Ollama embedding is slow | {
"login": "yuanjie-ai",
"id": 20265321,
"node_id": "MDQ6VXNlcjIwMjY1MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/20265321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanjie-ai",
"html_url": "https://github.com/yuanjie-ai",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-08-19T08:06:45 | 2024-08-23T23:38:13 | 2024-08-23T23:38:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Ollama embedding is slow | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6414/timeline | null | completed | false |
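If the slowness comes from issuing one HTTP request per text, batching is the first thing to try: `/api/embed` accepts a list under `input`. A sketch, assuming an embedding model such as `nomic-embed-text`:
```sh
# Embed several texts in one request instead of one request per text.
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": ["first document", "second document", "third document"]
}'
```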
https://api.github.com/repos/ollama/ollama/issues/1165 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1165/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1165/comments | https://api.github.com/repos/ollama/ollama/issues/1165/events | https://github.com/ollama/ollama/issues/1165 | 1,998,144,073 | I_kwDOJ0Z1Ps53GUJJ | 1,165 | Provide command to export downloaded models | {
"login": "biandayu",
"id": 52662468,
"node_id": "MDQ6VXNlcjUyNjYyNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/52662468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/biandayu",
"html_url": "https://github.com/biandayu",
"followers_url": "https://api.github.com/users/bia... | [] | closed | false | null | [] | null | 10 | 2023-11-17T02:21:19 | 2024-02-20T01:08:28 | 2024-02-20T01:08:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there any way to import and export downloaded models? That way there would be no need to use `ollama pull` to download them again on another local machine.
Thanks | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1165/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1165/timeline | null | completed | false |
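Until a dedicated export command exists, the store itself is portable: models live under `~/.ollama/models` as content-addressed blobs plus manifests, and copying that directory to the same location on the target machine is the usual workaround; a sketch (host name illustrative):
```sh
# Copy the entire model store (blobs + manifests) to another machine.
rsync -a ~/.ollama/models/ user@other-host:~/.ollama/models/

# On the target machine, confirm the models are visible.
ollama list
```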
https://api.github.com/repos/ollama/ollama/issues/5910 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5910/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5910/comments | https://api.github.com/repos/ollama/ollama/issues/5910/events | https://github.com/ollama/ollama/issues/5910 | 2,427,476,355 | I_kwDOJ0Z1Ps6QsFmD | 5,910 | Ollama serve hangs on openai completions request | {
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ika... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-07-24T12:35:56 | 2024-09-04T04:18:47 | 2024-09-04T04:18:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I was processing files locally in a loop, and at some point the process just stopped moving forward. I had to send a keyboard interrupt. In the terminal, this produced the entry at the end of the log below. In the terminal running the Python query source, on termination I saw it had hung in sock.receive() c... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5910/timeline | null | completed | false |
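A minimal reproduction against the OpenAI-compatible endpoint, with a client-side timeout so a hang fails fast instead of blocking a processing loop; the model name is an assumption:
```sh
# --max-time makes curl give up instead of waiting forever on a hung reply.
curl --max-time 120 http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
```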
https://api.github.com/repos/ollama/ollama/issues/8106 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8106/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8106/comments | https://api.github.com/repos/ollama/ollama/issues/8106/events | https://github.com/ollama/ollama/pull/8106 | 2,740,273,114 | PR_kwDOJ0Z1Ps6FPoc3 | 8,106 | server: tokenize & detokenize endpoints | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [] | open | false | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/... | [
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "htt... | null | 0 | 2024-12-15T04:32:59 | 2024-12-19T01:39:45 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8106",
"html_url": "https://github.com/ollama/ollama/pull/8106",
"diff_url": "https://github.com/ollama/ollama/pull/8106.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8106.patch",
"merged_at": null
} | Massive shoutout to @Yurzs for getting this in.
Doing cleanup + tests.
Closes: https://github.com/ollama/ollama/issues/3582
TO-DO:
- [ ] Python SDK: https://github.com/ollama/ollama-python/pull/383
- [ ] JS SDK: https://github.com/ollama/ollama-js/pull/179
- [ ] Benchmarking w/ & w/o caching | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8106/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8106/timeline | null | null | true |
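The request shapes below are a guess at what tokenize/detokenize endpoints typically look like, not the PR's confirmed API; treat them purely as a hypothetical sketch of the feature being added:
```sh
# Hypothetical routes and fields; the real ones are defined in the PR diff.
curl http://localhost:11434/api/tokenize -d '{
  "model": "llama3.1",
  "text": "Hello, world!"
}'

curl http://localhost:11434/api/detokenize -d '{
  "model": "llama3.1",
  "tokens": [9906, 11, 1917, 0]
}'
```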
https://api.github.com/repos/ollama/ollama/issues/7534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7534/comments | https://api.github.com/repos/ollama/ollama/issues/7534/events | https://github.com/ollama/ollama/issues/7534 | 2,639,296,141 | I_kwDOJ0Z1Ps6dUHaN | 7,534 | Performance Regression in Ollama 0.4.0 Compared to 0.3.14 | {
"login": "MMaturax",
"id": 3213496,
"node_id": "MDQ6VXNlcjMyMTM0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3213496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaturax",
"html_url": "https://github.com/MMaturax",
"followers_url": "https://api.github.com/users/MMatu... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng... | closed | false | null | [] | null | 16 | 2024-11-06T21:37:05 | 2024-11-22T19:34:20 | 2024-11-22T04:36:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
After updating to Ollama version 0.4.0, which was noted to have performance improvements, I conducted some performance tests and observed that version 0.3.14 outperformed 0.4.0 in certain cases on my system.
Here are the specifics:
Ollama Version 0.4.0 Test Results (Average speed... | {
"login": "MMaturax",
"id": 3213496,
"node_id": "MDQ6VXNlcjMyMTM0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3213496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MMaturax",
"html_url": "https://github.com/MMaturax",
"followers_url": "https://api.github.com/users/MMatu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7534/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7534/timeline | null | completed | false |
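For anyone reproducing this kind of comparison, `ollama run --verbose` prints per-request timing after the response, which makes version-to-version throughput directly comparable; a sketch with an assumed model:
```sh
# Prints "prompt eval rate" and "eval rate" (tokens/s) after the reply.
ollama run --verbose llama3.1 "Write one paragraph about rivers."
```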
https://api.github.com/repos/ollama/ollama/issues/2039 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2039/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2039/comments | https://api.github.com/repos/ollama/ollama/issues/2039/events | https://github.com/ollama/ollama/issues/2039 | 2,087,341,946 | I_kwDOJ0Z1Ps58ak96 | 2,039 | web-ui log error loading model: llama.cpp: tensor 'layers.2.ffn_norm.weight' is missing from model | {
"login": "lpf763827726",
"id": 43004977,
"node_id": "MDQ6VXNlcjQzMDA0OTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43004977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lpf763827726",
"html_url": "https://github.com/lpf763827726",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 2 | 2024-01-18T02:15:04 | 2024-05-17T21:57:45 | 2024-05-17T21:57:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | when i run `ollama run llama2:13b` and `ollama run codellama` with ollama-webui, and ask 2~3 question, it start to got error, it report error missing something
[Issue details](https://github.com/ollama-webui/ollama-webui/issues/507) | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2039/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4058 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4058/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4058/comments | https://api.github.com/repos/ollama/ollama/issues/4058/events | https://github.com/ollama/ollama/pull/4058 | 2,272,227,088 | PR_kwDOJ0Z1Ps5uLslV | 4,058 | fix: store accurate model parameter size | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2024-04-30T18:43:30 | 2024-05-07T21:41:54 | 2024-05-07T21:41:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4058",
"html_url": "https://github.com/ollama/ollama/pull/4058",
"diff_url": "https://github.com/ollama/ollama/pull/4058.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4058.patch",
"merged_at": "2024-05-07T21:41:54"
} | - add test for number formatting
- fix bug where 1B and 1M were not stored correctly
- display 2 decimal points for million param sizes
- display 1 decimal point for billion param sizes
This human conversion is displayed as the parameter size on ollama.com, so it should be in the standard format that model parame... | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4058/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/681 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/681/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/681/comments | https://api.github.com/repos/ollama/ollama/issues/681/events | https://github.com/ollama/ollama/pull/681 | 1,922,761,585 | PR_kwDOJ0Z1Ps5bt8Rx | 681 | show a default message when license/parameters/etc aren't specified | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-10-02T21:33:32 | 2023-10-02T21:34:53 | 2023-10-02T21:34:53 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/681",
"html_url": "https://github.com/ollama/ollama/pull/681",
"diff_url": "https://github.com/ollama/ollama/pull/681.diff",
"patch_url": "https://github.com/ollama/ollama/pull/681.patch",
"merged_at": "2023-10-02T21:34:53"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/681/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7842 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7842/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7842/comments | https://api.github.com/repos/ollama/ollama/issues/7842/events | https://github.com/ollama/ollama/issues/7842 | 2,694,371,569 | I_kwDOJ0Z1Ps6gmNjx | 7,842 | Ovis1.6-Gemma2-27B Model request | {
"login": "Backendmagier",
"id": 158162798,
"node_id": "U_kgDOCW1fbg",
"avatar_url": "https://avatars.githubusercontent.com/u/158162798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Backendmagier",
"html_url": "https://github.com/Backendmagier",
"followers_url": "https://api.github.com/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2024-11-26T11:42:39 | 2024-11-26T11:42:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-27B
A very good multimodal model.
It could be the best open-source multimodal model available at the moment.
Would love to have it in Ollama. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7842/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7842/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6224 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6224/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6224/comments | https://api.github.com/repos/ollama/ollama/issues/6224/events | https://github.com/ollama/ollama/issues/6224 | 2,452,506,249 | I_kwDOJ0Z1Ps6SLkaJ | 6,224 | Passing result from tool calling to model | {
"login": "tristanMatthias",
"id": 2550138,
"node_id": "MDQ6VXNlcjI1NTAxMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2550138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tristanMatthias",
"html_url": "https://github.com/tristanMatthias",
"followers_url": "https://api.g... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677367769,
"node_id": ... | closed | false | null | [] | null | 4 | 2024-08-07T05:39:45 | 2024-10-24T03:23:46 | 2024-10-24T03:23:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there. I am trying to follow the guidelines from Meta on how to pass a result from a tool call to Llama 3.1.
This is per [their documentation](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/)
The ollama [`api.ToolCall`](https://github.com/ollama/ollama/blob/main/api/types.go#L144-L146) struc... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6224/timeline | null | completed | false |
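For reference, the shape Ollama's chat API settled on is a follow-up `/api/chat` call whose history includes the assistant's `tool_calls` turn plus a message with `role: "tool"` carrying the result; a hedged sketch in which the function name and values are illustrative:
```sh
# Second request: feed the tool's output back as a role:"tool" message.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    {"role": "user", "content": "What is the weather in Toronto?"},
    {"role": "assistant", "tool_calls": [
      {"function": {"name": "get_weather", "arguments": {"city": "Toronto"}}}
    ]},
    {"role": "tool", "content": "11 degrees and cloudy"}
  ],
  "stream": false
}'
```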
https://api.github.com/repos/ollama/ollama/issues/6731 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6731/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6731/comments | https://api.github.com/repos/ollama/ollama/issues/6731/events | https://github.com/ollama/ollama/issues/6731 | 2,516,878,681 | I_kwDOJ0Z1Ps6WBIVZ | 6,731 | error wile install on opensuse leap 15.6 | {
"login": "kc8pdr205",
"id": 95314147,
"node_id": "U_kgDOBa5g4w",
"avatar_url": "https://avatars.githubusercontent.com/u/95314147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kc8pdr205",
"html_url": "https://github.com/kc8pdr205",
"followers_url": "https://api.github.com/users/kc8pdr20... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-09-10T16:07:33 | 2024-09-11T01:27:43 | 2024-09-11T01:27:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to install Ollama on Leap 15.6. When I try the install command, I get this error (I have both files installed):
WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies. I do have CUDA and the NVIDIA drivers ins...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6731/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1513 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1513/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1513/comments | https://api.github.com/repos/ollama/ollama/issues/1513/events | https://github.com/ollama/ollama/issues/1513 | 2,040,814,879 | I_kwDOJ0Z1Ps55pF0f | 1,513 | I don't like the idea that ollama force me to use a server. | {
"login": "franciscoprin",
"id": 27599257,
"node_id": "MDQ6VXNlcjI3NTk5MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/27599257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/franciscoprin",
"html_url": "https://github.com/franciscoprin",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2023-12-14T02:57:59 | 2024-03-12T01:25:25 | 2024-03-12T01:25:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | so, if I have a python code that looks like this:
```python
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOllama
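# (Sketch note, not part of the original snippet: ChatOllama is an HTTP
# client for a running `ollama serve` process. It defaults to
# http://localhost:11434 and the endpoint can be overridden, e.g.
#   chat = ChatOllama(base_url="http://localhost:11434", model="llama2")
# so some server process is always in the loop.)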
q... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1513/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6657 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6657/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6657/comments | https://api.github.com/repos/ollama/ollama/issues/6657/events | https://github.com/ollama/ollama/issues/6657 | 2,508,239,239 | I_kwDOJ0Z1Ps6VgLGH | 6,657 | Qwen2-VL 2B / 7B / 72B | {
"login": "thiswillbeyourgithub",
"id": 26625900,
"node_id": "MDQ6VXNlcjI2NjI1OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/26625900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiswillbeyourgithub",
"html_url": "https://github.com/thiswillbeyourgithub",
"followers_url... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 0 | 2024-09-05T16:23:48 | 2024-09-05T16:24:37 | 2024-09-05T16:24:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
The [new Qwen2-VL model](https://github.com/QwenLM/Qwen2-VL) supports vision input, and even video, at small model sizes. It uses [a permissive license too!](https://simonwillison.net/2024/Sep/4/qwen2-vl/)
Example by [simonw](https://simonwillison.net/2024/Sep/4/qwen2-vl/)
 application, everything was going well... Until the following happened in this part:
```bash
~/ollama $ go build .
# github.com/ollama/ollama/discover
gpu_info_cudart.c:61:13: warning: comparison of different enumeration types ('cudartReturn_t' (ak... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8666/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4905 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4905/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4905/comments | https://api.github.com/repos/ollama/ollama/issues/4905/events | https://github.com/ollama/ollama/issues/4905 | 2,340,398,957 | I_kwDOJ0Z1Ps6Lf6dt | 4,905 | Issue verifying SHA256 digest in Windows version of Ollama | {
"login": "raymond-infinitecode",
"id": 4714784,
"node_id": "MDQ6VXNlcjQ3MTQ3ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4714784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raymond-infinitecode",
"html_url": "https://github.com/raymond-infinitecode",
"followers_url":... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-07T13:00:03 | 2024-06-07T13:19:39 | 2024-06-07T13:19:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Error: digest mismatch, file must be downloaded again: want sha256:xxxxx, got sha256:xxxxx
ollama run phi3:3.3b-mini-4k-instruct-q8_0
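In case it helps triage, here is a hedged sketch for checking a downloaded blob's digest by hand (the path argument is whatever blob file sits under your Ollama models directory; this is an ad-hoc script, not an official tool):
```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    # Stream the file in 1 MiB blocks so large blobs don't load into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

# Usage: python check_blob.py <path-to-blob>
# Compare the output against the sha256-... name of the blob file.
print(sha256_of(sys.argv[1]))
```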
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41 | {
"login": "raymond-infinitecode",
"id": 4714784,
"node_id": "MDQ6VXNlcjQ3MTQ3ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4714784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raymond-infinitecode",
"html_url": "https://github.com/raymond-infinitecode",
"followers_url":... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4905/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3647 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3647/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3647/comments | https://api.github.com/repos/ollama/ollama/issues/3647/events | https://github.com/ollama/ollama/issues/3647 | 2,243,153,230 | I_kwDOJ0Z1Ps6Fs81O | 3,647 | Ollama reverts to CPU on a100 docker. "error looking up CUDA GPU memory: device memory info lookup failure 0: 4 | {
"login": "Yaffa16",
"id": 13223356,
"node_id": "MDQ6VXNlcjEzMjIzMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/13223356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yaffa16",
"html_url": "https://github.com/Yaffa16",
"followers_url": "https://api.github.com/users/Yaffa1... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 4 | 2024-04-15T09:20:50 | 2024-09-25T20:31:42 | 2024-04-24T00:28:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
time=2024-04-15T09:17:48.609Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
time=2024-04-15T09:17:48.609Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-15T09:17:48.617Z level=INFO source=gpu.go:109 msg="error looking up CUDA GPU memory: device memory info lookup fa... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3647/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4357 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4357/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4357/comments | https://api.github.com/repos/ollama/ollama/issues/4357/events | https://github.com/ollama/ollama/issues/4357 | 2,290,856,865 | I_kwDOJ0Z1Ps6Ii7Oh | 4,357 | Incorrect value of "finish_reason" when streaming | {
"login": "longseespace",
"id": 187720,
"node_id": "MDQ6VXNlcjE4NzcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/187720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/longseespace",
"html_url": "https://github.com/longseespace",
"followers_url": "https://api.github.com/u... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 2 | 2024-05-11T11:48:47 | 2024-05-11T22:31:42 | 2024-05-11T22:31:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When streaming via the OpenAI-compatible server, "finish_reason" is an empty string, which is incorrect. It should be one of the values defined by OpenAI, or null. (A raw chunk from the server follows the sketch below.)
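A minimal repro sketch, assuming the `openai` Python package (v1+) pointed at a local `ollama serve`; the model tag is just an example:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="mistral:latest",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    # Per the OpenAI spec this should be None on intermediate chunks and a
    # value like "stop" on the final one -- never an empty string.
    print(repr(chunk.choices[0].finish_reason))
```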
```
data: {"id":"chatcmpl-693","object":"chat.completion.chunk","created":1715427619,"model":"mistral:latest","system_fingerprint":"fp_ollama","choices... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4357/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4357/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1493 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1493/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1493/comments | https://api.github.com/repos/ollama/ollama/issues/1493/events | https://github.com/ollama/ollama/issues/1493 | 2,038,728,774 | I_kwDOJ0Z1Ps55hIhG | 1,493 | A way to prevent downloaded models from being deleted | {
"login": "t18n",
"id": 14198542,
"node_id": "MDQ6VXNlcjE0MTk4NTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/14198542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t18n",
"html_url": "https://github.com/t18n",
"followers_url": "https://api.github.com/users/t18n/followers"... | [] | closed | false | null | [] | null | 8 | 2023-12-13T00:09:31 | 2024-11-01T17:01:53 | 2024-01-25T22:26:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I downloaded around 50 GB worth of models to use with Big AGI. For some reason, when I reloaded the Big AGI interface, all the models were gone. The models are too easy to lose, and it takes a lot of time to download them again. Is there a way to prevent that? Can I save the models somewhere and point Ollama to it inst... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1493/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3887 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3887/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3887/comments | https://api.github.com/repos/ollama/ollama/issues/3887/events | https://github.com/ollama/ollama/pull/3887 | 2,261,967,319 | PR_kwDOJ0Z1Ps5to5dN | 3,887 | types/model: require all names parts start with an alnum char | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [] | closed | false | null | [] | null | 5 | 2024-04-24T18:55:11 | 2024-04-26T03:13:22 | 2024-04-26T03:13:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3887",
"html_url": "https://github.com/ollama/ollama/pull/3887",
"diff_url": "https://github.com/ollama/ollama/pull/3887.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3887.patch",
"merged_at": null
} | null | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3887/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1264 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1264/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1264/comments | https://api.github.com/repos/ollama/ollama/issues/1264/events | https://github.com/ollama/ollama/issues/1264 | 2,009,654,397 | I_kwDOJ0Z1Ps53yOR9 | 1,264 | Why is my model not referring to the info given in system command in Modelfile | {
"login": "DeeptangshuSaha",
"id": 64020655,
"node_id": "MDQ6VXNlcjY0MDIwNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/64020655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeeptangshuSaha",
"html_url": "https://github.com/DeeptangshuSaha",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 3 | 2023-11-24T12:32:09 | 2024-01-25T22:05:28 | 2024-01-25T22:05:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Okay let me explain what I meant by that.
I am trying to create a personal assistant and I want the model to remember some of my details.
I tried this by providing a system prompt in which I set myself as its "master", for lack of a better term, but that did not exactly work (a sketch of what I mean is below).
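Roughly the intent, as a hedged sketch with the `ollama` Python client (the names and details in the prompt are placeholders, and this goes through the API rather than baking the prompt into a Modelfile):
```python
import ollama

resp = ollama.chat(
    model="llama2",  # example tag
    messages=[
        # The system turn carries the personal details the model should keep.
        {"role": "system", "content": "You are my personal assistant. My name is Alex."},
        {"role": "user", "content": "What is my name?"},
    ],
)
print(resp.message.content)
```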
Instead, it shoots off saying it's an AI and it only as... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1264/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7610 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7610/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7610/comments | https://api.github.com/repos/ollama/ollama/issues/7610/events | https://github.com/ollama/ollama/issues/7610 | 2,648,107,517 | I_kwDOJ0Z1Ps6d1un9 | 7,610 | Blank responses | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | null | [] | null | 3 | 2024-11-11T04:34:22 | 2024-12-23T07:53:09 | 2024-12-23T07:53:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Testing different models, mainly Gemma 2, I have been receiving a lot of blank responses (no line, no spacing, just blank, no characters at all). Usually a few regens fix it, but sometimes it takes quite a few (once it took 60x regenerating on my laptop instance to move on and generate a response). I thought it might have be... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7610/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4945 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4945/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4945/comments | https://api.github.com/repos/ollama/ollama/issues/4945/events | https://github.com/ollama/ollama/issues/4945 | 2,342,064,044 | I_kwDOJ0Z1Ps6LmQ-s | 4,945 | Trying to Run Ollama on openSUSE Tumbleweed - GPU errors | {
"login": "richardstevenhack",
"id": 44449170,
"node_id": "MDQ6VXNlcjQ0NDQ5MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardstevenhack",
"html_url": "https://github.com/richardstevenhack",
"followers_url": "https... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-06-09T07:07:35 | 2024-06-09T07:10:17 | 2024-06-09T07:10:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to run Ollama on the latest openSUSE Tumbleweed Linux. I got it to install by running the installer as root and then explicitly passing the path where it was installed to the ollama serve command. However, I then get a slew of error messages.
### OS
Linux
### GPU
AMD
### CPU
AMD... | {
"login": "richardstevenhack",
"id": 44449170,
"node_id": "MDQ6VXNlcjQ0NDQ5MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardstevenhack",
"html_url": "https://github.com/richardstevenhack",
"followers_url": "https... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4945/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2419 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2419/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2419/comments | https://api.github.com/repos/ollama/ollama/issues/2419/events | https://github.com/ollama/ollama/issues/2419 | 2,126,496,211 | I_kwDOJ0Z1Ps5-v8HT | 2,419 | Running Qwen | {
"login": "PrashantDixit0",
"id": 54981696,
"node_id": "MDQ6VXNlcjU0OTgxNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/54981696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrashantDixit0",
"html_url": "https://github.com/PrashantDixit0",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 12 | 2024-02-09T05:23:46 | 2024-03-11T19:17:38 | 2024-03-11T19:17:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried running Qwen with LangChain but didn't get any output; it just hangs. A diagnostic sketch is below.
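A diagnostic sketch with streaming callbacks, assuming the legacy `langchain` import paths and an example model tag; if tokens print, generation works and the hang is downstream:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage

chat = ChatOllama(
    model="qwen:7b",  # example tag; use whichever Qwen variant you pulled
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
# Tokens should stream to stdout as they arrive; if nothing prints at all,
# the request itself is stalling rather than the post-processing.
print(chat([HumanMessage(content="Say hello")]))
```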
Has anyone else gotten stuck at the same place? | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2419/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3520/comments | https://api.github.com/repos/ollama/ollama/issues/3520/events | https://github.com/ollama/ollama/issues/3520 | 2,229,570,794 | I_kwDOJ0Z1Ps6E5Izq | 3,520 | The ability to pass session commands as startup arguments client-side | {
"login": "redpiller",
"id": 31500722,
"node_id": "MDQ6VXNlcjMxNTAwNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/31500722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redpiller",
"html_url": "https://github.com/redpiller",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2024-04-07T05:26:53 | 2024-04-07T09:11:35 | 2024-04-07T09:11:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
I recently attempted to make permanent adjustments to the system prompt of a model and realized it is a cumbersome process: rebuilding the model and changing its manifest causes a lot of needless I/O on my SSD.
This lack of scalability is a critical flaw in a software piece of this mag... | {
"login": "redpiller",
"id": 31500722,
"node_id": "MDQ6VXNlcjMxNTAwNzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/31500722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redpiller",
"html_url": "https://github.com/redpiller",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3520/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8152 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8152/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8152/comments | https://api.github.com/repos/ollama/ollama/issues/8152/events | https://github.com/ollama/ollama/issues/8152 | 2,747,307,037 | I_kwDOJ0Z1Ps6jwJQd | 8,152 | LangChain - ChatOLLAMA model - calling tool on every input | {
"login": "Arslan-Mehmood1",
"id": 51626734,
"node_id": "MDQ6VXNlcjUxNjI2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/51626734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arslan-Mehmood1",
"html_url": "https://github.com/Arslan-Mehmood1",
"followers_url": "https://api... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-12-18T09:40:51 | 2024-12-23T08:14:26 | 2024-12-23T08:14:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
llama3.2:1b
llama3.2:3b
llama3.2:1b-instruct-fp16
llama3.1:8b
I've tested the above models, and all of them call tools even with a simple query like 'hi'.
The behavior is the same whether binding either of the following (a minimal repro sketch follows this list):
tools_list
openai_format_tools_list
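Minimal repro sketch (assumes the `langchain-ollama` package; the tool is hypothetical and only illustrates the shape of the problem):
```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    return f"Sunny in {city}"  # placeholder implementation

llm = ChatOllama(model="llama3.2:3b").bind_tools([get_weather])
resp = llm.invoke("hi")
# Expected for small talk: resp.tool_calls == [] and a plain text reply.
# Observed: resp.tool_calls is non-empty even for "hi".
print(resp.tool_calls)
```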
Need help... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8152/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/8479 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8479/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8479/comments | https://api.github.com/repos/ollama/ollama/issues/8479/events | https://github.com/ollama/ollama/issues/8479 | 2,796,791,671 | I_kwDOJ0Z1Ps6ms6d3 | 8,479 | Embedding Model: iamgroot42/rover_nexus | {
"login": "AlgorithmicKing",
"id": 147901320,
"node_id": "U_kgDOCNDLiA",
"avatar_url": "https://avatars.githubusercontent.com/u/147901320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlgorithmicKing",
"html_url": "https://github.com/AlgorithmicKing",
"followers_url": "https://api.githu... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 0 | 2025-01-18T06:23:15 | 2025-01-18T06:23:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It's the top model on the MTEB leaderboard. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8479/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8479/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/398 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/398/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/398/comments | https://api.github.com/repos/ollama/ollama/issues/398/events | https://github.com/ollama/ollama/pull/398 | 1,862,088,981 | PR_kwDOJ0Z1Ps5YiEeQ | 398 | Mxyng/cleanup | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-08-22T19:41:43 | 2023-08-22T22:51:42 | 2023-08-22T22:51:41 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/398",
"html_url": "https://github.com/ollama/ollama/pull/398",
"diff_url": "https://github.com/ollama/ollama/pull/398.diff",
"patch_url": "https://github.com/ollama/ollama/pull/398.patch",
"merged_at": "2023-08-22T22:51:41"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/398/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8351 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8351/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8351/comments | https://api.github.com/repos/ollama/ollama/issues/8351/events | https://github.com/ollama/ollama/pull/8351 | 2,776,383,777 | PR_kwDOJ0Z1Ps6HIRqZ | 8,351 | better client error for /api/create | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2025-01-08T21:28:44 | 2025-01-09T18:12:33 | 2025-01-09T18:12:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8351",
"html_url": "https://github.com/ollama/ollama/pull/8351",
"diff_url": "https://github.com/ollama/ollama/pull/8351.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8351.patch",
"merged_at": "2025-01-09T18:12:30"
} | This change shows a more descriptive error in the client w/ the `POST /api/create` endpoint if the client has been refreshed but the server hasn't been updated. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8351/timeline | null | null | true |