url stringlengths 51 54 | repository_url stringclasses 1 value | labels_url stringlengths 65 68 | comments_url stringlengths 60 63 | events_url stringlengths 58 61 | html_url stringlengths 39 44 | id int64 1.78B 2.82B | node_id stringlengths 18 19 | number int64 1 8.69k | title stringlengths 1 382 | user dict | labels listlengths 0 5 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 2 | milestone null | comments int64 0 323 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 4 values | sub_issues_summary dict | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 2 118k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 60 63 | performed_via_github_app null | state_reason stringclasses 4 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/7606 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7606/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7606/comments | https://api.github.com/repos/ollama/ollama/issues/7606/events | https://github.com/ollama/ollama/issues/7606 | 2,647,954,349 | I_kwDOJ0Z1Ps6d1JOt | 7,606 | vram usage does not go back down after model unloads | {
"login": "CraftMaster163",
"id": 69362326,
"node_id": "MDQ6VXNlcjY5MzYyMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/69362326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CraftMaster163",
"html_url": "https://github.com/CraftMaster163",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA... | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 18 | 2024-11-11T02:42:55 | 2024-11-13T22:00:31 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I set keep alive to 0, the memory usage does not go all the way back down. It also uses system RAM while VRAM is still available.
GPU: 7800 XT
Platform: Windows
ROCm version: 6.1
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.1 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7606/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5321 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5321/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5321/comments | https://api.github.com/repos/ollama/ollama/issues/5321/events | https://github.com/ollama/ollama/issues/5321 | 2,377,767,781 | I_kwDOJ0Z1Ps6Nudtl | 5,321 | Llama3: Generated outputs inconsistent despite seed and temperature | {
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-k... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 4 | 2024-06-27T10:24:21 | 2025-01-02T16:38:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Follow-up of #586
Even though the output should be **deterministic** and **reproducible** with a fixed `seed`, a `temperature` set to 0, and a fixed `num_ctx`, the generated output of **Llama 3** slightly differs between the first execution of this code and the second execution of this code (without ker...
"url": "https://api.github.com/repos/ollama/ollama/issues/5321/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5321/timeline | null | null | false |
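The reproducibility settings named in the report (`seed`, `temperature`, `num_ctx`) are passed in the `options` object of the generate API. A minimal sketch of the kind of repeated run the issue describes, with `llama3` and the prompt as assumed placeholders:

```python
# Minimal sketch, assuming a local Ollama server and a pulled "llama3" model.
import requests

def generate(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": prompt,
            "stream": False,
            # Fixed seed, zero temperature, fixed context size: the settings
            # under which the issue expects byte-identical outputs.
            "options": {"seed": 42, "temperature": 0, "num_ctx": 2048},
        },
        timeout=120,
    )
    return resp.json()["response"]

first = generate("Name three colors.")
second = generate("Name three colors.")
print(first == second)  # the report observes this can be False across runs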
https://api.github.com/repos/ollama/ollama/issues/8212 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8212/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8212/comments | https://api.github.com/repos/ollama/ollama/issues/8212/events | https://github.com/ollama/ollama/issues/8212 | 2,754,827,792 | I_kwDOJ0Z1Ps6kM1YQ | 8,212 | Add "/v1/images/generations" endpoint for compatibility in order to leverage vision models | {
"login": "Routhinator",
"id": 727535,
"node_id": "MDQ6VXNlcjcyNzUzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/727535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Routhinator",
"html_url": "https://github.com/Routhinator",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-12-22T20:21:38 | 2024-12-22T22:42:35 | 2024-12-22T22:42:35 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was playing with getting Nextcloud Assistant to try and use the `llava` model from the Ollama library, and realized that since Nextcloud's Assistant integration works through the LocalAI API syntax, it expects the vision models to respond from `/v1/images/generations` - which is an endpoint Ollama currently does not ex... | {
"login": "Routhinator",
"id": 727535,
"node_id": "MDQ6VXNlcjcyNzUzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/727535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Routhinator",
"html_url": "https://github.com/Routhinator",
"followers_url": "https://api.github.com/user... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8212/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8212/timeline | null | completed | false |
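For context on this request: Ollama's OpenAI-compatible layer does let vision models such as `llava` analyze images, but through `/v1/chat/completions` rather than `/v1/images/generations`. A minimal sketch, assuming the `openai` Python client, a local server, and a hypothetical local `photo.png`:

```python
# Minimal sketch, assuming Ollama's OpenAI-compatible endpoint at /v1 and a
# pulled "llava" model; "photo.png" is a hypothetical local image.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

with open("photo.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="llava",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{image_b64}"},
            },
        ],
    }],
)
print(resp.choices[0].message.content)
```

Note that `/v1/images/generations` in the OpenAI API produces images, which is a different capability from the image understanding that llava-class models provide.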
https://api.github.com/repos/ollama/ollama/issues/2863 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2863/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2863/comments | https://api.github.com/repos/ollama/ollama/issues/2863/events | https://github.com/ollama/ollama/issues/2863 | 2,163,492,479 | I_kwDOJ0Z1Ps6A9EZ_ | 2,863 | Users and users management commands | {
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2024-03-01T14:07:11 | 2024-03-12T00:25:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If one wants to host one or more models on a beefy computer and give access to a selected few but not the entire world, I would like to suggest some sort of user feature.
**Host**
The host device will have to install Ollama, install the models and port forward. The host device can create users, delete users, list ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2863/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/467 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/467/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/467/comments | https://api.github.com/repos/ollama/ollama/issues/467/events | https://github.com/ollama/ollama/issues/467 | 1,879,267,746 | I_kwDOJ0Z1Ps5wA1mi | 467 | Running a 70B Model with 16GB RAM: Possible Strategies? | {
"login": "OguzcanOzdemir",
"id": 24637523,
"node_id": "MDQ6VXNlcjI0NjM3NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24637523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OguzcanOzdemir",
"html_url": "https://github.com/OguzcanOzdemir",
"followers_url": "https://api.gi... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 4 | 2023-09-03T23:19:22 | 2023-09-05T19:47:44 | 2023-09-05T16:05:06 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I'm currently working with a system that has 16GB of RAM, and I'm interested in running a 70B model for my project. I understand that according to the GitHub repository's documentation, a 70B model typically requires 32GB of RAM.
However, due to my system limitations, I'm looking for guidance on potential ... | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.git... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/467/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3929 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3929/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3929/comments | https://api.github.com/repos/ollama/ollama/issues/3929/events | https://github.com/ollama/ollama/issues/3929 | 2,264,918,755 | I_kwDOJ0Z1Ps6G_-rj | 3,929 | Can you please add llava-phi-3-mini by xtuner? | {
"login": "yashasnadigsyn",
"id": 103478177,
"node_id": "U_kgDOBirzoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103478177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashasnadigsyn",
"html_url": "https://github.com/yashasnadigsyn",
"followers_url": "https://api.github.c... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 5 | 2024-04-26T04:08:36 | 2024-04-27T02:20:14 | 2024-04-27T02:20:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Here is the model GGUF link: https://huggingface.co/xtuner/llava-phi-3-mini-gguf
Here is the model HF link: https://huggingface.co/xtuner/llava-phi-3-mini-hf
I have been trying to add it manually with a Modelfile, but I can't seem to understand the template. I tried the llava template, the bakllava template, and other multimod...
"login": "yashasnadigsyn",
"id": 103478177,
"node_id": "U_kgDOBirzoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103478177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashasnadigsyn",
"html_url": "https://github.com/yashasnadigsyn",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3929/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/3929/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3719 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3719/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3719/comments | https://api.github.com/repos/ollama/ollama/issues/3719/events | https://github.com/ollama/ollama/issues/3719 | 2,249,590,632 | I_kwDOJ0Z1Ps6GFgdo | 3,719 | How do I download an AI model to external storage and run it? | {
"login": "manfar",
"id": 13696009,
"node_id": "MDQ6VXNlcjEzNjk2MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13696009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manfar",
"html_url": "https://github.com/manfar",
"followers_url": "https://api.github.com/users/manfar/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 7 | 2024-04-18T02:12:52 | 2025-01-07T13:11:12 | 2024-05-05T00:20:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For Mac computers with insufficient hard disk space, how can models be downloaded to and run from an external SSD drive instead of being stored on the computer itself? This would let you install more models and run them faster. Ideally it would also support configuring the path and viewing the download location in Finder.
And ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3719/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4674 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4674/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4674/comments | https://api.github.com/repos/ollama/ollama/issues/4674/events | https://github.com/ollama/ollama/issues/4674 | 2,320,283,650 | I_kwDOJ0Z1Ps6KTLgC | 4,674 | any command but serve get errors,when using proxy | {
"login": "lingfengchencn",
"id": 2757011,
"node_id": "MDQ6VXNlcjI3NTcwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2757011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingfengchencn",
"html_url": "https://github.com/lingfengchencn",
"followers_url": "https://api.gith... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-05-28T06:59:22 | 2024-06-18T16:51:08 | 2024-06-18T16:51:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I configure http_proxy / https_proxy / HTTP_PROXY / HTTPS_PROXY in docker-compose, `ollama serve` runs well, but other commands get errors.
Here is my YAML:
```yaml
version: '3.8'
name: "dev-ollama"
services:
ollama:
image: ollama/ollama
environment:
- OLLAMA_DEBU... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4674/timeline | null | completed | false |
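A common cause of this pattern (an assumption, not a confirmed diagnosis of this report) is that the proxy variables also apply to client commands talking to the local server, so localhost traffic gets routed through the proxy. The usual workaround is to exempt local addresses via `NO_PROXY`; a minimal sketch:

```python
# Minimal sketch of the usual workaround (an assumption, not a confirmed fix
# for this report): keep http_proxy/https_proxy for outbound model pulls, but
# exempt the local API address so client commands can still reach the server.
import os
import requests

os.environ["NO_PROXY"] = "localhost,127.0.0.1"

# With the exemption in place, a call to the local server bypasses the proxy.
resp = requests.get("http://localhost:11434/api/tags", timeout=10)  # list models
print(resp.json())
```

In the compose file quoted above, the equivalent would be adding a `NO_PROXY=localhost,127.0.0.1` entry alongside the other proxy variables.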
https://api.github.com/repos/ollama/ollama/issues/6548 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6548/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6548/comments | https://api.github.com/repos/ollama/ollama/issues/6548/events | https://github.com/ollama/ollama/pull/6548 | 2,493,192,684 | PR_kwDOJ0Z1Ps55xFfu | 6,548 | update the openai docs to explain how to set the context size | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2024-08-29T00:09:37 | 2024-08-29T00:11:48 | 2024-08-29T00:11:46 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6548",
"html_url": "https://github.com/ollama/ollama/pull/6548",
"diff_url": "https://github.com/ollama/ollama/pull/6548.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6548.patch",
"merged_at": "2024-08-29T00:11:46"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6548/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3033 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3033/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3033/comments | https://api.github.com/repos/ollama/ollama/issues/3033/events | https://github.com/ollama/ollama/pull/3033 | 2,177,513,695 | PR_kwDOJ0Z1Ps5pKMEJ | 3,033 | docs: Add AI telegram to Community Integrations. | {
"login": "tusharhero",
"id": 54012021,
"node_id": "MDQ6VXNlcjU0MDEyMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/54012021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tusharhero",
"html_url": "https://github.com/tusharhero",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2024-03-10T04:08:51 | 2024-03-25T18:56:42 | 2024-03-25T18:56:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3033",
"html_url": "https://github.com/ollama/ollama/pull/3033",
"diff_url": "https://github.com/ollama/ollama/pull/3033.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3033.patch",
"merged_at": "2024-03-25T18:56:42"
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3033/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3066/comments | https://api.github.com/repos/ollama/ollama/issues/3066/events | https://github.com/ollama/ollama/issues/3066 | 2,180,244,723 | I_kwDOJ0Z1Ps6B8-Tz | 3,066 | CLBlast for integrated GPU support | {
"login": "joshuachris2001",
"id": 54247518,
"node_id": "MDQ6VXNlcjU0MjQ3NTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/54247518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshuachris2001",
"html_url": "https://github.com/joshuachris2001",
"followers_url": "https://api... | [
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2024-03-11T21:11:56 | 2024-03-11T22:26:48 | 2024-03-11T22:26:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Is there support for compiling Ollama with CLBlast on a device with an integrated non-AMD GPU?
I've tried compiling with: CLBlast_DIR=/usr/lib/cmake/CLBlast go generate -tags clbast ./...
Yet I still get "no GPU detected".
The iGPU I'm trying to get CLBlast to work on is an `Intel HD Graphics 5500`; when llama is expli...
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3066/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8559 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8559/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8559/comments | https://api.github.com/repos/ollama/ollama/issues/8559/events | https://github.com/ollama/ollama/issues/8559 | 2,808,921,360 | I_kwDOJ0Z1Ps6nbL0Q | 8,559 | Model list cleared after starting as a service using nssm | {
"login": "AncientMystic",
"id": 62780271,
"node_id": "MDQ6VXNlcjYyNzgwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/62780271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AncientMystic",
"html_url": "https://github.com/AncientMystic",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | 1 | 2025-01-24T09:08:23 | 2025-01-24T09:21:40 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have started Ollama as a service on Windows, hoping that the way Windows handles services would allow for a slight improvement in performance, but after starting Ollama this way the model list is gone.
Both starting as a service and through the app now have the same result: no models.
All the model files are still p... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8559/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7368 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7368/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7368/comments | https://api.github.com/repos/ollama/ollama/issues/7368/events | https://github.com/ollama/ollama/pull/7368 | 2,615,297,617 | PR_kwDOJ0Z1Ps5_9ssN | 7,368 | runner.go: Use stable llama.cpp sampling interface | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | 0 | 2024-10-25T22:45:47 | 2024-11-21T19:35:29 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7368",
"html_url": "https://github.com/ollama/ollama/pull/7368",
"diff_url": "https://github.com/ollama/ollama/pull/7368.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7368.patch",
"merged_at": null
} | Currently, for sampling we are using an internal interface from the llama.cpp examples, which tends to change from release to release. This is the only such interface used for text models, though llava and clip are also used for image processing.
This switches to use the stable interfaces, reducing the amount of work ... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7368/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/135 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/135/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/135/comments | https://api.github.com/repos/ollama/ollama/issues/135/events | https://github.com/ollama/ollama/pull/135 | 1,813,277,989 | PR_kwDOJ0Z1Ps5V9q9k | 135 | ctrl+c on empty line exits | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-20T06:40:59 | 2023-07-20T16:20:37 | 2023-07-20T07:53:08 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/135",
"html_url": "https://github.com/ollama/ollama/pull/135",
"diff_url": "https://github.com/ollama/ollama/pull/135.diff",
"patch_url": "https://github.com/ollama/ollama/pull/135.patch",
"merged_at": "2023-07-20T07:53:08"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/135/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1364 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1364/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1364/comments | https://api.github.com/repos/ollama/ollama/issues/1364/events | https://github.com/ollama/ollama/pull/1364 | 2,022,630,377 | PR_kwDOJ0Z1Ps5g_T7J | 1,364 | Ollama Telegram Bot | {
"login": "ruecat",
"id": 79139779,
"node_id": "MDQ6VXNlcjc5MTM5Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/79139779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruecat",
"html_url": "https://github.com/ruecat",
"followers_url": "https://api.github.com/users/ruecat/fo... | [] | closed | false | null | [] | null | 1 | 2023-12-03T17:16:56 | 2023-12-03T19:19:55 | 2023-12-03T19:19:55 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1364",
"html_url": "https://github.com/ollama/ollama/pull/1364",
"diff_url": "https://github.com/ollama/ollama/pull/1364.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1364.patch",
"merged_at": "2023-12-03T19:19:55"
} | This pull request adds [telegram-ollama](https://github.com/ruecat/ollama-telegram) to the [Extensions & Plugins](https://github.com/jmorganca/ollama/commit/41f73433bbf607160f2356388463de42714f2d23) section.
I created a bot for Telegram; it uses aiogram and can stream API responses into a single message without hitting rate limits.
Soon i... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1364/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6039 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6039/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6039/comments | https://api.github.com/repos/ollama/ollama/issues/6039/events | https://github.com/ollama/ollama/pull/6039 | 2,434,633,290 | PR_kwDOJ0Z1Ps52tC4o | 6,039 | update llama.cpp submodule to `6eeaeba1` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 1 | 2024-07-29T07:11:08 | 2024-07-30T01:09:01 | 2024-07-29T20:20:26 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6039",
"html_url": "https://github.com/ollama/ollama/pull/6039",
"diff_url": "https://github.com/ollama/ollama/pull/6039.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6039.patch",
"merged_at": "2024-07-29T20:20:26"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6039/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3019 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3019/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3019/comments | https://api.github.com/repos/ollama/ollama/issues/3019/events | https://github.com/ollama/ollama/issues/3019 | 2,177,110,079 | I_kwDOJ0Z1Ps6BxBA_ | 3,019 | Automatic sub-language constraint sections | {
"login": "mirek",
"id": 8561,
"node_id": "MDQ6VXNlcjg1NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mirek",
"html_url": "https://github.com/mirek",
"followers_url": "https://api.github.com/users/mirek/followers",
"fol... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-03-09T07:34:20 | 2024-03-12T07:23:29 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be useful if Ollama supported automatic, text-based, plugin-driven grammar sections.
Every time a triple-backtick section is used in the text, generation enters that language's constraint mode (see the sketch after this record), for example:
1. "```json" enters json bnf
2. "```json:Foo" enters json bnf + json schema for Foo object
3. "```python" enters py... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3019/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3019/timeline | null | null | false |
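The closest mechanism that already exists is the request-level `format` parameter, which constrains the entire response to valid JSON; the proposal above would effectively scope such constraints to individual fenced sections. A minimal sketch of the existing whole-response behavior, with `llama3` as an assumed placeholder model:

```python
# Minimal sketch of the existing whole-response constraint, assuming a local
# Ollama server and a pulled "llama3" model (an illustrative placeholder).
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": 'List two fruits as a JSON object with a "fruits" array.',
        "format": "json",  # constrain sampling so the response parses as JSON
        "stream": False,
    },
    timeout=120,
)
print(json.loads(resp.json()["response"]))
```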
https://api.github.com/repos/ollama/ollama/issues/985 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/985/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/985/comments | https://api.github.com/repos/ollama/ollama/issues/985/events | https://github.com/ollama/ollama/pull/985 | 1,975,506,789 | PR_kwDOJ0Z1Ps5egBFc | 985 | restore runner build flags | {
"login": "yoshino-s",
"id": 28624661,
"node_id": "MDQ6VXNlcjI4NjI0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28624661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshino-s",
"html_url": "https://github.com/yoshino-s",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 5 | 2023-11-03T06:01:13 | 2023-11-24T08:00:30 | 2023-11-14T16:52:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/985",
"html_url": "https://github.com/ollama/ollama/pull/985",
"diff_url": "https://github.com/ollama/ollama/pull/985.diff",
"patch_url": "https://github.com/ollama/ollama/pull/985.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/985/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5595 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5595/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5595/comments | https://api.github.com/repos/ollama/ollama/issues/5595/events | https://github.com/ollama/ollama/issues/5595 | 2,400,345,707 | I_kwDOJ0Z1Ps6PEl5r | 5,595 | codegeex4 | {
"login": "sinxyz",
"id": 32287704,
"node_id": "MDQ6VXNlcjMyMjg3NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/32287704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinxyz",
"html_url": "https://github.com/sinxyz",
"followers_url": "https://api.github.com/users/sinxyz/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 4 | 2024-07-10T10:21:38 | 2024-11-17T22:24:22 | 2024-11-17T22:24:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Can't use it; the output is: GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5595/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2598 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2598/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2598/comments | https://api.github.com/repos/ollama/ollama/issues/2598/events | https://github.com/ollama/ollama/issues/2598 | 2,143,104,049 | I_kwDOJ0Z1Ps5_vSwx | 2,598 | Add ROCm support on windows | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 16 | 2024-02-19T20:32:43 | 2024-03-27T05:51:10 | 2024-03-07T18:51:01 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Users with Radeon cards would like to be able to take advantage of the new native Windows app and not have to resort to WSL2 to get support for their AMD GPUs. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2598/reactions",
"total_count": 12,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/ollama/ollama/issues/2598/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7481 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7481/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7481/comments | https://api.github.com/repos/ollama/ollama/issues/7481/events | https://github.com/ollama/ollama/issues/7481 | 2,631,299,461 | I_kwDOJ0Z1Ps6c1nGF | 7,481 | [FEATURE REQUEST] - Add option to add code into the "Send a Message" prompt <> | {
"login": "BryanBond",
"id": 187150339,
"node_id": "U_kgDOCyewAw",
"avatar_url": "https://avatars.githubusercontent.com/u/187150339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBond",
"html_url": "https://github.com/BryanBond",
"followers_url": "https://api.github.com/users/BryanB... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-03T16:00:28 | 2024-11-05T03:52:51 | 2024-11-05T03:49:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I would love to see the addition of an 'add code <>' option to the "Send a Message" box in Ollama. This would make inquiry formatting to the LLM much cleaner and easier to interpret/read. | {
"login": "BryanBond",
"id": 187150339,
"node_id": "U_kgDOCyewAw",
"avatar_url": "https://avatars.githubusercontent.com/u/187150339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBond",
"html_url": "https://github.com/BryanBond",
"followers_url": "https://api.github.com/users/BryanB... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7481/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/1933 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1933/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1933/comments | https://api.github.com/repos/ollama/ollama/issues/1933/events | https://github.com/ollama/ollama/issues/1933 | 2,077,696,781 | I_kwDOJ0Z1Ps571yMN | 1,933 | Wrong tag on dockerhub | {
"login": "otavio-silva",
"id": 22914610,
"node_id": "MDQ6VXNlcjIyOTE0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/22914610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/otavio-silva",
"html_url": "https://github.com/otavio-silva",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 1 | 2024-01-11T22:17:15 | 2024-01-11T23:02:59 | 2024-01-11T23:02:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # Description
It seems the latest version was released under the 0.0.0 tag (see https://hub.docker.com/r/ollama/ollama/tags and https://hub.docker.com/layers/ollama/ollama/0.0.0/images/sha256-720e093927cfaed71c70dcc70bd32f9c39be3937243ebd6ddcdce5016d5deb2b?context=explore) instead of 0.1.20, which is the correct number. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1933/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6335 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6335/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6335/comments | https://api.github.com/repos/ollama/ollama/issues/6335/events | https://github.com/ollama/ollama/issues/6335 | 2,462,885,777 | I_kwDOJ0Z1Ps6SzKeR | 6,335 | Bug in Continuous Questioning and Output Content on Windows | {
"login": "Lucas-SJY",
"id": 72309268,
"node_id": "MDQ6VXNlcjcyMzA5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/72309268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lucas-SJY",
"html_url": "https://github.com/Lucas-SJY",
"followers_url": "https://api.github.com/users/... | [
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | null | [] | null | 2 | 2024-08-13T09:38:03 | 2024-09-05T19:05:34 | 2024-09-05T19:05:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I ran into the following issues on Windows.
In Ollama 0.2.5, running llama3.1, it cannot respond the second time and returns the following error message: "Error: template: :28:7: executing "" at <.ToolCalls>: can't evaluate field ToolCalls in type *api.Message", and it sometimes did not return...
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6335/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1034 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1034/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1034/comments | https://api.github.com/repos/ollama/ollama/issues/1034/events | https://github.com/ollama/ollama/pull/1034 | 1,981,783,166 | PR_kwDOJ0Z1Ps5e1FqF | 1,034 | Fix sudo variable in install.sh | {
"login": "upchui",
"id": 24575829,
"node_id": "MDQ6VXNlcjI0NTc1ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/24575829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upchui",
"html_url": "https://github.com/upchui",
"followers_url": "https://api.github.com/users/upchui/fo... | [] | closed | false | null | [] | null | 0 | 2023-11-07T16:42:52 | 2023-11-07T17:59:58 | 2023-11-07T17:59:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1034",
"html_url": "https://github.com/ollama/ollama/pull/1034",
"diff_url": "https://github.com/ollama/ollama/pull/1034.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1034.patch",
"merged_at": "2023-11-07T17:59:57"
} | In one place, `sudo` was not replaced with the sudo variable. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1034/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6824 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6824/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6824/comments | https://api.github.com/repos/ollama/ollama/issues/6824/events | https://github.com/ollama/ollama/issues/6824 | 2,528,136,689 | I_kwDOJ0Z1Ps6WsE3x | 6,824 | How to remove this | {
"login": "lezi-fun",
"id": 177434121,
"node_id": "U_kgDOCpNuCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177434121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lezi-fun",
"html_url": "https://github.com/lezi-fun",
"followers_url": "https://api.github.com/users/lezi-fun/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-09-16T10:57:56 | 2024-09-16T11:03:27 | 2024-09-16T11:03:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
How to remove this
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "lezi-fun",
"id": 177434121,
"node_id": "U_kgDOCpNuCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/177434121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lezi-fun",
"html_url": "https://github.com/lezi-fun",
"followers_url": "https://api.github.com/users/lezi-fun/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6824/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8088 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8088/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8088/comments | https://api.github.com/repos/ollama/ollama/issues/8088/events | https://github.com/ollama/ollama/issues/8088 | 2,738,574,153 | I_kwDOJ0Z1Ps6jO1NJ | 8,088 | pull error EOF with gemma2:27b-instruct-q8_0 | {
"login": "rcanand",
"id": 303900,
"node_id": "MDQ6VXNlcjMwMzkwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcanand",
"html_url": "https://github.com/rcanand",
"followers_url": "https://api.github.com/users/rcanand/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2024-12-13T14:51:46 | 2024-12-14T16:39:22 | 2024-12-14T16:39:22 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I call `ollama pull gemma2:27b-instruct-q8_0`, I get error `EOF`.
I have pulled other models successfully (including other gemma2 models) from the same system, and I have sufficient disk space, etc. I am running into this issue with just this model.
Based on web search, I suspect the file on the s... | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8088/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2906 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2906/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2906/comments | https://api.github.com/repos/ollama/ollama/issues/2906/events | https://github.com/ollama/ollama/issues/2906 | 2,165,830,857 | I_kwDOJ0Z1Ps6BF_TJ | 2,906 | chat api stuck when using two ChatOllama same time | {
"login": "levin8023",
"id": 30230347,
"node_id": "MDQ6VXNlcjMwMjMwMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/30230347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/levin8023",
"html_url": "https://github.com/levin8023",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-03-04T03:10:15 | 2024-05-15T01:04:56 | 2024-05-15T01:04:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I use LangChain for testing the LLM. When two clients connect to Ollama for chat API responses at the same time, it gets stuck; both use the following code:
`ChatOllama(model=xxx, base_url=xxx, verbose=True, temperature=0, num_ctx=2048)` (same model)
I have to restart the Ollama server. Are there any solutions for using the Ollama chat API with more than 1 cl...
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2906/timeline | null | completed | false |
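A minimal sketch of the two-client setup described above, using `langchain_community`'s `ChatOllama`; the model name `llama2`, the base URL, and the prompts are assumed placeholders. Newer servers also expose the `OLLAMA_NUM_PARALLEL` environment variable to control how many requests run concurrently:

```python
# Minimal sketch of two clients hitting the chat API at once, as in the report.
# "llama2", the base_url, and the prompts are assumed placeholder values.
from concurrent.futures import ThreadPoolExecutor
from langchain_community.chat_models import ChatOllama

def ask(prompt: str) -> str:
    # Each worker builds its own client, mirroring two separate programs.
    llm = ChatOllama(
        model="llama2",
        base_url="http://localhost:11434",
        temperature=0,
        num_ctx=2048,
    )
    return llm.invoke(prompt).content

# Two concurrent chat requests against the same model.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(ask, ["Hello from client A", "Hello from client B"]))
print(results)
```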
https://api.github.com/repos/ollama/ollama/issues/8361 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8361/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8361/comments | https://api.github.com/repos/ollama/ollama/issues/8361/events | https://github.com/ollama/ollama/issues/8361 | 2,777,280,036 | I_kwDOJ0Z1Ps6lie4k | 8,361 | llama3.1-8B doesn't utilize my gpu | {
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sun... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2025-01-09T09:26:11 | 2025-01-10T04:08:55 | 2025-01-10T03:38:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I tried to run llama3.1-8B-Instruct, it just didn't utilize my GPU and only used my CPU, so the speed is very slow. However, the server log said that the ollama server detected my GPU and moved my model onto it. Could anyone help me? I include the output of `nvidia-smi` and the server log as... (a diagnostic sketch follows this row) | {
"login": "sunday-hao",
"id": 127651124,
"node_id": "U_kgDOB5vNNA",
"avatar_url": "https://avatars.githubusercontent.com/u/127651124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunday-hao",
"html_url": "https://github.com/sunday-hao",
"followers_url": "https://api.github.com/users/sun... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8361/timeline | null | completed | false |
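A quick diagnostic sketch for reports like the one above: ask the running server how much of each loaded model sits in VRAM. This assumes a recent Ollama on the default port whose `/api/ps` response carries `size` and `size_vram` fields per loaded model.

```python
# Query /api/ps and report what fraction of each loaded model is in VRAM.
import json
from urllib.request import urlopen

with urlopen("http://localhost:11434/api/ps") as resp:
    info = json.load(resp)

for m in info.get("models", []):
    total, vram = m.get("size", 0), m.get("size_vram", 0)
    pct = 100 * vram / total if total else 0
    print(f"{m.get('name')}: {pct:.0f}% of the model is in GPU memory")
```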
https://api.github.com/repos/ollama/ollama/issues/8686 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8686/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8686/comments | https://api.github.com/repos/ollama/ollama/issues/8686/events | https://github.com/ollama/ollama/issues/8686 | 2,820,001,072 | I_kwDOJ0Z1Ps6oFc0w | 8,686 | Support Deepseek Janus Pro Series (7B & 1B) | {
"login": "zytoh0",
"id": 90326544,
"node_id": "MDQ6VXNlcjkwMzI2NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/90326544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zytoh0",
"html_url": "https://github.com/zytoh0",
"followers_url": "https://api.github.com/users/zytoh0/fo... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 2 | 2025-01-30T06:17:54 | 2025-01-30T08:28:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, good day to you all. I would like to request that ollama add support for Deepseek Janus Pro Series (currently only 7B & 1B):
1. https://huggingface.co/deepseek-ai/Janus-Pro-1B
2. https://huggingface.co/deepseek-ai/Janus-Pro-7B | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8686/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8686/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2261 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2261/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2261/comments | https://api.github.com/repos/ollama/ollama/issues/2261/events | https://github.com/ollama/ollama/issues/2261 | 2,106,395,396 | I_kwDOJ0Z1Ps59jQsE | 2,261 | :link: Documentation request - Please add HF model url on `codellama` model page :pray: | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers"... | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/... | null | 5 | 2024-01-29T20:49:16 | 2024-05-11T20:17:47 | 2024-05-10T23:34:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :grey_question: About
[`codellama` has just been released with its 70B version](https://twitter.com/ollama/status/1752034262101205450)

:point_right: ... but on its [`ollama` library page](https://ollama.ai/library/c... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2261/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5937 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5937/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5937/comments | https://api.github.com/repos/ollama/ollama/issues/5937/events | https://github.com/ollama/ollama/issues/5937 | 2,428,808,383 | I_kwDOJ0Z1Ps6QxKy_ | 5,937 | Request to add PyOllaMx to the community integration list under Web/Desktop Category | {
"login": "kspviswa",
"id": 7476271,
"node_id": "MDQ6VXNlcjc0NzYyNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7476271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kspviswa",
"html_url": "https://github.com/kspviswa",
"followers_url": "https://api.github.com/users/kspvi... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-07-25T02:17:40 | 2024-09-04T03:05:22 | 2024-09-04T01:59:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Project details : https://github.com/kspviswa/pyOllaMx/ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5937/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3805 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3805/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3805/comments | https://api.github.com/repos/ollama/ollama/issues/3805/events | https://github.com/ollama/ollama/pull/3805 | 2,255,247,570 | PR_kwDOJ0Z1Ps5tSFeH | 3,805 | ♻️ refactor: update langchain-python-simple to use the langchain_community | {
"login": "dkruyt",
"id": 713812,
"node_id": "MDQ6VXNlcjcxMzgxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/713812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkruyt",
"html_url": "https://github.com/dkruyt",
"followers_url": "https://api.github.com/users/dkruyt/follow... | [] | closed | false | null | [] | null | 2 | 2024-04-21T20:22:30 | 2024-11-21T11:05:54 | 2024-11-21T11:05:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3805",
"html_url": "https://github.com/ollama/ollama/pull/3805",
"diff_url": "https://github.com/ollama/ollama/pull/3805.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3805.patch",
"merged_at": null
} | * importing Ollama from langchain.llms is deprecated; import it from langchain_community instead
* predict is deprecated; use invoke (see the before/after sketch following this row) | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3805/timeline | null | null | true |
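A minimal before/after sketch of the migration this PR describes; the model name and prompt are placeholders, and `langchain-community` is assumed to be installed.

```python
# Before (deprecated):
#   from langchain.llms import Ollama
#   llm = Ollama(model="llama2")
#   llm.predict("Why is the sky blue?")

# After: import from langchain_community and call invoke instead.
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")
print(llm.invoke("Why is the sky blue?"))
```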
https://api.github.com/repos/ollama/ollama/issues/644 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/644/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/644/comments | https://api.github.com/repos/ollama/ollama/issues/644/events | https://github.com/ollama/ollama/issues/644 | 1,918,748,305 | I_kwDOJ0Z1Ps5yXcaR | 644 | error: illegal instruction on CPUs without AVX or AVX2 instruction sets | {
"login": "jacoboglez",
"id": 31385011,
"node_id": "MDQ6VXNlcjMxMzg1MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/31385011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacoboglez",
"html_url": "https://github.com/jacoboglez",
"followers_url": "https://api.github.com/use... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 14 | 2023-09-29T07:33:17 | 2024-10-07T17:16:59 | 2023-10-28T19:24:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was testing the Ollama release for WSL and I could not get any model running.
I installed it as indicated on the website:
`curl https://ollama.ai/install.sh | sh`
I got the server running correctly, and the model was downloaded properly.
Finally, when trying to run the model (`ollama run llama2`) I got the foll... (a CPU-flag check sketch follows this row) | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/644/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/644/timeline | null | completed | false |
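Since this report is about CPUs lacking AVX/AVX2, here is a small sketch for checking the CPU flags before installing; it assumes `/proc/cpuinfo` is readable, which holds on native Linux and WSL2.

```python
# Report whether the CPU advertises the AVX / AVX2 instruction sets
# by scanning the "flags" line of /proc/cpuinfo.
def cpu_flags() -> set:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for isa in ("avx", "avx2"):
    print(f"{isa}: {'present' if isa in flags else 'MISSING'}")
```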
https://api.github.com/repos/ollama/ollama/issues/5980 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5980/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5980/comments | https://api.github.com/repos/ollama/ollama/issues/5980/events | https://github.com/ollama/ollama/issues/5980 | 2,431,938,432 | I_kwDOJ0Z1Ps6Q9G-A | 5,980 | Context in /api/generate response grows too big. | {
"login": "slouffka",
"id": 8129,
"node_id": "MDQ6VXNlcjgxMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slouffka",
"html_url": "https://github.com/slouffka",
"followers_url": "https://api.github.com/users/slouffka/follow... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-07-26T10:31:22 | 2024-11-21T12:47:42 | 2024-08-01T22:14:00 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm coding my own Chat UI for Ollama and using the context feature to implement dialog mode. So every time Ollama generates a response, the returned context (embeddings) is saved into the chat object. On the next prompt this context is passed into `/api/generate`, then after the response the resulting context i... (see the round-trip sketch after this row) | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5980/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5980/timeline | null | completed | false |
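A minimal sketch of the `context` round-trip described in the issue: each `/api/generate` response returns a `context` token list that is fed back with the next prompt, and printing its length across turns shows the growth being reported. The model name is a placeholder.

```python
# Dialog mode via /api/generate: pass each response's "context" back in.
import json
from urllib.request import Request, urlopen

def generate(prompt, context=None, model="llama2"):
    payload = {"model": model, "prompt": prompt, "stream": False}
    if context:
        payload["context"] = context
    req = Request("http://localhost:11434/api/generate",
                  data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

ctx = None
for prompt in ("Hi, who are you?", "And what can you do?"):
    reply = generate(prompt, ctx)
    ctx = reply.get("context")
    print(f"context size after this turn: {len(ctx or [])} tokens")
```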
https://api.github.com/repos/ollama/ollama/issues/3474 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3474/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3474/comments | https://api.github.com/repos/ollama/ollama/issues/3474/events | https://github.com/ollama/ollama/issues/3474 | 2,222,766,896 | I_kwDOJ0Z1Ps6EfLsw | 3,474 | ollama process exit but llama.cpp process remains as a zombie process | {
"login": "mofanke",
"id": 54242816,
"node_id": "MDQ6VXNlcjU0MjQyODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mofanke",
"html_url": "https://github.com/mofanke",
"followers_url": "https://api.github.com/users/mofank... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 6 | 2024-04-03T12:12:52 | 2024-06-13T21:26:16 | 2024-04-28T18:58:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?

Then I killed the ollama process (a sketch for spotting leftover runner processes follows this row):

### What did you expect to see?
llama.cpp process exit as ollama ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3474/timeline | null | completed | false |
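A small sketch for spotting the situation in this report on Linux: list processes in zombie state whose command name mentions llama. The name match is an assumption about the runner binary; adjust it to whatever `ps` actually shows.

```python
# Find zombie (state "Z") processes whose command mentions "llama".
import subprocess

out = subprocess.run(["ps", "-eo", "pid,stat,comm"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines()[1:]:
    pid, stat, comm = line.split(None, 2)
    if "Z" in stat and "llama" in comm:
        print(f"leftover runner: pid={pid} state={stat} comm={comm}")
```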
https://api.github.com/repos/ollama/ollama/issues/7228 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7228/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7228/comments | https://api.github.com/repos/ollama/ollama/issues/7228/events | https://github.com/ollama/ollama/issues/7228 | 2,592,583,406 | I_kwDOJ0Z1Ps6ah67u | 7,228 | Llama-3.1-Nemotron-70B | {
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/non... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2024-10-16T17:33:14 | 2024-10-16T22:02:18 | 2024-10-16T22:02:18 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Seems to just be llama 3.1 with a ChatML prompt format(?), so it should be easy to add, and it seems to beat 4o on some benchmarks... We will see how that actually plays out, but it seems really good to me
https://huggingface.co/collections/nvidia/llama-31-nemotron-70b-670e93cd366feea16abc13d8 | {
"login": "nonetrix",
"id": 45698918,
"node_id": "MDQ6VXNlcjQ1Njk4OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonetrix",
"html_url": "https://github.com/nonetrix",
"followers_url": "https://api.github.com/users/non... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7228/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6322 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6322/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6322/comments | https://api.github.com/repos/ollama/ollama/issues/6322/events | https://github.com/ollama/ollama/issues/6322 | 2,461,455,113 | I_kwDOJ0Z1Ps6SttMJ | 6,322 | Why role must be "system" or "user" or "assistant"? How can I add a custom role like "tool"? | {
"login": "zhangsheng377",
"id": 3692247,
"node_id": "MDQ6VXNlcjM2OTIyNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3692247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangsheng377",
"html_url": "https://github.com/zhangsheng377",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | 13 | 2024-08-12T16:39:06 | 2024-09-04T16:11:44 | 2024-09-04T04:25:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://github.com/ollama/ollama/blob/15c2d8fe149ba2b58aadbab615a6955f8821c7a9/parser/parser.go#L294 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6322/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8392 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8392/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8392/comments | https://api.github.com/repos/ollama/ollama/issues/8392/events | https://github.com/ollama/ollama/issues/8392 | 2,782,317,383 | I_kwDOJ0Z1Ps6l1stH | 8,392 | Empty 'assistant' message | {
"login": "pulinagrawal",
"id": 8232040,
"node_id": "MDQ6VXNlcjgyMzIwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pulinagrawal",
"html_url": "https://github.com/pulinagrawal",
"followers_url": "https://api.github.com... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2025-01-12T06:45:37 | 2025-01-13T19:25:57 | 2025-01-13T19:25:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
With the following Python code (a runnable variant follows this row):
```
>>> import ollama
>>> my = {
... "model": "llama3.2",
... "options": {
... "temperature": 0
... },
... "messages": [{"role": "system", "content": "You are a DnD Dungeon Master. Say something in your first message to the user."}
... ],
.... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8392/timeline | null | completed | false |
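The snippet in the issue body is cut off; below is a runnable variant of the same shape, assuming the `ollama` Python package and that the elided part ultimately passes the dict to `ollama.chat`.

```python
# Runnable variant of the truncated snippet above. The final call is an
# assumption about how the request dict was used.
import ollama

my = {
    "model": "llama3.2",
    "options": {"temperature": 0},
    "messages": [{
        "role": "system",
        "content": "You are a DnD Dungeon Master. "
                   "Say something in your first message to the user.",
    }],
}
resp = ollama.chat(**my)
# An empty string here reproduces the reported empty 'assistant' message.
print(resp["message"]["role"], repr(resp["message"]["content"]))
```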
https://api.github.com/repos/ollama/ollama/issues/2651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2651/comments | https://api.github.com/repos/ollama/ollama/issues/2651/events | https://github.com/ollama/ollama/issues/2651 | 2,147,557,948 | I_kwDOJ0Z1Ps6AASI8 | 2,651 | Download Monitoring Error | {
"login": "crimson206",
"id": 110409356,
"node_id": "U_kgDOBpS2jA",
"avatar_url": "https://avatars.githubusercontent.com/u/110409356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crimson206",
"html_url": "https://github.com/crimson206",
"followers_url": "https://api.github.com/users/cri... | [] | closed | false | null | [] | null | 1 | 2024-02-21T19:49:46 | 2024-02-21T23:23:54 | 2024-02-21T23:23:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2651/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/3518 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3518/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3518/comments | https://api.github.com/repos/ollama/ollama/issues/3518/events | https://github.com/ollama/ollama/pull/3518 | 2,229,492,472 | PR_kwDOJ0Z1Ps5r674h | 3,518 | ignore vscode debug build | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | [] | closed | false | null | [] | null | 0 | 2024-04-07T01:01:04 | 2024-04-23T00:47:32 | 2024-04-23T00:47:32 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3518",
"html_url": "https://github.com/ollama/ollama/pull/3518",
"diff_url": "https://github.com/ollama/ollama/pull/3518.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3518.patch",
"merged_at": null
} | Prevent this from accidentally getting added to the repo history. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3518/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/223 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/223/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/223/comments | https://api.github.com/repos/ollama/ollama/issues/223/events | https://github.com/ollama/ollama/pull/223 | 1,823,487,815 | PR_kwDOJ0Z1Ps5Wf9tc | 223 | show system/template/license layers from cmd prompt | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 0 | 2023-07-27T02:07:07 | 2023-07-27T23:58:41 | 2023-07-27T23:58:40 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/223",
"html_url": "https://github.com/ollama/ollama/pull/223",
"diff_url": "https://github.com/ollama/ollama/pull/223.diff",
"patch_url": "https://github.com/ollama/ollama/pull/223.patch",
"merged_at": "2023-07-27T23:58:40"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/223/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5094 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5094/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5094/comments | https://api.github.com/repos/ollama/ollama/issues/5094/events | https://github.com/ollama/ollama/issues/5094 | 2,356,504,379 | I_kwDOJ0Z1Ps6MdWc7 | 5,094 | No "Restart to update" option for Windows auto update | {
"login": "vootox",
"id": 27273724,
"node_id": "MDQ6VXNlcjI3MjczNzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/27273724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vootox",
"html_url": "https://github.com/vootox",
"followers_url": "https://api.github.com/users/vootox/fo... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": ... | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 3 | 2024-06-17T06:33:51 | 2024-06-19T16:32:34 | 2024-06-19T16:32:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Popup says there is an Ollama update; I find I must _Click on the taskbar or menubar item and click "Restart to update" to apply the update._ But, I only see `View Log` and `Quit Ollama` and no `Restart to update`. The logs do appear to have installed the update so
I guess it's being done automatically. I'd rather the... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5094/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5094/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4305 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4305/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4305/comments | https://api.github.com/repos/ollama/ollama/issues/4305/events | https://github.com/ollama/ollama/pull/4305 | 2,288,618,125 | PR_kwDOJ0Z1Ps5vCjmF | 4,305 | fix typo | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-09T23:23:48 | 2024-05-09T23:42:10 | 2024-05-09T23:42:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4305",
"html_url": "https://github.com/ollama/ollama/pull/4305",
"diff_url": "https://github.com/ollama/ollama/pull/4305.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4305.patch",
"merged_at": "2024-05-09T23:42:10"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4305/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4737 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4737/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4737/comments | https://api.github.com/repos/ollama/ollama/issues/4737/events | https://github.com/ollama/ollama/pull/4737 | 2,326,726,821 | PR_kwDOJ0Z1Ps5xEW3q | 4,737 | only generate on relevant changes | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-30T23:54:28 | 2024-05-31T00:17:51 | 2024-05-31T00:17:50 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4737",
"html_url": "https://github.com/ollama/ollama/pull/4737",
"diff_url": "https://github.com/ollama/ollama/pull/4737.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4737.patch",
"merged_at": "2024-05-31T00:17:50"
} | Relevant changes include changes to C++, the generate scripts, or the llama.cpp submodule. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4737/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4193 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4193/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4193/comments | https://api.github.com/repos/ollama/ollama/issues/4193/events | https://github.com/ollama/ollama/issues/4193 | 2,279,952,316 | I_kwDOJ0Z1Ps6H5U-8 | 4,193 | mixtral:8x22b has missing weights | {
"login": "codebam",
"id": 6035884,
"node_id": "MDQ6VXNlcjYwMzU4ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6035884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codebam",
"html_url": "https://github.com/codebam",
"followers_url": "https://api.github.com/users/codebam/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-06T03:36:27 | 2024-05-06T18:31:39 | 2024-05-06T18:31:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
❯ ollama run mixtral:8x22b
Error: exception create_tensor: tensor 'blk.0.ffn_gate.0.weight' not found
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.31 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4193/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8537 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8537/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8537/comments | https://api.github.com/repos/ollama/ollama/issues/8537/events | https://github.com/ollama/ollama/issues/8537 | 2,804,977,080 | I_kwDOJ0Z1Ps6nMI24 | 8,537 | Ollama stops giving outputs after a few runs | {
"login": "mansibm6",
"id": 63543775,
"node_id": "MDQ6VXNlcjYzNTQzNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/63543775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansibm6",
"html_url": "https://github.com/mansibm6",
"followers_url": "https://api.github.com/users/man... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2025-01-22T17:37:05 | 2025-01-22T20:40:21 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've been trying to run "smallthinker" and "llama3.2:1b", but after around 30 runs, the models stop giving outputs. However, ollama is running with 100% CPU in the background on my Mac.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8537/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/460 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/460/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/460/comments | https://api.github.com/repos/ollama/ollama/issues/460/events | https://github.com/ollama/ollama/issues/460 | 1,879,040,158 | I_kwDOJ0Z1Ps5v_-Ce | 460 | 404 Client Error: Not Found for url: https://ollama.ai/api/models when running the model | {
"login": "Satyam7166-tech",
"id": 62897696,
"node_id": "MDQ6VXNlcjYyODk3Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/62897696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Satyam7166-tech",
"html_url": "https://github.com/Satyam7166-tech",
"followers_url": "https://api... | [] | closed | false | null | [] | null | 1 | 2023-09-03T10:25:06 | 2023-09-03T13:52:27 | 2023-09-03T13:51:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This was working for me yesterday but is giving me this error after a restart. My Ollama server is on:
<img width="589" alt="image" src="https://github.com/jmorganca/ollama/assets/62897696/883387f7-6e19-4b09-abd8-38d717122bda">
System: Mac M1 Pro.
Also, I tried this with a different user on my Mac and it works.
I... | {
"login": "Satyam7166-tech",
"id": 62897696,
"node_id": "MDQ6VXNlcjYyODk3Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/62897696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Satyam7166-tech",
"html_url": "https://github.com/Satyam7166-tech",
"followers_url": "https://api... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/460/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4650 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4650/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4650/comments | https://api.github.com/repos/ollama/ollama/issues/4650/events | https://github.com/ollama/ollama/issues/4650 | 2,317,818,461 | I_kwDOJ0Z1Ps6KJxpd | 4,650 | BCEmbedding model support | {
"login": "laipz8200",
"id": 16485841,
"node_id": "MDQ6VXNlcjE2NDg1ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16485841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laipz8200",
"html_url": "https://github.com/laipz8200",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2024-05-26T16:10:17 | 2024-05-26T23:54:42 | 2024-05-26T23:54:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I would like to request support for the [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) model, which is an embedding model that performs exceptionally well in both Chinese and English.
Thank you very much for your work.
| {
"login": "laipz8200",
"id": 16485841,
"node_id": "MDQ6VXNlcjE2NDg1ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16485841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laipz8200",
"html_url": "https://github.com/laipz8200",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4650/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/970 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/970/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/970/comments | https://api.github.com/repos/ollama/ollama/issues/970/events | https://github.com/ollama/ollama/issues/970 | 1,973,900,109 | I_kwDOJ0Z1Ps51p1NN | 970 | problem on last release | {
"login": "francescoagati",
"id": 175524,
"node_id": "MDQ6VXNlcjE3NTUyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/175524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescoagati",
"html_url": "https://github.com/francescoagati",
"followers_url": "https://api.github... | [] | closed | false | null | [] | null | 7 | 2023-11-02T10:00:29 | 2023-11-04T20:34:03 | 2023-11-04T18:55:24 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello,
I have noticed a big change with the last release.
Many models go haywire on a simple summarization task, generating random words or entering an infinite loop.
I have rolled back to an old version of ollama | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/970/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/970/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4264 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4264/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4264/comments | https://api.github.com/repos/ollama/ollama/issues/4264/events | https://github.com/ollama/ollama/pull/4264 | 2,286,303,700 | PR_kwDOJ0Z1Ps5u6wk3 | 4,264 | Centralize GPU configuration vars | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-05-08T19:35:08 | 2024-06-15T14:33:56 | 2024-06-15T14:33:52 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4264",
"html_url": "https://github.com/ollama/ollama/pull/4264",
"diff_url": "https://github.com/ollama/ollama/pull/4264.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4264.patch",
"merged_at": "2024-06-15T14:33:52"
} | This should aid in troubleshooting by capturing and reporting the GPU settings at startup in the logs along with all the other server settings.
Fixes #4139
Example output setting the ROCm gfx override:
```
2024/05/08 19:33:27 routes.go:993: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: ... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4264/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/190 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/190/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/190/comments | https://api.github.com/repos/ollama/ollama/issues/190/events | https://github.com/ollama/ollama/issues/190 | 1,818,491,695 | I_kwDOJ0Z1Ps5sY_sv | 190 | brew formula | {
"login": "ryanmerolle",
"id": 9010275,
"node_id": "MDQ6VXNlcjkwMTAyNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9010275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryanmerolle",
"html_url": "https://github.com/ryanmerolle",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-07-24T13:59:35 | 2023-08-30T21:28:48 | 2023-08-30T21:28:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | A brew formula would be super helpful. Thanks for all your work here! | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/190/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/190/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1450 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1450/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1450/comments | https://api.github.com/repos/ollama/ollama/issues/1450/events | https://github.com/ollama/ollama/issues/1450 | 2,034,192,106 | I_kwDOJ0Z1Ps55P07q | 1,450 | Use hard link to import GGUF on the same host to save disk space | {
"login": "xleven",
"id": 10850975,
"node_id": "MDQ6VXNlcjEwODUwOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/10850975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xleven",
"html_url": "https://github.com/xleven",
"followers_url": "https://api.github.com/users/xleven/fo... | [] | closed | false | null | [] | null | 4 | 2023-12-10T02:35:18 | 2023-12-11T17:32:57 | 2023-12-11T17:32:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | If I understand it correctly, the first step of a GGUF import is copying the binary to the model dir under a hashed name. When the number of models (mainly GGUF) grows, duplicated binaries may take a lot of disk space.
I think hard links, or using the raw GGUFs if possible, would do the work of space saving, though it only makes sen... (see the sketch after this row) | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1450/timeline | null | not_planned | false |
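A minimal sketch of the hard-link idea from this request: link an existing GGUF into the blob directory instead of copying it. The paths and the `sha256-` naming scheme are illustrative assumptions, not Ollama's actual import code.

```python
# Import a GGUF by hard link rather than copy (same filesystem required).
import hashlib
import os

def import_gguf(src: str, blob_dir: str) -> str:
    h = hashlib.sha256()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    dst = os.path.join(blob_dir, f"sha256-{h.hexdigest()}")
    if not os.path.exists(dst):
        os.link(src, dst)  # no extra disk space is used
    return dst
```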
https://api.github.com/repos/ollama/ollama/issues/7709 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7709/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7709/comments | https://api.github.com/repos/ollama/ollama/issues/7709/events | https://github.com/ollama/ollama/pull/7709 | 2,666,319,423 | PR_kwDOJ0Z1Ps6CKpZp | 7,709 | docs: add customization section in linux.md | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-11-17T19:00:58 | 2024-11-17T19:48:14 | 2024-11-17T19:48:12 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7709",
"html_url": "https://github.com/ollama/ollama/pull/7709",
"diff_url": "https://github.com/ollama/ollama/pull/7709.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7709.patch",
"merged_at": "2024-11-17T19:48:12"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7709/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1192 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1192/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1192/comments | https://api.github.com/repos/ollama/ollama/issues/1192/events | https://github.com/ollama/ollama/pull/1192 | 2,000,612,686 | PR_kwDOJ0Z1Ps5f1Aqq | 1,192 | main_gpu argument is not getting set for llamacpp | {
"login": "purinda",
"id": 3181510,
"node_id": "MDQ6VXNlcjMxODE1MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3181510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/purinda",
"html_url": "https://github.com/purinda",
"followers_url": "https://api.github.com/users/purinda/... | [] | closed | false | null | [] | null | 0 | 2023-11-18T23:06:12 | 2023-11-21T13:05:44 | 2023-11-20T15:52:52 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1192",
"html_url": "https://github.com/ollama/ollama/pull/1192",
"diff_url": "https://github.com/ollama/ollama/pull/1192.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1192.patch",
"merged_at": "2023-11-20T15:52:52"
} | On a multi-GPU platform I observed that I cannot tell llamacpp which main GPU to use, even though llamacpp itself supports this through the `main_gpu` argument.
This PR fixes just that. (A usage sketch follows this row.) | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1192/timeline | null | null | true |
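A usage sketch for the option this PR wires through, assuming the server forwards `main_gpu` from the request options verbatim to llamacpp; the model name and GPU index are placeholders.

```python
# Ask the server to favor GPU 1 for a single non-streaming generation.
import json
from urllib.request import Request, urlopen

payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {"main_gpu": 1},  # index of the GPU llamacpp should use
}
req = Request("http://localhost:11434/api/generate",
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    print(json.load(resp)["response"])
```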
https://api.github.com/repos/ollama/ollama/issues/1188 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1188/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1188/comments | https://api.github.com/repos/ollama/ollama/issues/1188/events | https://github.com/ollama/ollama/issues/1188 | 2,000,244,238 | I_kwDOJ0Z1Ps53OU4O | 1,188 | Enhancement Request: Network-Distributed Inference(NDI) and Intuitive Resource Sharing | {
"login": "repollo",
"id": 2671466,
"node_id": "MDQ6VXNlcjI2NzE0NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2671466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/repollo",
"html_url": "https://github.com/repollo",
"followers_url": "https://api.github.com/users/repollo/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-11-18T05:27:53 | 2024-03-11T18:04:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I am proposing an enhancement for the Ollama project that I believe would significantly benefit all users, especially those with an interest in distributed computing and AI.
**Proposed Enhancements:**
1. **Network Distribution Toggle:** I want to implement a toggle option in the system tray menu labeled "Network Di... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1188/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1188/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2801 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2801/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2801/comments | https://api.github.com/repos/ollama/ollama/issues/2801/events | https://github.com/ollama/ollama/issues/2801 | 2,158,182,325 | I_kwDOJ0Z1Ps6Aoz-1 | 2,801 | Port should be changeable | {
"login": "pankajkumar229",
"id": 1482916,
"node_id": "MDQ6VXNlcjE0ODI5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1482916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pankajkumar229",
"html_url": "https://github.com/pankajkumar229",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 3 | 2024-02-28T05:59:53 | 2024-03-04T05:24:45 | 2024-03-01T01:36:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I could not change the port Ollama listens on. I hope we can run multiple instances on different ports. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2801/timeline | null | completed | false |
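The listen address (including the port) can be set through the `OLLAMA_HOST` environment variable; below is a small sketch of launching a second instance on another port. The port number is arbitrary.

```python
# Start a second Ollama server on a non-default port via OLLAMA_HOST.
import os
import subprocess

env = dict(os.environ, OLLAMA_HOST="127.0.0.1:11435")
server = subprocess.Popen(["ollama", "serve"], env=env)
print("second instance listening on 127.0.0.1:11435, pid", server.pid)
```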
https://api.github.com/repos/ollama/ollama/issues/4721 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4721/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4721/comments | https://api.github.com/repos/ollama/ollama/issues/4721/events | https://github.com/ollama/ollama/pull/4721 | 2,325,585,662 | PR_kwDOJ0Z1Ps5xAcOr | 4,721 | Add LoongArch64 ISA Support | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | 0 | 2024-05-30T12:40:53 | 2024-06-15T17:18:47 | 2024-06-15T17:18:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4721",
"html_url": "https://github.com/ollama/ollama/pull/4721",
"diff_url": "https://github.com/ollama/ollama/pull/4721.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4721.patch",
"merged_at": null
} | 1. go.mod: replace github.com/chewxy/math32 v1.10.1 with github.com/chewxy/math32 v1.10.2-0.20240509203351, fixing https://github.com/chewxy/math32/issues/23;
2. go.sum;
3. llm.go: add loong64 support;
4. gen_common.sh: add 64-bit LoongArch support;
5. gen_linux.sh: add loongarch64 LASX/LSX ISA support.
6. fix https://g... | {
"login": "HougeLangley",
"id": 1161594,
"node_id": "MDQ6VXNlcjExNjE1OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1161594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HougeLangley",
"html_url": "https://github.com/HougeLangley",
"followers_url": "https://api.github.com... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4721/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3453 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3453/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3453/comments | https://api.github.com/repos/ollama/ollama/issues/3453/events | https://github.com/ollama/ollama/issues/3453 | 2,220,053,399 | I_kwDOJ0Z1Ps6EU1OX | 3,453 | Some ollama cli instructions: specially stop | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2024-04-02T09:48:19 | 2024-09-02T19:36:38 | 2024-09-01T23:55:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
- Stop
- Restart
- Probably change OLLAMA_MODELS, OLLAMA_HOST, and OLLAMA_PORT
It would also be nice to rename `ollama serve` to `ollama start`, in line with how other services or similar tools are started.
### How should we solve this?
- Create CLI instructions to make this possible
ex: $ oll... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3453/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3453/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3609 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3609/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3609/comments | https://api.github.com/repos/ollama/ollama/issues/3609/events | https://github.com/ollama/ollama/issues/3609 | 2,238,838,325 | I_kwDOJ0Z1Ps6FcfY1 | 3,609 | Issue Storage Filling up need help! (Ubuntu server 22.04) | {
"login": "alfi4000",
"id": 149228038,
"node_id": "U_kgDOCOUKBg",
"avatar_url": "https://avatars.githubusercontent.com/u/149228038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alfi4000",
"html_url": "https://github.com/alfi4000",
"followers_url": "https://api.github.com/users/alfi4000/... | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 2 | 2024-04-12T00:55:53 | 2024-04-22T23:48:19 | 2024-04-22T23:48:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I run Ollama, my /dev/mapper directory fills up. See the image below; the marked storage is the one filling up. How can I keep this from happening?
<img width="552" alt="Bildschirmfoto 2024-04-11 um 17 50 08" src="https://github.com/ollama/ollama/assets/166188813/19a72be1-4179-4906-a0e8-ebdd6135a9e... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3609/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6246 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6246/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6246/comments | https://api.github.com/repos/ollama/ollama/issues/6246/events | https://github.com/ollama/ollama/issues/6246 | 2,454,514,541 | I_kwDOJ0Z1Ps6STOtt | 6,246 | Modelfile - Customize a prompt | {
"login": "LucasFreitas88",
"id": 177795987,
"node_id": "U_kgDOCpjzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/177795987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucasFreitas88",
"html_url": "https://github.com/LucasFreitas88",
"followers_url": "https://api.github.c... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 11 | 2024-08-07T23:33:35 | 2024-08-08T18:28:10 | 2024-08-08T18:28:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I cannot update the Modelfile with new prompt instructions, as in the documentation example ("Customize a prompt").
Model: Llama 3.1 8B
Notebook: MacBook Air M1 - macOS Sonoma 14.6.1
The answer to the question posed in the example (hi) is an endless sequence of strange charact... | {
"login": "LucasFreitas88",
"id": 177795987,
"node_id": "U_kgDOCpjzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/177795987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucasFreitas88",
"html_url": "https://github.com/LucasFreitas88",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6246/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6245 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6245/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6245/comments | https://api.github.com/repos/ollama/ollama/issues/6245/events | https://github.com/ollama/ollama/issues/6245 | 2,454,480,087 | I_kwDOJ0Z1Ps6STGTX | 6,245 | A character gets skipped here and there in the output, using any model, over any tunnel | {
"login": "embium",
"id": 82550035,
"node_id": "MDQ6VXNlcjgyNTUwMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/82550035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/embium",
"html_url": "https://github.com/embium",
"followers_url": "https://api.github.com/users/embium/fo... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-08-07T22:51:50 | 2024-08-07T22:58:10 | 2024-08-07T22:58:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
Here is the output:
```
**The Art of Ball Handling: Mastering Control on and off the Court**
When it comes to various sports, mastering ball handling is a aspect that separates the good players from the great ones. Whether you're playing basketball, soccer, or tennis, controllin... | {
"login": "embium",
"id": 82550035,
"node_id": "MDQ6VXNlcjgyNTUwMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/82550035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/embium",
"html_url": "https://github.com/embium",
"followers_url": "https://api.github.com/users/embium/fo... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6245/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1199 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1199/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1199/comments | https://api.github.com/repos/ollama/ollama/issues/1199/events | https://github.com/ollama/ollama/pull/1199 | 2,000,995,719 | PR_kwDOJ0Z1Ps5f2MVM | 1,199 | Fix issues sending incomplete body and add retry backoff for `ollama push` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2023-11-19T19:12:20 | 2023-11-19T19:32:20 | 2023-11-19T19:32:19 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1199",
"html_url": "https://github.com/ollama/ollama/pull/1199",
"diff_url": "https://github.com/ollama/ollama/pull/1199.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1199.patch",
"merged_at": "2023-11-19T19:32:19"
} | Builds on #1184
This change increases the upload chunk sizes and adds more graceful retry backoff to fix transient network issues when using `ollama push`.
It also fixes an issue where an incomplete body would be uploaded, requiring a retry.
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1199/timeline | null | null | true |
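The PR body above mentions graceful retry backoff for `ollama push`. A hedged Go sketch of that general pattern follows; it is not the PR's actual upload code, and the `retry` helper and its parameters are invented for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// retry runs fn up to maxRetries times, sleeping with exponential
// backoff (1s, 2s, 4s, ...) after each failed attempt.
func retry(maxRetries int, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxRetries; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		backoff := time.Duration(1<<attempt) * time.Second
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt+1, err, backoff)
		time.Sleep(backoff)
	}
	return fmt.Errorf("max retries exceeded: %w", err)
}

func main() {
	calls := 0
	err := retry(5, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("transient network error")
		}
		return nil // simulated successful upload
	})
	fmt.Println("result:", err)
}
```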
https://api.github.com/repos/ollama/ollama/issues/7556 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7556/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7556/comments | https://api.github.com/repos/ollama/ollama/issues/7556/events | https://github.com/ollama/ollama/issues/7556 | 2,640,935,358 | I_kwDOJ0Z1Ps6daXm- | 7,556 | llama runner process has terminated: error loading model: unable to allocate backend buffer when AMD iGPU vram allocation larger than 8GB | {
"login": "oatmealm",
"id": 68159077,
"node_id": "MDQ6VXNlcjY4MTU5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/68159077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oatmealm",
"html_url": "https://github.com/oatmealm",
"followers_url": "https://api.github.com/users/oat... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg... | open | false | null | [] | null | 3 | 2024-11-07T12:52:44 | 2024-11-07T21:50:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage.
```
ollama run llama3.2
Error: llama runner process has terminated: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_mod... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7556/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1712 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1712/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1712/comments | https://api.github.com/repos/ollama/ollama/issues/1712/events | https://github.com/ollama/ollama/issues/1712 | 2,055,880,223 | I_kwDOJ0Z1Ps56ij4f | 1,712 | Ollama version | {
"login": "xyproto",
"id": 52813,
"node_id": "MDQ6VXNlcjUyODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/52813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyproto",
"html_url": "https://github.com/xyproto",
"followers_url": "https://api.github.com/users/xyproto/follower... | [] | closed | false | null | [] | null | 7 | 2023-12-25T21:45:40 | 2024-09-10T11:05:33 | 2023-12-26T23:02:10 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi, maintainer of the Arch Linux [`ollama`](https://gitlab.archlinux.org/archlinux/packaging/packages/ollama/) package here.
`ollama --version` is "0.0.0" after building Ollama from source on Arch Linux. Is this intentional? Is there something this `PKGBUILD` is missing?
Thanks in advance.
```bash
pkgname=oll... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1712/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1712/timeline | null | completed | false |
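For context on why a source build can report "0.0.0": the common Go pattern is a package-level version variable overridden at link time with `-ldflags -X`. A sketch of that pattern, with the default value taken from the issue; the exact package path and variable name in the repo should be checked against its `version` package:

```go
// version/version.go (sketch)
package version

// Version defaults to "0.0.0" and is intended to be set at build time,
// for example:
//
//	go build -ldflags "-X <module>/version.Version=0.1.18" .
//
// A PKGBUILD that omits the -X flag will report the default.
var Version = "0.0.0"
```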
https://api.github.com/repos/ollama/ollama/issues/1066 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1066/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1066/comments | https://api.github.com/repos/ollama/ollama/issues/1066/events | https://github.com/ollama/ollama/issues/1066 | 1,986,658,571 | I_kwDOJ0Z1Ps52agEL | 1,066 | Error: mkdir permission denied | {
"login": "pepsiamir",
"id": 22083243,
"node_id": "MDQ6VXNlcjIyMDgzMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22083243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pepsiamir",
"html_url": "https://github.com/pepsiamir",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 2 | 2023-11-10T01:07:08 | 2024-03-27T06:48:55 | 2023-11-16T00:41:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | After installing the Mistral and Sqlcoder models, I got this error.
```
verifying sha256 digest
writing manifest
Error: mkdir /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/mistral: permission denied
```
I had to make the directory manually, and then it succeeded.
```
sudo mkdir /usr/share/ollam... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1066/timeline | null | completed | false |
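Two hedged notes on errors like the one above: in Go, `os.MkdirAll` creates the whole directory chain so missing parents never fail a write, but a genuine permissions problem (as in this issue, where the data directory is owned by another user) has to be fixed with ownership, e.g. `chown` on the data directory. A minimal sketch with a made-up `manifestPath`, not ollama's actual code:

```go
package main

import (
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical path mirroring the shape of the error message above.
	manifestPath := "/tmp/ollama-demo/models/manifests/registry.ollama.ai/library/mistral/latest"

	// Create every missing parent directory; a no-op when the tree
	// already exists. It does not help if the process lacks write
	// permission on an existing parent, which is the case this issue
	// actually hit.
	if err := os.MkdirAll(filepath.Dir(manifestPath), 0o755); err != nil {
		panic(err)
	}

	if err := os.WriteFile(manifestPath, []byte("{}"), 0o644); err != nil {
		panic(err)
	}
}
```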
https://api.github.com/repos/ollama/ollama/issues/6136 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6136/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6136/comments | https://api.github.com/repos/ollama/ollama/issues/6136/events | https://github.com/ollama/ollama/pull/6136 | 2,444,014,175 | PR_kwDOJ0Z1Ps53NI7o | 6,136 | docs: Update api.md | {
"login": "farwish",
"id": 6552412,
"node_id": "MDQ6VXNlcjY1NTI0MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6552412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farwish",
"html_url": "https://github.com/farwish",
"followers_url": "https://api.github.com/users/farwish/... | [] | closed | false | null | [] | null | 2 | 2024-08-02T04:31:42 | 2024-11-21T10:16:21 | 2024-11-21T10:16:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6136",
"html_url": "https://github.com/ollama/ollama/pull/6136",
"diff_url": "https://github.com/ollama/ollama/pull/6136.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6136.patch",
"merged_at": null
} | Name is deprecated in api/types.go | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6136/timeline | null | null | true |
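The one-line PR body refers to request structs in ollama's `api` package that carry both a deprecated `Name` field and its replacement `Model`. A hedged Go sketch of what that means for API callers; the exact field sets live in `api/types.go` and may differ by version:

```go
package main

import (
	"fmt"

	"github.com/ollama/ollama/api"
)

func main() {
	// Prefer Model; Name remains only for backwards compatibility and
	// is marked deprecated in api/types.go.
	req := api.ShowRequest{Model: "llama3"}
	fmt.Printf("%+v\n", req)
}
```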
https://api.github.com/repos/ollama/ollama/issues/3191 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3191/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3191/comments | https://api.github.com/repos/ollama/ollama/issues/3191/events | https://github.com/ollama/ollama/issues/3191 | 2,190,489,213 | I_kwDOJ0Z1Ps6CkDZ9 | 3,191 | Error: pull model manifest: Get "https://ollama.com/token?nonce=6xXg08tJu5sXzjqrvWKxQA&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1710652958": read tcp 192.168.5.215:60112->34.120.132.20:443: read: connection reset by peer | {
"login": "wbsxhh201771",
"id": 100500363,
"node_id": "U_kgDOBf2Diw",
"avatar_url": "https://avatars.githubusercontent.com/u/100500363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wbsxhh201771",
"html_url": "https://github.com/wbsxhh201771",
"followers_url": "https://api.github.com/use... | [
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.gi... | null | 5 | 2024-03-17T05:32:41 | 2024-03-29T00:01:48 | 2024-03-28T20:52:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When I run:
`ollama run llama2`
I encounter this problem:
`Error: pull model manifest: Get "https://ollama.com/token?nonce=6xXg08tJu5sXzjqrvWKxQA&scope=repository%!A(MISSING)library%!F(MISSING)llama2%!A(MISSING)pull&service=ollama.com&ts=1710652958": read tcp 192.168.5.215:60112->34.120.132.20:443: read: connection... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3191/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4355 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4355/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4355/comments | https://api.github.com/repos/ollama/ollama/issues/4355/events | https://github.com/ollama/ollama/issues/4355 | 2,290,833,929 | I_kwDOJ0Z1Ps6Ii1oJ | 4,355 | Ollama doesn' t work well with Zluda after 0.1.34 | {
"login": "4thanks",
"id": 63891627,
"node_id": "MDQ6VXNlcjYzODkxNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/63891627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4thanks",
"html_url": "https://github.com/4thanks",
"followers_url": "https://api.github.com/users/4thank... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-05-11T10:48:29 | 2024-05-13T16:03:17 | 2024-05-13T16:03:17 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I was using Ollama 0.1.32, it worked well with Zluda for my GPU (5700XT), following the steps in [ollama_windows_10_rx6600xt_zluda](https://www.reddit.com/r/ollama/comments/1cf5tq1/ollama_windows_10_rx6600xt_zluda/).
After recently updating to the newest version (0.1.37), the GPU isn't being utilized...
"login": "4thanks",
"id": 63891627,
"node_id": "MDQ6VXNlcjYzODkxNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/63891627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4thanks",
"html_url": "https://github.com/4thanks",
"followers_url": "https://api.github.com/users/4thank... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4355/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4518 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4518/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4518/comments | https://api.github.com/repos/ollama/ollama/issues/4518/events | https://github.com/ollama/ollama/issues/4518 | 2,304,356,057 | I_kwDOJ0Z1Ps6JWa7Z | 4,518 | Add option to control start of response to generate api | {
"login": "notasquid1938",
"id": 99005612,
"node_id": "U_kgDOBea0rA",
"avatar_url": "https://avatars.githubusercontent.com/u/99005612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notasquid1938",
"html_url": "https://github.com/notasquid1938",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 7706482389,
"node_id": ... | open | false | null | [] | null | 0 | 2024-05-19T00:24:06 | 2024-11-06T17:29:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Using openwebui for ollama, or textgenwebui, you can control what the model's response begins with to steer it in a certain direction. It would be very helpful to have this built into the API. I have struggled to recreate this effect with the API by trying to include the model's response portion of the template in my i... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4518/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4518/timeline | null | null | false |
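One way to approximate the requested behavior today, shown as a hedged Go sketch: call `/api/generate` with `raw` enabled so no template is applied, and end the prompt with the desired start of the reply. The template markers below are invented; in practice they must match the model's real template, which is exactly the difficulty the issue describes:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		panic(err)
	}

	req := &api.GenerateRequest{
		Model: "llama3",
		Raw:   true, // server applies no template; prompt is sent verbatim
		// The prompt ends with the start we want the model to continue from.
		Prompt: "<|user|>\nWrite a haiku about the sea.\n<|assistant|>\nSure! Here is a haiku:",
	}

	err = client.Generate(context.Background(), req, func(r api.GenerateResponse) error {
		fmt.Print(r.Response)
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```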
https://api.github.com/repos/ollama/ollama/issues/798 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/798/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/798/comments | https://api.github.com/repos/ollama/ollama/issues/798/events | https://github.com/ollama/ollama/issues/798 | 1,944,769,217 | I_kwDOJ0Z1Ps5z6tLB | 798 | JSON Marshal Escapes Special Characters in Prompts | {
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2023-10-16T09:25:02 | 2023-10-17T16:31:19 | 2023-10-17T16:31:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using the `json.Marshal()` function in `llama.go`, I've noticed that special characters like `<` and `>` are being automatically escaped to `\u003c` and `\u003e`, respectively. This is problematic, especially for prompts that use these characters.
**Example:**
Consider the following prompt:
```
<|system|>
<... | {
"login": "deichbewohner",
"id": 54838329,
"node_id": "MDQ6VXNlcjU0ODM4MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/54838329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deichbewohner",
"html_url": "https://github.com/deichbewohner",
"followers_url": "https://api.githu... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/798/timeline | null | completed | false |
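For reference, this is default behavior of Go's `encoding/json`: `Marshal` HTML-escapes `<`, `>`, and `&`, and the standard library's opt-out is `json.Encoder.SetEscapeHTML(false)`. A self-contained demonstration (not ollama's code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	prompt := map[string]string{"prompt": "<|system|>"}

	// json.Marshal escapes <, >, & so output is safe to embed in HTML.
	b, _ := json.Marshal(prompt)
	fmt.Println(string(b)) // {"prompt":"\u003c|system|\u003e"}

	// json.Encoder can disable that escaping.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode(prompt)
	fmt.Print(buf.String()) // {"prompt":"<|system|>"}
}
```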
https://api.github.com/repos/ollama/ollama/issues/4096 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4096/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4096/comments | https://api.github.com/repos/ollama/ollama/issues/4096/events | https://github.com/ollama/ollama/pull/4096 | 2,274,743,627 | PR_kwDOJ0Z1Ps5uURzS | 4,096 | add _defaultApiClient in api/client.go for reuse | {
"login": "alwqx",
"id": 9915368,
"node_id": "MDQ6VXNlcjk5MTUzNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alwqx",
"html_url": "https://github.com/alwqx",
"followers_url": "https://api.github.com/users/alwqx/follower... | [] | closed | false | null | [] | null | 2 | 2024-05-02T06:43:01 | 2024-05-16T12:45:40 | 2024-05-10T00:19:22 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4096",
"html_url": "https://github.com/ollama/ollama/pull/4096",
"diff_url": "https://github.com/ollama/ollama/pull/4096.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4096.patch",
"merged_at": null
} | Hi, this PR mainly makes the following improvements:
1. Add _defaultApiClient for reuse
I found that `api.ClientFromEnvironment()` is called more than once by some functions in `cmd/cmd.go` (e.g. `RunHandler()`, `generateInteractive`), so I added **_defaultApiClient** for reuse and to reduce memory allocations.
2. update tes... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4096/timeline | null | null | true |
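A minimal sketch of the idea behind this PR: lazily create one shared client with `sync.Once` instead of calling `api.ClientFromEnvironment()` in every command handler. Whether ollama adopted exactly this shape is not shown here:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/ollama/ollama/api"
)

var (
	clientOnce sync.Once
	client     *api.Client
	clientErr  error
)

// apiClient returns a single shared client, creating it on first use.
func apiClient() (*api.Client, error) {
	clientOnce.Do(func() {
		client, clientErr = api.ClientFromEnvironment()
	})
	return client, clientErr
}

func main() {
	c1, _ := apiClient()
	c2, _ := apiClient()
	fmt.Println("same client reused:", c1 == c2)
}
```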
https://api.github.com/repos/ollama/ollama/issues/8164 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8164/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8164/comments | https://api.github.com/repos/ollama/ollama/issues/8164/events | https://github.com/ollama/ollama/issues/8164 | 2,748,824,165 | I_kwDOJ0Z1Ps6j17pl | 8,164 | llama3.2 3B "will fit in available VRAM" of a Nvidia 4060 TI but then runs on CPU. llm server error | {
"login": "felixniemeyer",
"id": 5720176,
"node_id": "MDQ6VXNlcjU3MjAxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5720176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixniemeyer",
"html_url": "https://github.com/felixniemeyer",
"followers_url": "https://api.github.... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-12-18T21:57:41 | 2024-12-25T03:33:33 | 2024-12-18T22:00:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to use llama3.2 on my Nvidia 4060 Ti 16GB but ollama runs it on the CPU.
Here is the server log with debug level logging.
```
2024/12/18 22:54:10 routes.go:1194: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSIO... | {
"login": "felixniemeyer",
"id": 5720176,
"node_id": "MDQ6VXNlcjU3MjAxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5720176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixniemeyer",
"html_url": "https://github.com/felixniemeyer",
"followers_url": "https://api.github.... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8164/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/847 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/847/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/847/comments | https://api.github.com/repos/ollama/ollama/issues/847/events | https://github.com/ollama/ollama/pull/847 | 1,953,059,144 | PR_kwDOJ0Z1Ps5dUSRs | 847 | new readline library | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | [] | closed | false | null | [] | null | 2 | 2023-10-19T21:09:43 | 2023-10-28T14:12:05 | 2023-10-25T23:41:19 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/847",
"html_url": "https://github.com/ollama/ollama/pull/847",
"diff_url": "https://github.com/ollama/ollama/pull/847.diff",
"patch_url": "https://github.com/ollama/ollama/pull/847.patch",
"merged_at": "2023-10-25T23:41:18"
} | This is a simplified version of the readline library which cuts out a lot of the complexity of the version we were using. There are still a few things to add, like "history" and getting multi-line prompts working correctly, but most (many?) things should be more or less working, including:
* Each of the Ctrl-? c... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/847/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/847/timeline | null | null | true |
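For background, the core of any readline implementation is putting the terminal into raw mode so individual keystrokes (including Ctrl chords) are visible, rather than waiting for the tty driver to deliver a full line. A minimal, self-contained sketch using `golang.org/x/term`; this shows the general mechanism, not the PR's code:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	// Raw mode: the program sees each byte as it is typed.
	oldState, err := term.MakeRaw(int(os.Stdin.Fd()))
	if err != nil {
		panic(err)
	}
	defer term.Restore(int(os.Stdin.Fd()), oldState)

	fmt.Print(">>> ")
	buf := make([]byte, 1)
	var line []byte
	for {
		if _, err := os.Stdin.Read(buf); err != nil {
			return
		}
		switch buf[0] {
		case 3: // Ctrl-C
			fmt.Print("\r\n")
			return
		case '\r': // Enter arrives as carriage return in raw mode
			fmt.Printf("\r\nread: %q\r\n", line)
			return
		default:
			line = append(line, buf[0])
			fmt.Print(string(buf)) // echo, since raw mode disables it
		}
	}
}
```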
https://api.github.com/repos/ollama/ollama/issues/5528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5528/comments | https://api.github.com/repos/ollama/ollama/issues/5528/events | https://github.com/ollama/ollama/issues/5528 | 2,394,047,928 | I_kwDOJ0Z1Ps6OskW4 | 5,528 | Error Pulling Manifest MacOSX | {
"login": "Moonlight1220",
"id": 172665223,
"node_id": "U_kgDOCkqphw",
"avatar_url": "https://avatars.githubusercontent.com/u/172665223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moonlight1220",
"html_url": "https://github.com/Moonlight1220",
"followers_url": "https://api.github.com/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-07-07T13:04:52 | 2024-08-10T11:43:32 | 2024-07-09T14:38:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Issue #5499 Continued
This issue is associated with issue #5499; please see that issue for more context. After some testing on my Windows 11 Hyper-V machine, I can confidently say this bug is exclusive to macOS. Please let me know if you have any ideas on how I can get this up and runnin...
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5528/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/494 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/494/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/494/comments | https://api.github.com/repos/ollama/ollama/issues/494/events | https://github.com/ollama/ollama/pull/494 | 1,887,156,814 | PR_kwDOJ0Z1Ps5Z2YVS | 494 | Remove already applied patches | {
"login": "avri-schneider",
"id": 6785181,
"node_id": "MDQ6VXNlcjY3ODUxODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6785181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avri-schneider",
"html_url": "https://github.com/avri-schneider",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 4 | 2023-09-08T07:52:27 | 2023-09-09T17:36:24 | 2023-09-08T14:21:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/494",
"html_url": "https://github.com/ollama/ollama/pull/494",
"diff_url": "https://github.com/ollama/ollama/pull/494.diff",
"patch_url": "https://github.com/ollama/ollama/pull/494.patch",
"merged_at": null
} | null | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/494/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4644 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4644/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4644/comments | https://api.github.com/repos/ollama/ollama/issues/4644/events | https://github.com/ollama/ollama/issues/4644 | 2,317,415,905 | I_kwDOJ0Z1Ps6KIPXh | 4,644 | more types of models | {
"login": "zsq2010",
"id": 4374659,
"node_id": "MDQ6VXNlcjQzNzQ2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4374659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsq2010",
"html_url": "https://github.com/zsq2010",
"followers_url": "https://api.github.com/users/zsq2010/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-26T03:28:38 | 2024-07-25T23:24:15 | 2024-07-25T23:24:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could we have more types of models, like vision models, TTS, OCR, etc.? | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4644/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6018 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6018/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6018/comments | https://api.github.com/repos/ollama/ollama/issues/6018/events | https://github.com/ollama/ollama/issues/6018 | 2,433,585,663 | I_kwDOJ0Z1Ps6RDZH_ | 6,018 | max retries exceeded: unexpected EOF | {
"login": "davidsolal",
"id": 128038753,
"node_id": "U_kgDOB6G3YQ",
"avatar_url": "https://avatars.githubusercontent.com/u/128038753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidsolal",
"html_url": "https://github.com/davidsolal",
"followers_url": "https://api.github.com/users/dav... | [] | closed | false | null | [] | null | 5 | 2024-07-27T18:34:04 | 2024-09-04T04:19:49 | 2024-09-04T04:19:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | > The bug shown in this issue specifically is now fixed as we run models directly rather than in a subprocess. Although there are still on EOF errors. If anyone else sees an EOF please open a new issue so we can triage it appropriately.
_Originally posted by @BruceMacD in https://github.com/ollama/ollama/issues/1158#i... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6018/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2202 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2202/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2202/comments | https://api.github.com/repos/ollama/ollama/issues/2202/events | https://github.com/ollama/ollama/pull/2202 | 2,101,738,189 | PR_kwDOJ0Z1Ps5lIlwY | 2,202 | Add chat app | {
"login": "Yuan-ManX",
"id": 68322456,
"node_id": "MDQ6VXNlcjY4MzIyNDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuan-ManX",
"html_url": "https://github.com/Yuan-ManX",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2024-01-26T07:42:31 | 2024-02-20T02:08:50 | 2024-02-20T02:08:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2202",
"html_url": "https://github.com/ollama/ollama/pull/2202",
"diff_url": "https://github.com/ollama/ollama/pull/2202.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2202.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2202/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4992/comments | https://api.github.com/repos/ollama/ollama/issues/4992/events | https://github.com/ollama/ollama/issues/4992 | 2,347,757,142 | I_kwDOJ0Z1Ps6L7-5W | 4,992 | error pulling llama2 manifest | {
"login": "adityapandit1798",
"id": 50072336,
"node_id": "MDQ6VXNlcjUwMDcyMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/50072336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adityapandit1798",
"html_url": "https://github.com/adityapandit1798",
"followers_url": "https://... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2024-06-12T04:04:53 | 2024-06-12T04:05:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Can't pull manifests
ollama pull llama2:7b
pulling manifest
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/89/8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4992/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5645/comments | https://api.github.com/repos/ollama/ollama/issues/5645/events | https://github.com/ollama/ollama/pull/5645 | 2,404,789,175 | PR_kwDOJ0Z1Ps51Lc6e | 5,645 | Clean up old files when installing on Windows | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [] | closed | false | null | [] | null | 0 | 2024-07-12T05:32:49 | 2024-07-12T15:13:34 | 2024-07-12T05:53:46 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5645",
"html_url": "https://github.com/ollama/ollama/pull/5645",
"diff_url": "https://github.com/ollama/ollama/pull/5645.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5645.patch",
"merged_at": "2024-07-12T05:53:46"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5645/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2234 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2234/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2234/comments | https://api.github.com/repos/ollama/ollama/issues/2234/events | https://github.com/ollama/ollama/issues/2234 | 2,103,796,127 | I_kwDOJ0Z1Ps59ZWGf | 2,234 | :memo: Better description for `openchat-3.5-0106-laser` | {
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | [] | closed | false | null | [] | null | 4 | 2024-01-27T20:35:35 | 2024-01-27T20:55:53 | 2024-01-27T20:55:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | # :grey_question: About
In the following [tweet](https://twitter.com/ivanfioravanti/status/1751329888231915725),

the `openchat-3.5-0106-laser` model is known for having _Strong math capabilities without compromise!_...
"login": "adriens",
"id": 5235127,
"node_id": "MDQ6VXNlcjUyMzUxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adriens",
"html_url": "https://github.com/adriens",
"followers_url": "https://api.github.com/users/adriens/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2234/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2234/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8534/comments | https://api.github.com/repos/ollama/ollama/issues/8534/events | https://github.com/ollama/ollama/issues/8534 | 2,804,315,742 | I_kwDOJ0Z1Ps6nJnZe | 8,534 | Llama 3.1 sha256 mismatch | {
"login": "xihuai18",
"id": 23721828,
"node_id": "MDQ6VXNlcjIzNzIxODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/23721828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xihuai18",
"html_url": "https://github.com/xihuai18",
"followers_url": "https://api.github.com/users/xih... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-22T12:53:01 | 2025-01-22T20:11:00 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
<img width="909" alt="Image" src="https://github.com/user-attachments/assets/a8f79e64-2f9b-4a6f-b5cc-a1534c8479b5" />
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8534/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4330 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4330/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4330/comments | https://api.github.com/repos/ollama/ollama/issues/4330/events | https://github.com/ollama/ollama/pull/4330 | 2,290,515,250 | PR_kwDOJ0Z1Ps5vI_pV | 4,330 | cache and reuse intermediate blobs | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-05-10T23:17:44 | 2024-05-20T21:38:53 | 2024-05-20T20:54:42 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4330",
"html_url": "https://github.com/ollama/ollama/pull/4330",
"diff_url": "https://github.com/ollama/ollama/pull/4330.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4330.patch",
"merged_at": "2024-05-20T20:54:41"
} | particularly useful for zipfiles and f16s | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4330/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5609 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5609/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5609/comments | https://api.github.com/repos/ollama/ollama/issues/5609/events | https://github.com/ollama/ollama/pull/5609 | 2,401,398,711 | PR_kwDOJ0Z1Ps51AK3K | 5,609 | Return 405 for Unsupported Methods on Endpoints, 204 for Cross-Origin OPTIONS | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjha... | [] | open | false | null | [] | null | 1 | 2024-07-10T18:25:26 | 2024-08-12T18:41:54 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5609",
"html_url": "https://github.com/ollama/ollama/pull/5609",
"diff_url": "https://github.com/ollama/ollama/pull/5609.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5609.patch",
"merged_at": null
} | Resolves #5483
Previously `curl -X POST http://localhost:11434/api/ps` --> 404 (corrected to 405)
Resolves #5294
Previously, `curl -X OPTIONS http://localhost:11434/api/chat` --> 204
`curl -X OPTIONS http://127.0.0.1:11434/api/chat` --> 404 (corrected to 204) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5609/timeline | null | null | true |
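A hedged Gin sketch of the two behaviors this PR describes; ollama's HTTP server is Gin-based, but the routes and handlers below are illustrative only, not the PR's actual diff:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.New()

	// Answer 405 instead of 404 when a known route is hit with an
	// unsupported method (e.g. POST on a GET-only endpoint).
	r.HandleMethodNotAllowed = true

	r.POST("/api/chat", func(c *gin.Context) { c.Status(http.StatusOK) })

	// Cross-origin preflight gets an empty 204 response.
	r.OPTIONS("/api/chat", func(c *gin.Context) { c.Status(http.StatusNoContent) })

	r.Run("127.0.0.1:11434")
}
```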
https://api.github.com/repos/ollama/ollama/issues/5526 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5526/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5526/comments | https://api.github.com/repos/ollama/ollama/issues/5526/events | https://github.com/ollama/ollama/issues/5526 | 2,393,916,063 | I_kwDOJ0Z1Ps6OsEKf | 5,526 | Models Created from GGUF File Missing from api/models Endpoint (after some time) Despite Appearing in ollama list | {
"login": "chrisoutwright",
"id": 27736055,
"node_id": "MDQ6VXNlcjI3NzM2MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/27736055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisoutwright",
"html_url": "https://github.com/chrisoutwright",
"followers_url": "https://api.gi... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/... | null | 4 | 2024-07-07T06:34:45 | 2024-08-20T21:09:59 | 2024-08-20T21:09:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Issue with Getting Model over the Ollama api/models Endpoint after Creating Model in Ollama
#### Steps to Reproduce
1. Execute the following command to merge GGUF files:
Example:
```shell
C:\Users\Chris>D:\llama-b3066-bin-win-avx512-x64\gguf-split --merge C:\Users\Chris\Downloads\Q4_... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5526/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5526/timeline | null | not_planned | false |
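When a model shows up in `ollama list` but not over HTTP, comparing the CLI view with the two listing endpoints helps narrow down where the disagreement is; a sketch assuming a default local install:

```shell
ollama list                               # models the CLI sees
curl -s http://localhost:11434/api/tags   # native listing endpoint
curl -s http://localhost:11434/v1/models  # OpenAI-compatible listing
```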
https://api.github.com/repos/ollama/ollama/issues/4975 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4975/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4975/comments | https://api.github.com/repos/ollama/ollama/issues/4975/events | https://github.com/ollama/ollama/issues/4975 | 2,346,038,035 | I_kwDOJ0Z1Ps6L1bMT | 4,975 | Is RTX 4070 and not RTX 4070ti supported - ambigous documentation | {
"login": "thinkrapido",
"id": 1568087,
"node_id": "MDQ6VXNlcjE1NjgwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1568087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thinkrapido",
"html_url": "https://github.com/thinkrapido",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-06-11T10:22:49 | 2024-06-14T00:07:46 | 2024-06-14T00:07:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
my prompts to the ollama model codellama:34b-code-q6_K are taking very long to process.
And in the CPU monitor, many CPUs get involved when calculating an answer.
What am I doing wrong? Is it a bug, or do I have to bear with it?
I expect answers within about a second of delay.
The documentation... | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4975/timeline | null | completed | false |
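For slow generations like the one reported above, a usual first check is whether the model actually landed on the GPU; on recent builds this can be read off directly (a general diagnostic, not specific to this issue):

```shell
ollama ps    # the PROCESSOR column shows the CPU/GPU split for loaded models
nvidia-smi   # VRAM usage should rise while the model is generating
```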
https://api.github.com/repos/ollama/ollama/issues/30 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/30/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/30/comments | https://api.github.com/repos/ollama/ollama/issues/30/events | https://github.com/ollama/ollama/issues/30 | 1,783,169,821 | I_kwDOJ0Z1Ps5qSQMd | 30 | cli feedback for models already downloaded | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/us... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-06-30T21:09:28 | 2023-07-04T14:29:34 | 2023-07-04T14:29:34 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In the case of having the model downloaded already, it would just not output anything.
```
ollama pull huggingface.co/TheBloke/orca_mini_3B-GGML
``` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/Br... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/30/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/30/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1701 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1701/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1701/comments | https://api.github.com/repos/ollama/ollama/issues/1701/events | https://github.com/ollama/ollama/issues/1701 | 2,055,343,482 | I_kwDOJ0Z1Ps56gg16 | 1,701 | Create uninstall script | {
"login": "vtrenton",
"id": 85969349,
"node_id": "MDQ6VXNlcjg1OTY5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/85969349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vtrenton",
"html_url": "https://github.com/vtrenton",
"followers_url": "https://api.github.com/users/vtr... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2023-12-25T03:47:26 | 2024-09-06T04:53:48 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello, it would be nice to have an uninstall script to automate the uninstall process specified here: https://github.com/jmorganca/ollama/blob/main/docs/linux.md#uninstall adding a PR to this issue with something i made that I'd like to contribute.
Happy Holidays! :) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1701/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1701/timeline | null | null | false |
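A minimal sketch of such a script, mirroring the manual steps in the linked docs/linux.md section (paths and the service name may differ per install):

```shell
#!/bin/sh
# Stop and remove the systemd unit, then the binary, data, and service account.
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service
sudo rm "$(which ollama)"
sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama
```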
https://api.github.com/repos/ollama/ollama/issues/5380 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5380/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5380/comments | https://api.github.com/repos/ollama/ollama/issues/5380/events | https://github.com/ollama/ollama/issues/5380 | 2,381,547,776 | I_kwDOJ0Z1Ps6N84kA | 5,380 | Ollama Run provides numerical choice to run one of models from list | {
"login": "rayking99",
"id": 85595170,
"node_id": "MDQ6VXNlcjg1NTk1MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/85595170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayking99",
"html_url": "https://github.com/rayking99",
"followers_url": "https://api.github.com/users/... | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-06-29T07:13:48 | 2024-06-29T23:17:02 | 2024-06-29T23:15:25 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I think it would be cool if `ollama run` without any extra arguments showed the models from `ollama list`, but with a number next to them.
I.e., `ollama run` ->
```sh
TYPE NUMBER OF MODEL TO RUN
[0] gemma2:27b-instruct-q8_0
[1] qwen2:0.5b
[2] mistral:7b-instruct-v0.3-q8_0
[3] gemma:2b-instruct
[4] phi3:3.8b-... | {
"login": "rayking99",
"id": 85595170,
"node_id": "MDQ6VXNlcjg1NTk1MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/85595170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayking99",
"html_url": "https://github.com/rayking99",
"followers_url": "https://api.github.com/users/... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5380/timeline | null | completed | false |
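Until something like this lands in the CLI, a small wrapper can approximate the requested numbered menu; a sketch using bash's built-in `select` (assumes `ollama list` prints a header row followed by one model name per line in the first column):

```shell
#!/usr/bin/env bash
# Numbered picker over locally installed models.
mapfile -t models < <(ollama list | tail -n +2 | awk '{print $1}')
select m in "${models[@]}"; do
  # Re-prompts on invalid input; replaces the shell with the chosen model run.
  [ -n "$m" ] && exec ollama run "$m"
done
```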
https://api.github.com/repos/ollama/ollama/issues/7520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7520/comments | https://api.github.com/repos/ollama/ollama/issues/7520/events | https://github.com/ollama/ollama/issues/7520 | 2,637,042,503 | I_kwDOJ0Z1Ps6dLhNH | 7,520 | Build instructions in https://github.com/ollama/ollama/blob/main/llama/README.md are outdated or non-functional | {
"login": "yeahdongcn",
"id": 2831050,
"node_id": "MDQ6VXNlcjI4MzEwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeahdongcn",
"html_url": "https://github.com/yeahdongcn",
"followers_url": "https://api.github.com/users... | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 7700262114,
... | open | false | null | [] | null | 2 | 2024-11-06T04:58:03 | 2024-11-17T14:08:53 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Following the build instructions in [README.md](https://github.com/ollama/ollama/blob/main/llama/README.md#cuda) for Linux/CUDA results in an error when running `make ggml_cuda.so`. The error is:
```bash
make: *** No rule to make target 'ggml_cuda.so'. Stop.
```
Could you please confirm... | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7520/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7520/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1916 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1916/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1916/comments | https://api.github.com/repos/ollama/ollama/issues/1916/events | https://github.com/ollama/ollama/pull/1916 | 2,075,490,529 | PR_kwDOJ0Z1Ps5jvqSp | 1,916 | download: add inactivity monitor | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 0 | 2024-01-11T00:55:08 | 2024-01-26T18:56:01 | 2024-01-26T18:56:01 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1916",
"html_url": "https://github.com/ollama/ollama/pull/1916",
"diff_url": "https://github.com/ollama/ollama/pull/1916.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1916.patch",
"merged_at": "2024-01-26T18:56:01"
} | If a download part is inactive for some time, restart it. From profiling, it's possible for one or more of the download parts to stall and receive no content from the storage backend for many consecutive seconds.
This generally causes the download to slow to a rate of near zero at the end as other, faster parts com... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1916/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1916/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/653 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/653/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/653/comments | https://api.github.com/repos/ollama/ollama/issues/653/events | https://github.com/ollama/ollama/pull/653 | 1,920,087,518 | PR_kwDOJ0Z1Ps5blIh2 | 653 | pythonic python client | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | [] | closed | false | null | [] | null | 2 | 2023-09-30T03:43:51 | 2024-01-11T23:52:57 | 2024-01-11T23:52:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/653",
"html_url": "https://github.com/ollama/ollama/pull/653",
"diff_url": "https://github.com/ollama/ollama/pull/653.diff",
"patch_url": "https://github.com/ollama/ollama/pull/653.patch",
"merged_at": null
} | new features:
- chat
```python
client.chat('name', messages=[
{
'role': 'system',
'content': 'you are a good bot',
},
])
```
- create with a string input instead of a file
```python
client.create('name', modelfile='''
FROM llama2
PARAMETER stop </s>
''')
```
key differences:
- errors are... | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/653/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5786 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5786/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5786/comments | https://api.github.com/repos/ollama/ollama/issues/5786/events | https://github.com/ollama/ollama/issues/5786 | 2,417,988,954 | I_kwDOJ0Z1Ps6QH5Va | 5,786 | Request to add support for InternVL-2 model | {
"login": "CNEA-lw",
"id": 164863967,
"node_id": "U_kgDOCdOf3w",
"avatar_url": "https://avatars.githubusercontent.com/u/164863967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CNEA-lw",
"html_url": "https://github.com/CNEA-lw",
"followers_url": "https://api.github.com/users/CNEA-lw/foll... | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 7 | 2024-07-19T05:49:22 | 2025-01-28T13:43:47 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It is hoped that the ollama platform can add the model InternVL-2 series. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5786/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5786/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6514 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6514/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6514/comments | https://api.github.com/repos/ollama/ollama/issues/6514/events | https://github.com/ollama/ollama/pull/6514 | 2,486,543,854 | PR_kwDOJ0Z1Ps55bD_f | 6,514 | Implicit openai model parameter multiplication disabled | {
"login": "yaroslavyaroslav",
"id": 16612247,
"node_id": "MDQ6VXNlcjE2NjEyMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/16612247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaroslavyaroslav",
"html_url": "https://github.com/yaroslavyaroslav",
"followers_url": "https://... | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.g... | null | 6 | 2024-08-26T10:40:00 | 2024-10-29T11:48:13 | 2024-09-07T00:45:45 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6514",
"html_url": "https://github.com/ollama/ollama/pull/6514",
"diff_url": "https://github.com/ollama/ollama/pull/6514.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6514.patch",
"merged_at": "2024-09-07T00:45:45"
The current state of the openai.go setup causes an absolutely valid OpenAI config to be broken. This happens because of the implicit doubling of the config numbers performed in it.
I see the idea of making the OpenAI API endpoint compatible with the native ollama endpoint, but I think it was done wrong, as again, it leads to completely valid O... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6514/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8011 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8011/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8011/comments | https://api.github.com/repos/ollama/ollama/issues/8011/events | https://github.com/ollama/ollama/issues/8011 | 2,726,681,192 | I_kwDOJ0Z1Ps6ihdpo | 8,011 | Underflow error when using GPU memory overhead | {
"login": "ProjectMoon",
"id": 183856,
"node_id": "MDQ6VXNlcjE4Mzg1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProjectMoon",
"html_url": "https://github.com/ProjectMoon",
"followers_url": "https://api.github.com/user... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-12-09T10:51:38 | 2024-12-10T17:10:41 | 2024-12-10T17:10:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
GPUs:
- AMD RX 6800 XT (16 GB VRAM)
- NVidia GTX 970 (4 GB VRAM)
I have discovered a very odd and very dangerous problem in ollama. I am running OpenWebUI on a machine that has a ROCm device (main GPU; 16 GB VRAM) and a CUDA device (ancient old NVidia GPU). The NVidia GPU has 4 GB of VRAM... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8011/timeline | null | completed | false |
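For context, the overhead setting exercised in this report is an environment variable that reserves extra VRAM per GPU; the value below is illustrative only:

```shell
# OLLAMA_GPU_OVERHEAD is specified in bytes and applies per GPU.
OLLAMA_GPU_OVERHEAD=1073741824 ollama serve   # reserve ~1 GiB on each GPU
```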
https://api.github.com/repos/ollama/ollama/issues/6048 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6048/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6048/comments | https://api.github.com/repos/ollama/ollama/issues/6048/events | https://github.com/ollama/ollama/issues/6048 | 2,435,411,779 | I_kwDOJ0Z1Ps6RKW9D | 6,048 | I can't run llama3.1 | {
"login": "Saber120",
"id": 108297159,
"node_id": "U_kgDOBnR7xw",
"avatar_url": "https://avatars.githubusercontent.com/u/108297159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saber120",
"html_url": "https://github.com/Saber120",
"followers_url": "https://api.github.com/users/Saber120/... | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 8 | 2024-07-29T13:18:15 | 2024-07-31T08:27:13 | 2024-07-30T16:29:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I trained my own model from llama version 3.1 8b and created the Modelfile for it successfully, but when I start running it, it does not work and I get this error (a typical GGUF creation flow is sketched after this record):
`
ollama run mymodle:latest
Error: llama runner process has terminated: error loading model: done_getting_tensors: wrong number... | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmor... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6048/timeline | null | completed | false |
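For reference, a typical flow for creating a model from a local GGUF file looks like this (the filename and model name are illustrative; the reporter's actual Modelfile is not shown in the issue):

```shell
# Point a Modelfile at the local GGUF, then create and run the model.
cat > Modelfile <<'EOF'
FROM ./my-finetuned-llama3.1-8b.gguf
EOF
ollama create mymodel -f Modelfile
ollama run mymodel
```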
https://api.github.com/repos/ollama/ollama/issues/5410 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5410/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5410/comments | https://api.github.com/repos/ollama/ollama/issues/5410/events | https://github.com/ollama/ollama/pull/5410 | 2,384,283,202 | PR_kwDOJ0Z1Ps50GC8j | 5,410 | Fix case for NumCtx | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-07-01T16:44:37 | 2024-07-01T16:54:23 | 2024-07-01T16:54:21 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5410",
"html_url": "https://github.com/ollama/ollama/pull/5410",
"diff_url": "https://github.com/ollama/ollama/pull/5410.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5410.patch",
"merged_at": "2024-07-01T16:54:21"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5410/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6122/comments | https://api.github.com/repos/ollama/ollama/issues/6122/events | https://github.com/ollama/ollama/pull/6122 | 2,442,737,119 | PR_kwDOJ0Z1Ps53ItZE | 6,122 | llama: Implement timings response in Go server | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | [] | closed | false | null | [] | null | 0 | 2024-08-01T15:16:03 | 2024-08-01T22:52:08 | 2024-08-01T22:52:06 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6122",
"html_url": "https://github.com/ollama/ollama/pull/6122",
"diff_url": "https://github.com/ollama/ollama/pull/6122.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6122.patch",
"merged_at": null
} | This implements the fields necessary for `run --verbose` to generate timing information.
(Examples from my [other branch wiring this into the main ollama serve](https://github.com/ollama/ollama/pull/5287))
C++ runner:
```
% ollama run orca-mini --verbose "what is the origin of independence day?"
Independence Da... | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhilt... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6122/timeline | null | null | true |
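The timings surfaced by `--verbose` correspond to fields in the native API's final response; a sketch of pulling them out of a non-streaming request (field names are from the public /api/generate schema):

```shell
# Request a full (non-streamed) response, then extract the timing fields.
curl -s http://localhost:11434/api/generate -d '{
  "model": "orca-mini",
  "prompt": "what is the origin of independence day?",
  "stream": false
}' | grep -oE '"(total_duration|load_duration|prompt_eval_count|prompt_eval_duration|eval_count|eval_duration)":[0-9]+'
```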
https://api.github.com/repos/ollama/ollama/issues/3106 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3106/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3106/comments | https://api.github.com/repos/ollama/ollama/issues/3106/events | https://github.com/ollama/ollama/issues/3106 | 2,184,204,424 | I_kwDOJ0Z1Ps6CMFCI | 3,106 | Ollama ls not included in the -h/--help flags | {
"login": "aosan",
"id": 8534160,
"node_id": "MDQ6VXNlcjg1MzQxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8534160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aosan",
"html_url": "https://github.com/aosan",
"followers_url": "https://api.github.com/users/aosan/follower... | [] | closed | false | null | [] | null | 3 | 2024-03-13T14:49:37 | 2024-03-15T01:46:58 | 2024-03-14T22:23:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | While working on a [Bash completion PR](https://github.com/ollama/ollama/pull/3105), I noticed the absence of ls from the arguments list for -h/--help and ollama listing.

| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/follower... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3106/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1123 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1123/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1123/comments | https://api.github.com/repos/ollama/ollama/issues/1123/events | https://github.com/ollama/ollama/issues/1123 | 1,992,568,966 | I_kwDOJ0Z1Ps52xDCG | 1,123 | wizard-math:7b terminator not recognized | {
"login": "Detlev1",
"id": 71934197,
"node_id": "MDQ6VXNlcjcxOTM0MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71934197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Detlev1",
"html_url": "https://github.com/Detlev1",
"followers_url": "https://api.github.com/users/Detlev... | [] | closed | false | null | [] | null | 4 | 2023-11-14T11:38:17 | 2023-12-26T21:28:29 | 2023-12-26T21:28:29 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I'm using the latest version of Ollama. When using the wizard-math:7b model, the requests don't complete, and instead, a “</s>” terminator is returned. Can I configure the terminator myself to fix this, or is there a way to terminate the request through the API? | {
"login": "Detlev1",
"id": 71934197,
"node_id": "MDQ6VXNlcjcxOTM0MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/71934197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Detlev1",
"html_url": "https://github.com/Detlev1",
"followers_url": "https://api.github.com/users/Detlev... | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1123/timeline | null | completed | false |
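The stop sequence is configurable per model, so a workaround along these lines should be possible; a sketch (the derived model name here is made up):

```shell
# Derive a model that treats </s> as a stop sequence.
cat > Modelfile <<'EOF'
FROM wizard-math:7b
PARAMETER stop "</s>"
EOF
ollama create wizard-math-stop -f Modelfile
```

The same stop value can also be passed per request via the API's `options.stop` field.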