Schema (field: dtype, observed range):
url: string, length 51–54
repository_url: string, 1 value
labels_url: string, length 65–68
comments_url: string, length 60–63
events_url: string, length 58–61
html_url: string, length 39–44
id: int64, 1.78B–2.82B
node_id: string, length 18–19
number: int64, 1–8.69k
title: string, length 1–382
user: dict
labels: list, length 0–5
state: string, 2 values
locked: bool, 1 class
assignee: dict
assignees: list, length 0–2
milestone: null
comments: int64, 0–323
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string, 4 values
sub_issues_summary: dict
active_lock_reason: null
draft: bool, 2 classes
pull_request: dict
body: string, length 2–118k
closed_by: dict
reactions: dict
timeline_url: string, length 60–63
performed_via_github_app: null
state_reason: string, 4 values
is_pull_request: bool, 2 classes
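The schema above can be exercised with plain Python. A minimal sketch, assuming each row is a dict keyed by the field names listed above; the helper name `is_merged_pr` and the three sample rows are hypothetical, with values modeled on the rows below:

```python
# Sketch: treating each dataset row as a plain dict keyed by the schema's
# field names, and filtering for merged pull requests.

def is_merged_pr(row):
    """A row is a merged PR when is_pull_request is true and its
    pull_request dict carries a non-null merged_at timestamp."""
    pr = row.get("pull_request")
    return bool(row.get("is_pull_request")) and pr is not None \
        and pr.get("merged_at") is not None

# Hypothetical sample rows, shaped like the records below.
rows = [
    {"number": 6789, "is_pull_request": True, "state": "closed",
     "pull_request": {"merged_at": "2024-09-15T03:52:37"}},
    {"number": 1138, "is_pull_request": True, "state": "open",
     "pull_request": {"merged_at": None}},          # open, unmerged PR
    {"number": 2445, "is_pull_request": False, "state": "closed",
     "pull_request": None},                          # plain issue
]

merged = [r["number"] for r in rows if is_merged_pr(r)]
print(merged)  # → [6789]
```

Note that `state == "closed"` alone does not distinguish merged from rejected PRs; only `pull_request.merged_at` does.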
https://api.github.com/repos/ollama/ollama/issues/6789
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6789/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6789/comments
https://api.github.com/repos/ollama/ollama/issues/6789/events
https://github.com/ollama/ollama/pull/6789
2,524,124,073
PR_kwDOJ0Z1Ps57ZZmI
6,789
readme: add Obsidian Quiz Generator plugin to community integrations
{ "login": "ECuiDev", "id": 37892357, "node_id": "MDQ6VXNlcjM3ODkyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/37892357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ECuiDev", "html_url": "https://github.com/ECuiDev", "followers_url": "https://api.github.com/users/ECuiDe...
[]
closed
false
null
[]
null
0
2024-09-13T07:48:01
2024-09-15T03:52:37
2024-09-15T03:52:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6789", "html_url": "https://github.com/ollama/ollama/pull/6789", "diff_url": "https://github.com/ollama/ollama/pull/6789.diff", "patch_url": "https://github.com/ollama/ollama/pull/6789.patch", "merged_at": "2024-09-15T03:52:37" }
**Plugin Demo** https://github.com/user-attachments/assets/24e57fcf-2cbf-4797-a161-4c4a05e518bf
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6789/timeline
null
null
true
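The created_at, updated_at, and closed_at fields are second-resolution timestamps (timestamp[s]). A quick sketch of computing time-to-close, assuming ISO-8601 formatting; the two values are copied from the PR #6789 row above:

```python
from datetime import datetime

# Sketch: time-to-close from the created_at / closed_at fields.
# Format string assumes ISO-8601 at second resolution, matching the
# timestamp[s] dtype in the schema.
FMT = "%Y-%m-%dT%H:%M:%S"

created = datetime.strptime("2024-09-13T07:48:01", FMT)
closed = datetime.strptime("2024-09-15T03:52:37", FMT)

age = closed - created  # a timedelta
print(age)  # → 1 day, 20:04:36
```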
https://api.github.com/repos/ollama/ollama/issues/2445
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2445/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2445/comments
https://api.github.com/repos/ollama/ollama/issues/2445/events
https://github.com/ollama/ollama/issues/2445
2,128,848,372
I_kwDOJ0Z1Ps5-46X0
2,445
Ollama stuck on "CUDA Compute Capability detected: 7.5"
{ "login": "Rhimzy", "id": 88019073, "node_id": "MDQ6VXNlcjg4MDE5MDcz", "avatar_url": "https://avatars.githubusercontent.com/u/88019073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rhimzy", "html_url": "https://github.com/Rhimzy", "followers_url": "https://api.github.com/users/Rhimzy/fo...
[]
closed
false
null
[]
null
2
2024-02-11T05:37:55
2024-02-20T07:49:09
2024-02-20T07:48:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Windows 11, Ubuntu WSL. Logs: ``` > OLLAMA_HOST=127.0.0.1:11435 ollama serve time=2024-02-11T11:04:49.410+05:30 level=INFO source=images.go:863 msg="total blobs: 0" time=2024-02-11T11:04:49.410+05:30 level=INFO source=images.go:870 msg="total unused blobs removed: 0" time=2024-02-11T11:04:49.410+05:30 level=INF...
{ "login": "Rhimzy", "id": 88019073, "node_id": "MDQ6VXNlcjg4MDE5MDcz", "avatar_url": "https://avatars.githubusercontent.com/u/88019073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rhimzy", "html_url": "https://github.com/Rhimzy", "followers_url": "https://api.github.com/users/Rhimzy/fo...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2445/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2821
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2821/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2821/comments
https://api.github.com/repos/ollama/ollama/issues/2821/events
https://github.com/ollama/ollama/issues/2821
2,160,166,301
I_kwDOJ0Z1Ps6AwYWd
2,821
Can we have the newest 1-bit model
{ "login": "chuangtc", "id": 2288469, "node_id": "MDQ6VXNlcjIyODg0Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/2288469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chuangtc", "html_url": "https://github.com/chuangtc", "followers_url": "https://api.github.com/users/chuan...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
16
2024-02-29T01:24:50
2025-01-14T02:04:26
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits https://thegenerality.com/agi/ https://arxiv.org/abs/2402.17764
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2821/reactions", "total_count": 29, "+1": 27, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/2821/timeline
null
null
false
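Each reactions value is a dict of per-emoji counts plus a stored total_count. A small sketch checking that the emoji counts sum to the stored total, with counts copied from the issue #2821 row above (the key list is taken from this data, not from any official API constant):

```python
# Sketch: totalling emoji reactions from a reactions dict, values copied
# from the issue #2821 row above.
reactions = {
    "url": "https://api.github.com/repos/ollama/ollama/issues/2821/reactions",
    "total_count": 29, "+1": 27, "-1": 0, "laugh": 0, "hooray": 0,
    "confused": 0, "heart": 0, "rocket": 0, "eyes": 2,
}

# Sum every per-emoji key and compare against the stored total.
emoji_keys = ["+1", "-1", "laugh", "hooray", "confused",
              "heart", "rocket", "eyes"]
computed = sum(reactions[k] for k in emoji_keys)
print(computed == reactions["total_count"])  # → True
```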
https://api.github.com/repos/ollama/ollama/issues/1138
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1138/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1138/comments
https://api.github.com/repos/ollama/ollama/issues/1138/events
https://github.com/ollama/ollama/pull/1138
1,995,005,325
PR_kwDOJ0Z1Ps5fh_2M
1,138
Add error handling for get_summary function in the newssummary example.
{ "login": "Amosel", "id": 61532, "node_id": "MDQ6VXNlcjYxNTMy", "avatar_url": "https://avatars.githubusercontent.com/u/61532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Amosel", "html_url": "https://github.com/Amosel", "followers_url": "https://api.github.com/users/Amosel/followers", ...
[]
open
false
null
[]
null
0
2023-11-15T15:28:22
2023-11-24T18:08:21
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1138", "html_url": "https://github.com/ollama/ollama/pull/1138", "diff_url": "https://github.com/ollama/ollama/pull/1138.diff", "patch_url": "https://github.com/ollama/ollama/pull/1138.patch", "merged_at": null }
Calling `get_summary` fails when the hard-coded model `mistral-openorca` is not installed. Handling the error saves people from having to figure out why.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1138/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4953
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4953/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4953/comments
https://api.github.com/repos/ollama/ollama/issues/4953/events
https://github.com/ollama/ollama/pull/4953
2,342,275,701
PR_kwDOJ0Z1Ps5x5Dt7
4,953
refactor: modify dockerignore
{ "login": "Gabrielfernandes7", "id": 78227127, "node_id": "MDQ6VXNlcjc4MjI3MTI3", "avatar_url": "https://avatars.githubusercontent.com/u/78227127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gabrielfernandes7", "html_url": "https://github.com/Gabrielfernandes7", "followers_url": "https...
[]
closed
false
null
[]
null
0
2024-06-09T14:04:30
2024-06-09T15:45:30
2024-06-09T15:45:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4953", "html_url": "https://github.com/ollama/ollama/pull/4953", "diff_url": "https://github.com/ollama/ollama/pull/4953.diff", "patch_url": "https://github.com/ollama/ollama/pull/4953.patch", "merged_at": null }
This PR modifies the `.dockerignore` file to optimize the Docker image build. References issue #4952
{ "login": "Gabrielfernandes7", "id": 78227127, "node_id": "MDQ6VXNlcjc4MjI3MTI3", "avatar_url": "https://avatars.githubusercontent.com/u/78227127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gabrielfernandes7", "html_url": "https://github.com/Gabrielfernandes7", "followers_url": "https...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4953/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4780
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4780/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4780/comments
https://api.github.com/repos/ollama/ollama/issues/4780/events
https://github.com/ollama/ollama/issues/4780
2,329,548,993
I_kwDOJ0Z1Ps6K2hjB
4,780
Radeon VII gfx906:sramecc-:xnack- windows support
{ "login": "MrSteelRat", "id": 31157848, "node_id": "MDQ6VXNlcjMxMTU3ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/31157848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrSteelRat", "html_url": "https://github.com/MrSteelRat", "followers_url": "https://api.github.com/use...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5860134234, "node_id": ...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
10
2024-06-02T09:08:08
2024-08-03T10:52:03
2024-07-22T16:50:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, please add support for this GPU (Radeon VII) to the regular version; I can help with testing if necessary
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4780/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4780/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7525
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7525/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7525/comments
https://api.github.com/repos/ollama/ollama/issues/7525/events
https://github.com/ollama/ollama/issues/7525
2,637,669,131
I_kwDOJ0Z1Ps6dN6ML
7,525
Vector store question-answering issue
{ "login": "NXL333", "id": 62203971, "node_id": "MDQ6VXNlcjYyMjAzOTcx", "avatar_url": "https://avatars.githubusercontent.com/u/62203971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NXL333", "html_url": "https://github.com/NXL333", "followers_url": "https://api.github.com/users/NXL333/fo...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
4
2024-11-06T10:31:39
2024-11-17T14:07:43
2024-11-17T14:07:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? 1: Smart land border port large model. I. Three major application scenario requirements: (1) Overall port situation analysis: 1. Analysis of the port's four flows (people, logistics, information, commodities): e.g. personnel movement (cross-border travel, analysis and statistics of illegal entry/exit), items carried by travelers (to spot possible risks and opportunities), integration of port information; 2. Trade analysis: using yearbooks plus Ministry of Commerce documents and data, analyze changes in port trade volume, trade commodity structure (raw materials, processing, general trade, etc.), product imports/exports, and country of origin; 3. Geopolitics: the five Central Asian countries and ASEAN; 4. Policy early warning and government decision support (status of bilateral agreements, tariff policy, how policies are agreed; which countries have signed tax-exemption agreements on the Ministry of Commerce website, so enterprises can query automatically); 5. Inspection and quarantine policies and procedures...
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7525/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/170
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/170/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/170/comments
https://api.github.com/repos/ollama/ollama/issues/170/events
https://github.com/ollama/ollama/issues/170
1,816,537,190
I_kwDOJ0Z1Ps5sRihm
170
How to fix `Error: stream: digest mismatch`
{ "login": "dtgriscom", "id": 842958, "node_id": "MDQ6VXNlcjg0Mjk1OA==", "avatar_url": "https://avatars.githubusercontent.com/u/842958?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dtgriscom", "html_url": "https://github.com/dtgriscom", "followers_url": "https://api.github.com/users/dtgr...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
6
2023-07-22T02:04:21
2023-07-24T20:52:41
2023-07-24T20:52:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was downloading `llama2:13b`, and for some reason the download went wrong. Now, when I try to run it, I get an error: ``` MacBook-Pro-2:~ griscom$ ollama run llama2:13b pulling manifest pulling f79142715bc9... 100% |█████████████████████████████████████████████████| (7.3/7.3 GB, 3.5 TB/s) pulling 2cc93...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/170/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/ollama/ollama/issues/170/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2316
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2316/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2316/comments
https://api.github.com/repos/ollama/ollama/issues/2316/events
https://github.com/ollama/ollama/pull/2316
2,113,887,631
PR_kwDOJ0Z1Ps5lxott
2,316
Clear previous images when submitting a new image to `ollama run`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
1
2024-02-02T02:11:43
2024-02-02T05:30:26
2024-02-02T05:30:26
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2316", "html_url": "https://github.com/ollama/ollama/pull/2316", "diff_url": "https://github.com/ollama/ollama/pull/2316.diff", "patch_url": "https://github.com/ollama/ollama/pull/2316.patch", "merged_at": "2024-02-02T05:30:26" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2316/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6866
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6866/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6866/comments
https://api.github.com/repos/ollama/ollama/issues/6866/events
https://github.com/ollama/ollama/issues/6866
2,535,151,715
I_kwDOJ0Z1Ps6XG1hj
6,866
High CPU load with Jetson Orin NX
{ "login": "s0301132", "id": 47412725, "node_id": "MDQ6VXNlcjQ3NDEyNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/47412725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s0301132", "html_url": "https://github.com/s0301132", "followers_url": "https://api.github.com/users/s03...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-09-19T02:43:22
2024-09-25T20:51:16
2024-09-25T20:51:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Using the arm64 build package, I can run it successfully. However, when the LLM is answering a question the CPU load is 100% but the GPU is nearly 0% in `jtop`. Is this normal, or can the arm64 build not use the GPU by default? ![Screenshot from 2024-09-18 19-20-47](https://github.com/user-attachments/assets/3...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6866/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4187
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4187/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4187/comments
https://api.github.com/repos/ollama/ollama/issues/4187/events
https://github.com/ollama/ollama/pull/4187
2,279,804,557
PR_kwDOJ0Z1Ps5ulIyg
4,187
Fix rare nil pointer dereference when model unloads
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-05-06T00:04:36
2024-05-06T00:18:27
2024-05-06T00:18:27
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4187", "html_url": "https://github.com/ollama/ollama/pull/4187", "diff_url": "https://github.com/ollama/ollama/pull/4187.diff", "patch_url": "https://github.com/ollama/ollama/pull/4187.patch", "merged_at": "2024-05-06T00:18:27" }
While testing concurrency I noticed a segfault happen occasionally when loading, canceling, and loading the same model repeatedly over and over again with a script like this: ``` #!/bin/bash # Command to run COMMAND="ollama run llama3 hello" # Number of times to run the command concurrently N=100 # Runni...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4187/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1399
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1399/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1399/comments
https://api.github.com/repos/ollama/ollama/issues/1399/events
https://github.com/ollama/ollama/pull/1399
2,028,709,419
PR_kwDOJ0Z1Ps5hUCmG
1,399
List "Send chat messages" in table of contents
{ "login": "calderonsamuel", "id": 19418298, "node_id": "MDQ6VXNlcjE5NDE4Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/19418298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calderonsamuel", "html_url": "https://github.com/calderonsamuel", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
0
2023-12-06T14:40:12
2023-12-06T20:34:27
2023-12-06T20:34:27
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1399", "html_url": "https://github.com/ollama/ollama/pull/1399", "diff_url": "https://github.com/ollama/ollama/pull/1399.diff", "patch_url": "https://github.com/ollama/ollama/pull/1399.patch", "merged_at": "2023-12-06T20:34:27" }
This PR just adds a line in "docs/api.md" to list the new endpoint in the TOC
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1399/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4019
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4019/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4019/comments
https://api.github.com/repos/ollama/ollama/issues/4019/events
https://github.com/ollama/ollama/pull/4019
2,268,060,797
PR_kwDOJ0Z1Ps5t9gXg
4,019
Fix copying model to itself
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-04-29T03:45:26
2024-04-29T03:47:50
2024-04-29T03:47:49
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4019", "html_url": "https://github.com/ollama/ollama/pull/4019", "diff_url": "https://github.com/ollama/ollama/pull/4019.diff", "patch_url": "https://github.com/ollama/ollama/pull/4019.patch", "merged_at": "2024-04-29T03:47:49" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4019/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2255
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2255/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2255/comments
https://api.github.com/repos/ollama/ollama/issues/2255/events
https://github.com/ollama/ollama/issues/2255
2,105,597,445
I_kwDOJ0Z1Ps59gN4F
2,255
Output truncated in the extension
{ "login": "pums974", "id": 1005109, "node_id": "MDQ6VXNlcjEwMDUxMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1005109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pums974", "html_url": "https://github.com/pums974", "followers_url": "https://api.github.com/users/pums974/...
[]
closed
false
null
[]
null
1
2024-01-29T14:14:34
2024-01-31T07:59:12
2024-01-31T07:59:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
While the model (codellama:7b) answered (badly) my prompt, and the transcript shows the entirety of its answer (see below), the interface shows almost no output. Might this be caused by a block of code in a language not supported by the markdown interpreter? ![image](https://github.com/ollama/ollama/assets/1005109/89...
{ "login": "pums974", "id": 1005109, "node_id": "MDQ6VXNlcjEwMDUxMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1005109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pums974", "html_url": "https://github.com/pums974", "followers_url": "https://api.github.com/users/pums974/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2255/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3090
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3090/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3090/comments
https://api.github.com/repos/ollama/ollama/issues/3090/events
https://github.com/ollama/ollama/issues/3090
2,182,976,943
I_kwDOJ0Z1Ps6CHZWv
3,090
How can I modify the model's existence duration on the GPU?
{ "login": "papandadj", "id": 25424898, "node_id": "MDQ6VXNlcjI1NDI0ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/papandadj", "html_url": "https://github.com/papandadj", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
1
2024-03-13T02:07:15
2024-03-13T03:30:46
2024-03-13T03:30:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Recently, I used Ollama to build my application. When I run a model, it automatically loads onto my GPU. However, after a few minutes, the model seems to be unloaded. How can I force the model to always remain loaded on the GPU?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3090/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1845
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1845/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1845/comments
https://api.github.com/repos/ollama/ollama/issues/1845/events
https://github.com/ollama/ollama/issues/1845
2,069,237,273
I_kwDOJ0Z1Ps57Vg4Z
1,845
Ollama from remote
{ "login": "HAL9KKK", "id": 63504776, "node_id": "MDQ6VXNlcjYzNTA0Nzc2", "avatar_url": "https://avatars.githubusercontent.com/u/63504776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HAL9KKK", "html_url": "https://github.com/HAL9KKK", "followers_url": "https://api.github.com/users/HAL9KK...
[]
closed
false
null
[]
null
5
2024-01-07T18:22:32
2024-05-16T13:17:20
2024-01-08T19:14:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Ollama always uses localhost. I have 2 Colab instances: **Colab1 (server)** ``` # Set LD_LIBRARY_PATH so the system NVIDIA library import os import asyncio os.environ.update({'LD_LIBRARY_PATH': '/usr/lib64-nvidia'}) async def run_process(cmd): print('>>> starting', *cmd) p = await asyncio.subproce...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1845/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6788
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6788/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6788/comments
https://api.github.com/repos/ollama/ollama/issues/6788/events
https://github.com/ollama/ollama/pull/6788
2,523,991,633
PR_kwDOJ0Z1Ps57Y8ue
6,788
add Agents-Flex Libraries in README.md
{ "login": "yangfuhai", "id": 1539806, "node_id": "MDQ6VXNlcjE1Mzk4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1539806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangfuhai", "html_url": "https://github.com/yangfuhai", "followers_url": "https://api.github.com/users/ya...
[]
closed
false
null
[]
null
0
2024-09-13T06:37:08
2024-09-16T20:42:53
2024-09-16T20:42:52
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6788", "html_url": "https://github.com/ollama/ollama/pull/6788", "diff_url": "https://github.com/ollama/ollama/pull/6788.diff", "patch_url": "https://github.com/ollama/ollama/pull/6788.patch", "merged_at": "2024-09-16T20:42:52" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6788/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5456
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5456/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5456/comments
https://api.github.com/repos/ollama/ollama/issues/5456/events
https://github.com/ollama/ollama/issues/5456
2,388,213,427
I_kwDOJ0Z1Ps6OWT6z
5,456
ollama push suddenly not working / giving not authorized error
{ "login": "ashokgit", "id": 3615537, "node_id": "MDQ6VXNlcjM2MTU1Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/3615537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashokgit", "html_url": "https://github.com/ashokgit", "followers_url": "https://api.github.com/users/ashok...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-07-03T10:02:07
2024-07-03T13:49:23
2024-07-03T13:49:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I get the following error on push ollama push myuseer/model-name retrieving manifest pushing 663944096011... 100% ▕█████████████████████████████▏ 667 MB pushing c3be5dc5651b... 100% ▕█████████████████████████████▏ 54 B pushing ...
{ "login": "ashokgit", "id": 3615537, "node_id": "MDQ6VXNlcjM2MTU1Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/3615537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashokgit", "html_url": "https://github.com/ashokgit", "followers_url": "https://api.github.com/users/ashok...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5456/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1956
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1956/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1956/comments
https://api.github.com/repos/ollama/ollama/issues/1956/events
https://github.com/ollama/ollama/issues/1956
2,079,269,605
I_kwDOJ0Z1Ps577yLl
1,956
Handle Multiple parallel request
{ "login": "lauvindra", "id": 82690315, "node_id": "MDQ6VXNlcjgyNjkwMzE1", "avatar_url": "https://avatars.githubusercontent.com/u/82690315?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lauvindra", "html_url": "https://github.com/lauvindra", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
3
2024-01-12T16:48:13
2024-01-26T23:51:33
2024-01-26T23:51:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Does Ollama use some kind of scheduling algorithm to manage highly concurrent requests? Can you explain this?
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1956/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1495
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1495/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1495/comments
https://api.github.com/repos/ollama/ollama/issues/1495/events
https://github.com/ollama/ollama/issues/1495
2,038,914,078
I_kwDOJ0Z1Ps55h1we
1,495
ollama on Proxmox??
{ "login": "Paulie420", "id": 59846077, "node_id": "MDQ6VXNlcjU5ODQ2MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/59846077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paulie420", "html_url": "https://github.com/Paulie420", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
18
2023-12-13T04:08:53
2024-10-29T22:46:07
2024-01-27T01:39:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
So I know this is user error, but... I can install and use ollama on my Framework laptop (without GPU) easily. Install w/ the curl command and get going right away - but on a Proxmox VM w/ MORE RAM than my Framework, I get an error: ollama failed at the run command. Am I missing something simple that I can 'fix'? I feel ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1495/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6792
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6792/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6792/comments
https://api.github.com/repos/ollama/ollama/issues/6792/events
https://github.com/ollama/ollama/issues/6792
2,524,410,670
I_kwDOJ0Z1Ps6Wd3Mu
6,792
The system parameter OLLAMA_NUM_PARALLEL is invalid for the embedding model
{ "login": "black-fox-user", "id": 181464167, "node_id": "U_kgDOCtDsZw", "avatar_url": "https://avatars.githubusercontent.com/u/181464167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/black-fox-user", "html_url": "https://github.com/black-fox-user", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-13T09:50:22
2024-09-18T01:30:43
2024-09-18T01:30:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I have set the system parameters, but when loading the embedding model, only one is still in effect. I copied this model, and surprisingly, their model IDs are the same. After importing the model, the model ID changed, but the same model was still used in the end. ![image](https://github.com/us...
{ "login": "black-fox-user", "id": 181464167, "node_id": "U_kgDOCtDsZw", "avatar_url": "https://avatars.githubusercontent.com/u/181464167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/black-fox-user", "html_url": "https://github.com/black-fox-user", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6792/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4976
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4976/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4976/comments
https://api.github.com/repos/ollama/ollama/issues/4976/events
https://github.com/ollama/ollama/issues/4976
2,346,104,632
I_kwDOJ0Z1Ps6L1rc4
4,976
Error: pull model manifest: Get
{ "login": "funnyPhani", "id": 58216617, "node_id": "MDQ6VXNlcjU4MjE2NjE3", "avatar_url": "https://avatars.githubusercontent.com/u/58216617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/funnyPhani", "html_url": "https://github.com/funnyPhani", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
2024-06-11T10:53:33
2024-11-21T22:50:02
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Ollama is not able to pull the models. ollama run moon dream pulling manifest Error: pull model manifest: Get "https://ollama.com/token?nonce=F_Rh4t6Jrv-EM0eRltrU-Q&scope=repository%!A(MISSING)library%!F(MISSING)moondream%!A(MISSING)pull&service=ollama.com&ts=1718102949": read tcp 10.1...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4976/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2626
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2626/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2626/comments
https://api.github.com/repos/ollama/ollama/issues/2626/events
https://github.com/ollama/ollama/pull/2626
2,145,926,276
PR_kwDOJ0Z1Ps5nen5g
2,626
Update big-AGI config file link
{ "login": "mogudian", "id": 122781024, "node_id": "U_kgDOB1F9YA", "avatar_url": "https://avatars.githubusercontent.com/u/122781024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mogudian", "html_url": "https://github.com/mogudian", "followers_url": "https://api.github.com/users/mogudian/...
[]
closed
false
null
[]
null
0
2024-02-21T06:21:02
2024-02-21T06:24:49
2024-02-21T06:24:49
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2626", "html_url": "https://github.com/ollama/ollama/pull/2626", "diff_url": "https://github.com/ollama/ollama/pull/2626.diff", "patch_url": "https://github.com/ollama/ollama/pull/2626.patch", "merged_at": "2024-02-21T06:24:49" }
The old URL of the big-AGI config file is no longer available; replace it with the latest one.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2626/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4440
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4440/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4440/comments
https://api.github.com/repos/ollama/ollama/issues/4440/events
https://github.com/ollama/ollama/issues/4440
2,296,577,723
I_kwDOJ0Z1Ps6I4v67
4,440
Add support for third-party hosted APIs
{ "login": "19h", "id": 280212, "node_id": "MDQ6VXNlcjI4MDIxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/280212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/19h", "html_url": "https://github.com/19h", "followers_url": "https://api.github.com/users/19h/followers", "fol...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
9
2024-05-14T23:27:37
2024-11-06T17:36:25
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
We've been coding against the Ollama API internally and eventually it hit me .. Ollama should be able to support third-party API providers, making it a de-facto gateway to LLMs. For example, it would easily blur the lines between an OpenAI's assistant / user and a Gemini model / user conversation; it could transpare...
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjha...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4440/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4440/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/2790
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2790/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2790/comments
https://api.github.com/repos/ollama/ollama/issues/2790/events
https://github.com/ollama/ollama/issues/2790
2,157,718,012
I_kwDOJ0Z1Ps6AnCn8
2,790
Function calling with OpenAI API
{ "login": "codearranger", "id": 80373433, "node_id": "MDQ6VXNlcjgwMzczNDMz", "avatar_url": "https://avatars.githubusercontent.com/u/80373433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codearranger", "html_url": "https://github.com/codearranger", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-02-27T22:26:04
2024-07-26T00:52:48
2024-07-26T00:52:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://ollama.com/joefamous/firefunction-v1 https://platform.openai.com/docs/guides/function-calling
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2790/reactions", "total_count": 16, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2790/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5317
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5317/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5317/comments
https://api.github.com/repos/ollama/ollama/issues/5317/events
https://github.com/ollama/ollama/issues/5317
2,377,175,757
I_kwDOJ0Z1Ps6NsNLN
5,317
Please add Florence-2
{ "login": "enryteam", "id": 20081090, "node_id": "MDQ6VXNlcjIwMDgxMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enryteam", "html_url": "https://github.com/enryteam", "followers_url": "https://api.github.com/users/enr...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2024-06-27T05:31:19
2024-06-27T19:05:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/microsoft/Florence-2-large/tree/main uses pytorch https://huggingface.co/spaces/SixOpen/Florence-2-large-ft thanks
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5317/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5317/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2262
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2262/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2262/comments
https://api.github.com/repos/ollama/ollama/issues/2262/events
https://github.com/ollama/ollama/issues/2262
2,106,694,887
I_kwDOJ0Z1Ps59kZzn
2,262
the tags page is confusing
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[ { "id": 6573197867, "node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw", "url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com", "name": "ollama.com", "color": "ffffff", "default": false, "description": "" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
1
2024-01-30T00:10:48
2024-03-11T17:42:03
2024-03-11T17:42:03
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![CleanShot 2024-01-29 at 16 08 21](https://github.com/ollama/ollama/assets/633681/1b043f4b-1a0e-41c7-83a3-5bac7e6a1ac0) And why do I need to know that a layer is 55 bytes?
{ "login": "hoyyeva", "id": 63033505, "node_id": "MDQ6VXNlcjYzMDMzNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoyyeva", "html_url": "https://github.com/hoyyeva", "followers_url": "https://api.github.com/users/hoyyev...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2262/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2262/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4455
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4455/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4455/comments
https://api.github.com/repos/ollama/ollama/issues/4455/events
https://github.com/ollama/ollama/issues/4455
2,298,300,142
I_kwDOJ0Z1Ps6I_Ubu
4,455
[REPORTING] For Arch or Arch-based Linux users, the storage path for models is /var/lib/ollama/.ollama/models/blobs
{ "login": "Greatz08", "id": 55040435, "node_id": "MDQ6VXNlcjU1MDQwNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/55040435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Greatz08", "html_url": "https://github.com/Greatz08", "followers_url": "https://api.github.com/users/Gre...
[]
closed
false
null
[]
null
1
2024-05-15T15:58:36
2024-10-23T20:54:29
2024-10-23T20:54:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Yesterday I installed ollama on my Arch Linux from the AUR and pulled 2 models, phi3 and llama3, but couldn't find where they are actually stored. Unfortunately, in all the Reddit threads and FAQs the path mentioned was, I guess, for Ubuntu users only, so I had to struggle; I couldn't find it, so I used fzf and the service file to locate the exact ...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4455/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4455/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2500
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2500/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2500/comments
https://api.github.com/repos/ollama/ollama/issues/2500/events
https://github.com/ollama/ollama/issues/2500
2,135,100,345
I_kwDOJ0Z1Ps5_Qwu5
2,500
Auto Tagging Documents in Ollama
{ "login": "asanchez-appliedres", "id": 160036440, "node_id": "U_kgDOCYn2WA", "avatar_url": "https://avatars.githubusercontent.com/u/160036440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asanchez-appliedres", "html_url": "https://github.com/asanchez-appliedres", "followers_url": "https...
[]
closed
false
null
[]
null
2
2024-02-14T19:58:14
2024-02-14T21:54:37
2024-02-14T21:54:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, When uploading documents to Ollama, users are currently required to manually tag documents. I would like to request a feature that allows for automatic document tagging based on the contents of the document.
{ "login": "asanchez-appliedres", "id": 160036440, "node_id": "U_kgDOCYn2WA", "avatar_url": "https://avatars.githubusercontent.com/u/160036440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asanchez-appliedres", "html_url": "https://github.com/asanchez-appliedres", "followers_url": "https...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2500/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4850
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4850/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4850/comments
https://api.github.com/repos/ollama/ollama/issues/4850/events
https://github.com/ollama/ollama/issues/4850
2,337,658,770
I_kwDOJ0Z1Ps6LVdeS
4,850
ollama built with docker - with docker run ollama, how do I set the --n-gpu-layers parameter? This results in an error that prevents running the model
{ "login": "mingLvft", "id": 50644675, "node_id": "MDQ6VXNlcjUwNjQ0Njc1", "avatar_url": "https://avatars.githubusercontent.com/u/50644675?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mingLvft", "html_url": "https://github.com/mingLvft", "followers_url": "https://api.github.com/users/min...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
1
2024-06-06T08:31:18
2024-07-03T23:20:20
2024-07-03T23:20:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str =...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4850/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7735/comments
https://api.github.com/repos/ollama/ollama/issues/7735/events
https://github.com/ollama/ollama/issues/7735
2,670,944,941
I_kwDOJ0Z1Ps6fM2Kt
7,735
docker build error
{ "login": "zimmortal", "id": 23369761, "node_id": "MDQ6VXNlcjIzMzY5NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/23369761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zimmortal", "html_url": "https://github.com/zimmortal", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
2024-11-19T06:43:42
2024-12-20T09:27:51
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ERROR: failed to solve: process "/bin/sh -c CMAKE_VERSION=${CMAKE_VERSION} GOLANG_VERSION=${GOLANG_VERSION} sh /rh_linux_deps.sh" did not complete successfully: exit code: 2 ``` 1289.9 1289.9 Complete! 1290.2 + '[' x86_64 = x86_64 ']' 1290.2 + curl -s -L https://github.com/ccache/ccach...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7735/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/964
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/964/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/964/comments
https://api.github.com/repos/ollama/ollama/issues/964/events
https://github.com/ollama/ollama/issues/964
1,972,603,375
I_kwDOJ0Z1Ps51k4nv
964
unbalanced vram usage on 2x3070 GPUs with codebooga & nexusraven
{ "login": "chymian", "id": 1899961, "node_id": "MDQ6VXNlcjE4OTk5NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1899961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chymian", "html_url": "https://github.com/chymian", "followers_url": "https://api.github.com/users/chymian/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
7
2023-11-01T15:31:28
2024-05-04T21:52:19
2024-05-04T21:52:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
running codebooga & nexusraven segfaults, making the host unresponsive. They load w/o problems and crash "on the first token" (zephyr works fine). I tried that with stock ollama 0.1.7, (linux install), docker & selfcompiled ([516](https://github.com/jmorganca/ollama/issues/516)). - checked the sha256: ok - running...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/964/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/111
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/111/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/111/comments
https://api.github.com/repos/ollama/ollama/issues/111/events
https://github.com/ollama/ollama/issues/111
1,810,998,500
I_kwDOJ0Z1Ps5r8aTk
111
Error trying to create custom model, fresh install
{ "login": "saqbach", "id": 6180399, "node_id": "MDQ6VXNlcjYxODAzOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6180399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saqbach", "html_url": "https://github.com/saqbach", "followers_url": "https://api.github.com/users/saqbach/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2023-07-19T02:00:32
2023-07-19T03:42:44
2023-07-19T03:42:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
First off, this is awesome. Thank you for creating this. Running into a `Error: 400 Bad Request` when trying to follow the README and create a custom model. Steps: 1. Download Apple Silicon app from `https://ollama.ai/download` & install to CLI 2. Run `ollama run llama2` successfully 3. Create a `Modelfile` and ...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/111/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2568
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2568/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2568/comments
https://api.github.com/repos/ollama/ollama/issues/2568/events
https://github.com/ollama/ollama/issues/2568
2,140,637,001
I_kwDOJ0Z1Ps5_l4dJ
2,568
`/set system` in CLI still append to System Prompt after ollama#2542
{ "login": "hyjwei", "id": 76876891, "node_id": "MDQ6VXNlcjc2ODc2ODkx", "avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hyjwei", "html_url": "https://github.com/hyjwei", "followers_url": "https://api.github.com/users/hyjwei/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[ { "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/us...
null
1
2024-02-17T22:12:42
2024-12-20T23:48:05
2024-12-20T23:48:05
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is a follow-up for #2492 . Thanks for fixing that issue. However, `/set system` in CLI still append to System Prompt after PR ollama#2542 . In the second scenario of #2492, when I load a model, then use `/set system` to specify a custom System Prompt, it does replace the old one. However, if I load a model, ...
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2568/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2186
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2186/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2186/comments
https://api.github.com/repos/ollama/ollama/issues/2186/events
https://github.com/ollama/ollama/pull/2186
2,100,011,110
PR_kwDOJ0Z1Ps5lC43c
2,186
Fix clearing kv cache between requests with the same prompt
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2024-01-25T10:03:41
2024-01-25T21:46:21
2024-01-25T21:46:21
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2186", "html_url": "https://github.com/ollama/ollama/pull/2186", "diff_url": "https://github.com/ollama/ollama/pull/2186.diff", "patch_url": "https://github.com/ollama/ollama/pull/2186.patch", "merged_at": "2024-01-25T21:46:21" }
This is a (draft) fix for #1573, as it seems that the kv cache isn't cleared properly when the exact same prompt is provided repetitively.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2186/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 2, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2186/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7490
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7490/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7490/comments
https://api.github.com/repos/ollama/ollama/issues/7490/events
https://github.com/ollama/ollama/issues/7490
2,632,778,502
I_kwDOJ0Z1Ps6c7QMG
7,490
Return an empty embed list
{ "login": "utopeadia", "id": 98788152, "node_id": "U_kgDOBeNjOA", "avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utopeadia", "html_url": "https://github.com/utopeadia", "followers_url": "https://api.github.com/users/utopeadi...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-11-04T13:10:35
2025-01-16T06:34:28
2024-11-04T13:59:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I use the [bge-m3](https://ollama.com/library/bge-m3) model, the return is an empty list regardless of the input. My test code: ```python import requests url = "http://localhost:12121/api/embeddings" payload = { "model": "bge-m3", "input": "lol" } response = requests.post(u...
{ "login": "utopeadia", "id": 98788152, "node_id": "U_kgDOBeNjOA", "avatar_url": "https://avatars.githubusercontent.com/u/98788152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utopeadia", "html_url": "https://github.com/utopeadia", "followers_url": "https://api.github.com/users/utopeadi...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7490/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/544
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/544/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/544/comments
https://api.github.com/repos/ollama/ollama/issues/544/events
https://github.com/ollama/ollama/pull/544
1,899,587,123
PR_kwDOJ0Z1Ps5agAdf
544
Linking ollama-ui in README
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
1
2023-09-16T22:38:21
2023-09-18T18:18:21
2023-09-18T16:50:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/544", "html_url": "https://github.com/ollama/ollama/pull/544", "diff_url": "https://github.com/ollama/ollama/pull/544.diff", "patch_url": "https://github.com/ollama/ollama/pull/544.patch", "merged_at": null }
Adding info from https://github.com/jmorganca/ollama/issues/538#issuecomment-1722109233 to `REAMDE.md`.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/544/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/161
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/161/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/161/comments
https://api.github.com/repos/ollama/ollama/issues/161/events
https://github.com/ollama/ollama/issues/161
1,815,801,366
I_kwDOJ0Z1Ps5sOu4W
161
Asking Llama 2 to read a local text file
{ "login": "wwavess", "id": 54215600, "node_id": "MDQ6VXNlcjU0MjE1NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/54215600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwavess", "html_url": "https://github.com/wwavess", "followers_url": "https://api.github.com/users/wwaves...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2023-07-21T13:20:49
2023-08-30T21:33:55
2023-08-30T21:33:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Has anyone been able to get Llama 2 to read a txt file for analysis?
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/161/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2948
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2948/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2948/comments
https://api.github.com/repos/ollama/ollama/issues/2948/events
https://github.com/ollama/ollama/issues/2948
2,171,037,442
I_kwDOJ0Z1Ps6BZ2cC
2,948
Allow `api.Client` to be constructed using URL & http.Client
{ "login": "jackielii", "id": 360983, "node_id": "MDQ6VXNlcjM2MDk4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/360983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackielii", "html_url": "https://github.com/jackielii", "followers_url": "https://api.github.com/users/jack...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5667396210, "node_id": ...
closed
false
null
[]
null
1
2024-03-06T09:28:10
2024-05-07T08:00:46
2024-05-07T08:00:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using `github.com/jmorganca/ollama/api` to connect to Ollama in my Go project. It works great. But I run two instance of ollama via different URL & port. At the moment the `api` package only supports construct client from env: ```go func ClientFromEnvironment() (*Client, error) {} ``` I have to use hacks li...
{ "login": "jackielii", "id": 360983, "node_id": "MDQ6VXNlcjM2MDk4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/360983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackielii", "html_url": "https://github.com/jackielii", "followers_url": "https://api.github.com/users/jack...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2948/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2948/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7466
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7466/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7466/comments
https://api.github.com/repos/ollama/ollama/issues/7466/events
https://github.com/ollama/ollama/pull/7466
2,629,791,219
PR_kwDOJ0Z1Ps6Aqywe
7,466
Workaround buggy P2P ROCm copy on windows
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-11-01T20:07:53
2024-11-07T22:26:34
2024-11-07T22:26:31
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7466", "html_url": "https://github.com/ollama/ollama/pull/7466", "diff_url": "https://github.com/ollama/ollama/pull/7466.diff", "patch_url": "https://github.com/ollama/ollama/pull/7466.patch", "merged_at": "2024-11-07T22:26:31" }
This enables the workaround code only for windows which should help windows users with muliple AMD GPUs While testing #7378 I've only been able to reproduce the gibberish behavior on one system and only on Windows. Windows ROCm shouldn't allow smaller system memory compared to VRAM, so we believe enabling this flag...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7466/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7089
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7089/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7089/comments
https://api.github.com/repos/ollama/ollama/issues/7089/events
https://github.com/ollama/ollama/issues/7089
2,563,571,951
I_kwDOJ0Z1Ps6YzQDv
7,089
[prompt] add ollama configuration file
{ "login": "abitrolly", "id": 8781107, "node_id": "MDQ6VXNlcjg3ODExMDc=", "avatar_url": "https://avatars.githubusercontent.com/u/8781107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abitrolly", "html_url": "https://github.com/abitrolly", "followers_url": "https://api.github.com/users/ab...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-10-03T09:35:37
2024-11-12T06:50:28
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think we can now try eating our own dog food, and let LLM write the code to solve [second most voted](https://github.com/ollama/ollama/issues?q=config+file+is%3Aopen+sort%3Areactions-%2B1-desc) issue "Please don't clutter the user home directory" (https://github.com/ollama/ollama/issues/228). Here is my try at pro...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7089/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7089/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2789
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2789/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2789/comments
https://api.github.com/repos/ollama/ollama/issues/2789/events
https://github.com/ollama/ollama/pull/2789
2,157,717,842
PR_kwDOJ0Z1Ps5oGy2r
2,789
prepend image tags
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
3
2024-02-27T22:25:53
2024-02-29T19:30:15
2024-02-29T19:30:14
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2789", "html_url": "https://github.com/ollama/ollama/pull/2789", "diff_url": "https://github.com/ollama/ollama/pull/2789.diff", "patch_url": "https://github.com/ollama/ollama/pull/2789.patch", "merged_at": "2024-02-29T19:30:14" }
instead of appending image tags, prepend them which produces better results in general resolves #2769 resolves #2788
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2789/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2789/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5056
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5056/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5056/comments
https://api.github.com/repos/ollama/ollama/issues/5056/events
https://github.com/ollama/ollama/issues/5056
2,354,580,351
I_kwDOJ0Z1Ps6MWAt_
5,056
qwen2 model error
{ "login": "misi0202", "id": 101965629, "node_id": "U_kgDOBhPfPQ", "avatar_url": "https://avatars.githubusercontent.com/u/101965629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misi0202", "html_url": "https://github.com/misi0202", "followers_url": "https://api.github.com/users/misi0202/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-06-15T06:13:13
2024-06-17T02:46:39
2024-06-17T02:46:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I'm try to use Qwen2-7b model by ollama(ollama run qwen2) , but meet timeout error by POST /api/compete, timed out occured.When I POST /api/chat,it can return reply error code like GGGGGGGGG, If the ollama don't support qwen2? ### OS _No response_ ### GPU _No response_ ### CPU...
{ "login": "misi0202", "id": 101965629, "node_id": "U_kgDOBhPfPQ", "avatar_url": "https://avatars.githubusercontent.com/u/101965629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misi0202", "html_url": "https://github.com/misi0202", "followers_url": "https://api.github.com/users/misi0202/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5056/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4551
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4551/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4551/comments
https://api.github.com/repos/ollama/ollama/issues/4551/events
https://github.com/ollama/ollama/pull/4551
2,307,149,054
PR_kwDOJ0Z1Ps5wBZQr
4,551
Added docker healthcheck to all runtime stages
{ "login": "codearranger", "id": 80373433, "node_id": "MDQ6VXNlcjgwMzczNDMz", "avatar_url": "https://avatars.githubusercontent.com/u/80373433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codearranger", "html_url": "https://github.com/codearranger", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
1
2024-05-21T02:45:06
2024-11-23T21:12:29
2024-11-23T21:12:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4551", "html_url": "https://github.com/ollama/ollama/pull/4551", "diff_url": "https://github.com/ollama/ollama/pull/4551.diff", "patch_url": "https://github.com/ollama/ollama/pull/4551.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4551/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2426
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2426/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2426/comments
https://api.github.com/repos/ollama/ollama/issues/2426/events
https://github.com/ollama/ollama/issues/2426
2,126,843,376
I_kwDOJ0Z1Ps5-xQ3w
2,426
In the blog post -> https://ollama.ai/blog/openai-compatibility change the name of Autogen
{ "login": "Naqqash", "id": 4791247, "node_id": "MDQ6VXNlcjQ3OTEyNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/4791247?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Naqqash", "html_url": "https://github.com/Naqqash", "followers_url": "https://api.github.com/users/Naqqash/...
[]
closed
false
null
[]
null
1
2024-02-09T10:19:18
2024-02-09T13:18:23
2024-02-09T13:18:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In the blog the installation instruction is written as `pip install autogenpy` it should be `pip install pyautogen` Reference -> https://github.com/microsoft/autogen
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2426/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5018
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5018/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5018/comments
https://api.github.com/repos/ollama/ollama/issues/5018/events
https://github.com/ollama/ollama/pull/5018
2,350,685,626
PR_kwDOJ0Z1Ps5yVum6
5,018
fix utf8 parser error
{ "login": "007gzs", "id": 5856259, "node_id": "MDQ6VXNlcjU4NTYyNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5856259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/007gzs", "html_url": "https://github.com/007gzs", "followers_url": "https://api.github.com/users/007gzs/foll...
[]
closed
false
null
[]
null
4
2024-06-13T09:44:18
2024-06-13T17:35:39
2024-06-13T17:35:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5018", "html_url": "https://github.com/ollama/ollama/pull/5018", "diff_url": "https://github.com/ollama/ollama/pull/5018.diff", "patch_url": "https://github.com/ollama/ollama/pull/5018.patch", "merged_at": null }
in `v0.1.43` when utf8 char in modelfile ,after parse got `�������������` test code : ``` var Modelfile string = "FROM llama3:70b\nSYSTEM \"\"\"\n提问和回答都使用中文\n\"\"\"" var sr io.Reader = strings.NewReader(Modelfile) f, err := parser.ParseFile(sr) fmt.Printf("err: %v\n", err) fmt.Printf("f: %v\n", f) ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5018/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1729
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1729/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1729/comments
https://api.github.com/repos/ollama/ollama/issues/1729/events
https://github.com/ollama/ollama/issues/1729
2,057,390,143
I_kwDOJ0Z1Ps56oUg_
1,729
Function call with Ollama and LlamaIndex
{ "login": "sandangel", "id": 22189661, "node_id": "MDQ6VXNlcjIyMTg5NjYx", "avatar_url": "https://avatars.githubusercontent.com/u/22189661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sandangel", "html_url": "https://github.com/sandangel", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
15
2023-12-27T13:44:04
2024-07-26T00:47:05
2024-07-26T00:47:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm looking for a way to add function call to work with Ollama and LlamaIndex. From my research we have format json in Ollama, so theoretically, there are 2 ways we can support function call: 1. Enforce the LLM to output json following a schema, and we can call the function based on the json output. * Not s...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1729/reactions", "total_count": 12, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 6 }
https://api.github.com/repos/ollama/ollama/issues/1729/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/613
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/613/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/613/comments
https://api.github.com/repos/ollama/ollama/issues/613/events
https://github.com/ollama/ollama/issues/613
1,914,519,684
I_kwDOJ0Z1Ps5yHUCE
613
Getting permission denied when attempting to create a model
{ "login": "DeanKamali", "id": 1252959, "node_id": "MDQ6VXNlcjEyNTI5NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/1252959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DeanKamali", "html_url": "https://github.com/DeanKamali", "followers_url": "https://api.github.com/users...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
14
2023-09-27T01:20:48
2024-12-02T04:40:44
2023-11-16T00:41:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
ollama version: v0.1.0 **Steps to Reproduce:** - Ran` curl https://ollama.ai/install.sh | sh` to install ollama. - Navigated to ollama/examples/devops-engineer/. - Executed `ollama create devops-engineer -f ./Modelfile`. Error Encountered: `couldn't open modelfile '/root/ollama/examples/devops-engineer/Mode...
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/613/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3078
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3078/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3078/comments
https://api.github.com/repos/ollama/ollama/issues/3078/events
https://github.com/ollama/ollama/issues/3078
2,181,653,843
I_kwDOJ0Z1Ps6CCWVT
3,078
Ollama is not using the 100% of RTX4000 VRAM (18 of 20GB)
{ "login": "nfsecurity", "id": 16274031, "node_id": "MDQ6VXNlcjE2Mjc0MDMx", "avatar_url": "https://avatars.githubusercontent.com/u/16274031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nfsecurity", "html_url": "https://github.com/nfsecurity", "followers_url": "https://api.github.com/use...
[ { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", ...
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/...
null
29
2024-03-12T13:40:42
2025-01-08T20:34:31
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, thank you for the wonderful ollama project and the amazing community! <img width="742" alt="Screenshot 2024-03-12 at 8 32 31 AM" src="https://github.com/ollama/ollama/assets/16274031/a47d6ad9-3602-4ffe-984d-0ec858f95b6f"> I am testing the Mixtral 3Bit Quantized model under a RTX400 with 20GB of VRAM. The mode...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3078/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3078/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2111
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2111/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2111/comments
https://api.github.com/repos/ollama/ollama/issues/2111/events
https://github.com/ollama/ollama/issues/2111
2,092,182,139
I_kwDOJ0Z1Ps58tCp7
2,111
Enable installation without root privilege
{ "login": "chunhualiao", "id": 1627206, "node_id": "MDQ6VXNlcjE2MjcyMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/1627206?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chunhualiao", "html_url": "https://github.com/chunhualiao", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
12
2024-01-20T17:59:05
2024-10-08T11:44:57
2024-01-21T00:01:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It seems like ollama runs sudo during its installation on Linux. Please support installation and use by users without sudo privilege. Thanks.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2111/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1386
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1386/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1386/comments
https://api.github.com/repos/ollama/ollama/issues/1386/events
https://github.com/ollama/ollama/issues/1386
2,025,089,735
I_kwDOJ0Z1Ps54tGrH
1,386
Linux kernel traps ollama runner with invalid opcode
{ "login": "clvgt12", "id": 15834506, "node_id": "MDQ6VXNlcjE1ODM0NTA2", "avatar_url": "https://avatars.githubusercontent.com/u/15834506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clvgt12", "html_url": "https://github.com/clvgt12", "followers_url": "https://api.github.com/users/clvgt1...
[]
closed
false
null
[]
null
8
2023-12-05T01:13:12
2024-01-27T01:55:52
2024-01-27T01:55:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a Ubuntu laptop, and installed ollama and the llama2 model using this script: ``` $ curl https://ollama.ai/install.sh | sh >>> The Ollama API is now available at 0.0.0.0:11434. >>> Install complete. Run "ollama" from the command line. $ ollama pull llama2 pulling manifest pulling 22f7f8ef5f4c... 100% ▕██...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1386/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3623
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3623/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3623/comments
https://api.github.com/repos/ollama/ollama/issues/3623/events
https://github.com/ollama/ollama/issues/3623
2,241,475,830
I_kwDOJ0Z1Ps6FmjT2
3,623
[v0.1.32-pre for Windows] ollama server does not exit quitting from the system tray icon
{ "login": "mann1x", "id": 20623405, "node_id": "MDQ6VXNlcjIwNjIzNDA1", "avatar_url": "https://avatars.githubusercontent.com/u/20623405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mann1x", "html_url": "https://github.com/mann1x", "followers_url": "https://api.github.com/users/mann1x/fo...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.g...
null
0
2024-04-13T10:09:38
2024-04-14T22:33:26
2024-04-14T22:33:26
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ollama is not stopped when quitting using the system tray icon ### What did you expect to see? ollama app.exe and ollama.exe not running anymore ### Steps to reproduce just quit from the system tray icon ### Are there any recent changes that introduced the issue? This issue is new with the...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3623/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5691
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5691/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5691/comments
https://api.github.com/repos/ollama/ollama/issues/5691/events
https://github.com/ollama/ollama/issues/5691
2,407,610,489
I_kwDOJ0Z1Ps6PgTh5
5,691
Run model by index
{ "login": "peteruithoven", "id": 523210, "node_id": "MDQ6VXNlcjUyMzIxMA==", "avatar_url": "https://avatars.githubusercontent.com/u/523210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peteruithoven", "html_url": "https://github.com/peteruithoven", "followers_url": "https://api.github.co...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-07-14T21:55:44
2024-07-14T22:07:54
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When running models using the CLI, the whole name needs to be used, e.g. `ollama run deepseek-coder-v2`. Some of these names are hard to remember. I often copy them from `ollama list`. What if we could also run them by their index in `ollama list`? You could just run `ollama list`, see the indexes, and run `ollama...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5691/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7125
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7125/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7125/comments
https://api.github.com/repos/ollama/ollama/issues/7125/events
https://github.com/ollama/ollama/issues/7125
2,571,827,594
I_kwDOJ0Z1Ps6ZSvmK
7,125
openai: support max_completion_tokens due to deprecation of max_tokens
{ "login": "codefromthecrypt", "id": 64215, "node_id": "MDQ6VXNlcjY0MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/64215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codefromthecrypt", "html_url": "https://github.com/codefromthecrypt", "followers_url": "https://api.github...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7706482389, "node_id": ...
open
false
null
[]
null
0
2024-10-08T01:17:37
2024-11-06T00:02:14
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
max_tokens is now deprecated for max_completion_tokens. I suspect we should support both. One way is to define another field in our request object and then default if one or the other isn't set https://github.com/ollama/ollama/blob/defbf9425af8228f3420d567e9eeaa29d8ac87e3/openai/openai.go#L77 See https://platform.op...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7125/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7125/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5954
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5954/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5954/comments
https://api.github.com/repos/ollama/ollama/issues/5954/events
https://github.com/ollama/ollama/issues/5954
2,430,404,262
I_kwDOJ0Z1Ps6Q3Qam
5,954
Detecting macOS GPUs when using Podman with GPU passthrough
{ "login": "ThomasVitale", "id": 8523418, "node_id": "MDQ6VXNlcjg1MjM0MTg=", "avatar_url": "https://avatars.githubusercontent.com/u/8523418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThomasVitale", "html_url": "https://github.com/ThomasVitale", "followers_url": "https://api.github.com...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-07-25T15:58:38
2024-12-10T16:40:20
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Podman provides support for making the local GPU on a macOS computer available from within a container. This article describes the setup for it: https://blog.podman.io/2024/07/podman-and-libkrun/. ```shell % podman machine ssh ls -l /dev/dri total 0 drwxr-xr-x. 2 root root 80 Jul 25 17:12 by-path crw-rw-...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5954/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5954/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8627
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8627/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8627/comments
https://api.github.com/repos/ollama/ollama/issues/8627/events
https://github.com/ollama/ollama/issues/8627
2,815,141,322
I_kwDOJ0Z1Ps6ny6XK
8,627
Deepseek-r1 can't read document or picture
{ "login": "hereshui3", "id": 163418623, "node_id": "U_kgDOCb2R_w", "avatar_url": "https://avatars.githubusercontent.com/u/163418623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hereshui3", "html_url": "https://github.com/hereshui3", "followers_url": "https://api.github.com/users/heresh...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2025-01-28T09:53:59
2025-01-28T11:01:15
2025-01-28T11:01:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I use chatbox to visualize deepseek, but when I try to send a document or picture, the output is chaotic. ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version deepseek
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8627/reactions", "total_count": 1, "+1": 0, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8627/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2176
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2176/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2176/comments
https://api.github.com/repos/ollama/ollama/issues/2176/events
https://github.com/ollama/ollama/issues/2176
2,098,920,149
I_kwDOJ0Z1Ps59GvrV
2,176
Ollama instance stuck and hanging after a few hours.
{ "login": "jayouimet", "id": 54856778, "node_id": "MDQ6VXNlcjU0ODU2Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/54856778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jayouimet", "html_url": "https://github.com/jayouimet", "followers_url": "https://api.github.com/users/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
6
2024-01-24T19:17:39
2024-10-02T17:09:36
2024-06-01T20:09:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, We have an ollama instance that starts to hang after a few hours of use. When using ctrl + c to stop the serve, we get a long stack trace resembling this, could be missing lines at the top as it is the maximum I can get from my ssh instance : ``` net/http/server.go:3086 +0x30 fp=0x140008e5fd0 sp=0x140008e5f...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2176/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6174
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6174/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6174/comments
https://api.github.com/repos/ollama/ollama/issues/6174/events
https://github.com/ollama/ollama/issues/6174
2,447,931,434
I_kwDOJ0Z1Ps6R6Hgq
6,174
Unable to run / pull llama3 model
{ "login": "Maha-vignesh09", "id": 177517255, "node_id": "U_kgDOCpSyxw", "avatar_url": "https://avatars.githubusercontent.com/u/177517255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maha-vignesh09", "html_url": "https://github.com/Maha-vignesh09", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-08-05T08:26:29
2024-08-13T05:06:18
2024-08-13T05:06:18
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama3/manifests/latest": proxyconnect tcp: dial tcp: lookup https on 100.76.191.242:53: no such host when trying to pull llama3 [root@fsgbu-mum-918 ~]# export https_proxy=https://www-***.com:80 [root@fsgbu-mum-918 ~]...
{ "login": "Maha-vignesh09", "id": 177517255, "node_id": "U_kgDOCpSyxw", "avatar_url": "https://avatars.githubusercontent.com/u/177517255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maha-vignesh09", "html_url": "https://github.com/Maha-vignesh09", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6174/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8493
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8493/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8493/comments
https://api.github.com/repos/ollama/ollama/issues/8493/events
https://github.com/ollama/ollama/issues/8493
2,797,975,587
I_kwDOJ0Z1Ps6mxbgj
8,493
Long context for Qwen2.5 is possible but needs something to work
{ "login": "devlux76", "id": 86517969, "node_id": "MDQ6VXNlcjg2NTE3OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/86517969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devlux76", "html_url": "https://github.com/devlux76", "followers_url": "https://api.github.com/users/dev...
[]
open
false
null
[]
null
1
2025-01-20T01:34:24
2025-01-20T09:37:07
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The instructions for Qwen2.5 (all of them) state quite clearly that everything from 7B on up have 128k context. However in order to use that context you need to do something... https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct#processing-long-texts For supported frameworks, you could add the following to config.j...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8493/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3854
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3854/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3854/comments
https://api.github.com/repos/ollama/ollama/issues/3854/events
https://github.com/ollama/ollama/issues/3854
2,259,789,985
I_kwDOJ0Z1Ps6Gsaih
3,854
request command-r-plus Q6
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
0
2024-04-23T21:21:07
2024-05-06T23:28:00
2024-05-06T23:28:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
request command-r-plus Q6
{ "login": "taozhiyuai", "id": 146583103, "node_id": "U_kgDOCLyuPw", "avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taozhiyuai", "html_url": "https://github.com/taozhiyuai", "followers_url": "https://api.github.com/users/tao...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3854/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7826
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7826/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7826/comments
https://api.github.com/repos/ollama/ollama/issues/7826/events
https://github.com/ollama/ollama/pull/7826
2,689,335,494
PR_kwDOJ0Z1Ps6C-zuq
7,826
Use default transport to preserve proxy settings
{ "login": "Mazyod", "id": 860511, "node_id": "MDQ6VXNlcjg2MDUxMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/860511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mazyod", "html_url": "https://github.com/Mazyod", "followers_url": "https://api.github.com/users/Mazyod/follow...
[]
closed
false
null
[]
null
2
2024-11-25T06:21:07
2024-11-26T00:32:36
2024-11-26T00:32:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7826", "html_url": "https://github.com/ollama/ollama/pull/7826", "diff_url": "https://github.com/ollama/ollama/pull/7826.diff", "patch_url": "https://github.com/ollama/ollama/pull/7826.patch", "merged_at": null }
Attempt to fix regression in 0.4.3 as per #7788 To test this change, I created a small program to verify that the change indeed respects the proxy settings: ```go package main import ( "fmt" "net/http" "os" ) func BuggyClient() *http.Client { return &http.Client{ Transport: &http.Transport{}, ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7826/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8373
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8373/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8373/comments
https://api.github.com/repos/ollama/ollama/issues/8373/events
https://github.com/ollama/ollama/issues/8373
2,780,327,075
I_kwDOJ0Z1Ps6luGyj
8,373
ollama rm xxx failed to delete file /usr/share/ollama/.ollama/models/blobs/sha256-xxx
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-10T14:32:17
2025-01-10T15:23:22
2025-01-10T15:23:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? How to achieve the automatic deletion of the corresponding model files to free up hard drive space after running "ollama rm xx"? "ollama rm xxx" failed to delete the file "/usr/share/ollama/.ollama/models/blobs/sha256-xxx". ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ...
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8373/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2982
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2982/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2982/comments
https://api.github.com/repos/ollama/ollama/issues/2982/events
https://github.com/ollama/ollama/issues/2982
2,174,103,800
I_kwDOJ0Z1Ps6BljD4
2,982
add a support matrix to the docs
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api...
null
0
2024-03-07T15:16:27
2024-03-21T11:32:19
2024-03-21T11:32:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
A table showing which GPUs are supported on which OS would be nice, to allow users to evaluate whether their hardware is supported.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2982/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1173
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1173/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1173/comments
https://api.github.com/repos/ollama/ollama/issues/1173/events
https://github.com/ollama/ollama/issues/1173
1,999,313,317
I_kwDOJ0Z1Ps53Kxml
1,173
Provide model metadata with `ollama show`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2023-11-17T14:55:48
2024-07-24T21:07:13
2024-07-24T21:07:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
`ollama show` should provide metadata like: * Context size * Parameter count * Quantization
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1173/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1173/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5427
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5427/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5427/comments
https://api.github.com/repos/ollama/ollama/issues/5427/events
https://github.com/ollama/ollama/issues/5427
2,385,310,905
I_kwDOJ0Z1Ps6OLPS5
5,427
Model built via a Modelfile fails to run
{ "login": "yinjianjie", "id": 54103299, "node_id": "MDQ6VXNlcjU0MTAzMjk5", "avatar_url": "https://avatars.githubusercontent.com/u/54103299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yinjianjie", "html_url": "https://github.com/yinjianjie", "followers_url": "https://api.github.com/use...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
2024-07-02T06:04:03
2024-07-08T19:52:19
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? NAME ID SIZE MODIFIED glm-4-9b-chat:latest 5356a47a9286 6.3 GB 3 minutes ago llama3:latest 71a106a91016 4.7 GB 2 months ago llava:latest 8dd30f6b0cb1 4.7 GB 2 months ago mi...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5427/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7683
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7683/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7683/comments
https://api.github.com/repos/ollama/ollama/issues/7683/events
https://github.com/ollama/ollama/issues/7683
2,661,305,171
I_kwDOJ0Z1Ps6eoEtT
7,683
Does ollama support batching generate?
{ "login": "Wu-tn", "id": 54966661, "node_id": "MDQ6VXNlcjU0OTY2NjYx", "avatar_url": "https://avatars.githubusercontent.com/u/54966661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wu-tn", "html_url": "https://github.com/Wu-tn", "followers_url": "https://api.github.com/users/Wu-tn/follow...
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
3
2024-11-15T09:00:12
2024-11-17T12:18:35
2024-11-17T12:18:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7683/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7960
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7960/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7960/comments
https://api.github.com/repos/ollama/ollama/issues/7960/events
https://github.com/ollama/ollama/pull/7960
2,721,539,864
PR_kwDOJ0Z1Ps6EPp9N
7,960
Update OpenAI docs to reflect tool use functionality
{ "login": "yannickgloster", "id": 19475841, "node_id": "MDQ6VXNlcjE5NDc1ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/19475841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yannickgloster", "html_url": "https://github.com/yannickgloster", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
0
2024-12-05T22:12:49
2024-12-08T06:16:21
2024-12-08T06:16:21
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7960", "html_url": "https://github.com/ollama/ollama/pull/7960", "diff_url": "https://github.com/ollama/ollama/pull/7960.diff", "patch_url": "https://github.com/ollama/ollama/pull/7960.patch", "merged_at": "2024-12-08T06:16:21" }
Tool use while streaming was fixed in #7836; see [comment](https://github.com/ollama/ollama/pull/7836#issuecomment-2521505633)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7960/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7087
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7087/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7087/comments
https://api.github.com/repos/ollama/ollama/issues/7087/events
https://github.com/ollama/ollama/issues/7087
2,563,140,687
I_kwDOJ0Z1Ps6YxmxP
7,087
I would like to able to download, extract and run Ollama on an Intel GPU
{ "login": "xiangyang-95", "id": 18331729, "node_id": "MDQ6VXNlcjE4MzMxNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/18331729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiangyang-95", "html_url": "https://github.com/xiangyang-95", "followers_url": "https://api.github.c...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6677491450, "node_id": ...
closed
false
null
[]
null
1
2024-10-03T05:13:28
2024-10-03T16:14:00
2024-10-03T16:13:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
In order to use Ollama easily on Intel discrete GPU, I would like to able to download the ollama binary that is built with Intel OneAPI SYCL runtime directly. Example: ``` curl -L https://ollama.com/download/ollama-linux-amd64-sycl.tgz -o ollama-linux-amd64-sycl.tgz sudo tar -C /usr -xzf ollama-linux-amd64-sycl.t...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7087/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7750
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7750/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7750/comments
https://api.github.com/repos/ollama/ollama/issues/7750/events
https://github.com/ollama/ollama/pull/7750
2,673,960,240
PR_kwDOJ0Z1Ps6CdLeb
7,750
Disallow Tool Streaming
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
1
2024-11-20T00:15:52
2024-11-22T00:42:17
2024-11-22T00:42:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7750", "html_url": "https://github.com/ollama/ollama/pull/7750", "diff_url": "https://github.com/ollama/ollama/pull/7750.diff", "patch_url": "https://github.com/ollama/ollama/pull/7750.patch", "merged_at": null }
While Tool streaming is scoped to be supported, we currently allow it, which can lead to some weird edge cases. The tool gets added to capabilities and is passed into runner without checking if the behavior should be occurring. https://github.com/ollama/ollama/blob/807ace5b1f4fc9de7347297b3c8a695c566d9fd9/server/...
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7750/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8449
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8449/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8449/comments
https://api.github.com/repos/ollama/ollama/issues/8449/events
https://github.com/ollama/ollama/pull/8449
2,791,807,480
PR_kwDOJ0Z1Ps6H9SJp
8,449
parser: fix parsing Modelfiles with multiple FROM commands
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
[]
closed
false
null
[]
null
0
2025-01-16T06:24:20
2025-01-16T08:14:08
2025-01-16T08:14:07
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8449", "html_url": "https://github.com/ollama/ollama/pull/8449", "diff_url": "https://github.com/ollama/ollama/pull/8449.diff", "patch_url": "https://github.com/ollama/ollama/pull/8449.patch", "merged_at": "2025-01-16T08:14:06" }
Fixes https://github.com/ollama/ollama/issues/8448
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8449/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8008
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8008/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8008/comments
https://api.github.com/repos/ollama/ollama/issues/8008/events
https://github.com/ollama/ollama/issues/8008
2,726,348,426
I_kwDOJ0Z1Ps6igMaK
8,008
Return prompt cache utilization on completion responses
{ "login": "reckart", "id": 1410238, "node_id": "MDQ6VXNlcjE0MTAyMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1410238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reckart", "html_url": "https://github.com/reckart", "followers_url": "https://api.github.com/users/reckart/...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-12-09T08:28:49
2024-12-09T08:28:49
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Since Ollama has prompt caching now (right?), it would be great if the utilization of the cache could be returned in requests. E.g. the OpenAI-compatible API could be extended with the new [`usage/prompt_tokens_details/cached_tokens`](https://platform.openai.com/docs/guides/prompt-caching). A similar field in the...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8008/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8008/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6292
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6292/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6292/comments
https://api.github.com/repos/ollama/ollama/issues/6292/events
https://github.com/ollama/ollama/issues/6292
2,458,680,597
I_kwDOJ0Z1Ps6SjH0V
6,292
Docs: tfs_z description incorrect
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-08-09T21:20:07
2024-08-09T21:20:07
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? In the Modelfile docs (https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter) tfs_z is defined as: ``` Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disabl...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6292/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6292/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4/comments
https://api.github.com/repos/ollama/ollama/issues/4/events
https://github.com/ollama/ollama/issues/4
1,777,852,025
I_kwDOJ0Z1Ps5p9955
4
blinking cursor is ambiguous
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.git...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2023-06-27T22:43:08
2023-07-10T10:14:47
2023-07-10T10:14:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I ask a question, I just see a blinking cursor. Is the model loading? Is it thinking? Is there something else going on? It would be nice to see some sort of status showing what it is doing. Do I need to kill the app?
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2601
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2601/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2601/comments
https://api.github.com/repos/ollama/ollama/issues/2601/events
https://github.com/ollama/ollama/pull/2601
2,143,236,469
PR_kwDOJ0Z1Ps5nVavx
2,601
add faqs for memory pre-loading and the keep_alive setting
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
[]
closed
false
null
[]
null
0
2024-02-19T22:31:17
2024-02-19T22:45:26
2024-02-19T22:45:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2601", "html_url": "https://github.com/ollama/ollama/pull/2601", "diff_url": "https://github.com/ollama/ollama/pull/2601.diff", "patch_url": "https://github.com/ollama/ollama/pull/2601.patch", "merged_at": "2024-02-19T22:45:25" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2601/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/735/comments
https://api.github.com/repos/ollama/ollama/issues/735/events
https://github.com/ollama/ollama/issues/735
1,931,667,129
I_kwDOJ0Z1Ps5zIua5
735
What is the supported context length? llama2-chinese:13b-chat-q6_K
{ "login": "Friedrich-hue", "id": 61929816, "node_id": "MDQ6VXNlcjYxOTI5ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/61929816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Friedrich-hue", "html_url": "https://github.com/Friedrich-hue", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
2
2023-10-08T06:25:26
2023-10-30T22:25:08
2023-10-30T22:25:07
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/735/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/341
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/341/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/341/comments
https://api.github.com/repos/ollama/ollama/issues/341/events
https://github.com/ollama/ollama/pull/341
1,849,795,598
PR_kwDOJ0Z1Ps5X4kKR
341
do not regenerate embeddings
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
[]
closed
false
null
[]
null
0
2023-08-14T13:37:55
2023-08-15T19:10:25
2023-08-15T19:10:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/341", "html_url": "https://github.com/ollama/ollama/pull/341", "diff_url": "https://github.com/ollama/ollama/pull/341.diff", "patch_url": "https://github.com/ollama/ollama/pull/341.patch", "merged_at": "2023-08-15T19:10:23" }
- re-use previously evaluated embeddings when possible - change embeddings digest identifier to be based on model name and embedded file path This change opens previously generated embeddings for the same model/file and re-uses them when possible. This means that running create on the same file will not generate th...
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/Br...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/341/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8095
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8095/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8095/comments
https://api.github.com/repos/ollama/ollama/issues/8095/events
https://github.com/ollama/ollama/issues/8095
2,739,795,525
I_kwDOJ0Z1Ps6jTfZF
8,095
Using structured output with tools always produces empty tool_calls array
{ "login": "grabbou", "id": 2464966, "node_id": "MDQ6VXNlcjI0NjQ5NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/2464966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grabbou", "html_url": "https://github.com/grabbou", "followers_url": "https://api.github.com/users/grabbou/...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q...
open
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers"...
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/...
null
2
2024-12-14T11:39:33
2025-01-20T06:57:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? With OpenAI API, you can pass both tools and response_format. In case model wants to call tools, message will be `null` and tools will be called. With Ollama, it appears that when response_format is present as JSON schema, the tool calls is an empty array, despite model wanting to call the to...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8095/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8095/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6551
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6551/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6551/comments
https://api.github.com/repos/ollama/ollama/issues/6551/events
https://github.com/ollama/ollama/issues/6551
2,494,058,273
I_kwDOJ0Z1Ps6UqE8h
6,551
Need cli ollama stop
{ "login": "HomunMage", "id": 144320229, "node_id": "U_kgDOCJom5Q", "avatar_url": "https://avatars.githubusercontent.com/u/144320229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HomunMage", "html_url": "https://github.com/HomunMage", "followers_url": "https://api.github.com/users/HomunM...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-08-29T10:30:04
2024-09-02T00:02:57
2024-09-02T00:02:57
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
We need an `ollama stop` command that can kill the ollama server without using systemctl, because we need to handle this from Python and C++ subprocesses or threads.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6551/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8605
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8605/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8605/comments
https://api.github.com/repos/ollama/ollama/issues/8605/events
https://github.com/ollama/ollama/issues/8605
2,812,486,439
I_kwDOJ0Z1Ps6noyMn
8,605
Error fetching ANY model locally
{ "login": "devroopsaha744", "id": 130696540, "node_id": "U_kgDOB8pFXA", "avatar_url": "https://avatars.githubusercontent.com/u/130696540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devroopsaha744", "html_url": "https://github.com/devroopsaha744", "followers_url": "https://api.github.c...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw...
open
false
null
[]
null
4
2025-01-27T09:25:12
2025-01-28T17:14:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? This is the Error message that I am getting: pulling manifest pulling 6e9f90f02bb3... 0% ▕ ▏ 0 B/9.0 GB Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8605/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/762
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/762/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/762/comments
https://api.github.com/repos/ollama/ollama/issues/762/events
https://github.com/ollama/ollama/issues/762
1,938,820,901
I_kwDOJ0Z1Ps5zkA8l
762
Support for HuggingFaceH4/zephyr-7b-alpha
{ "login": "shauryr", "id": 12604876, "node_id": "MDQ6VXNlcjEyNjA0ODc2", "avatar_url": "https://avatars.githubusercontent.com/u/12604876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shauryr", "html_url": "https://github.com/shauryr", "followers_url": "https://api.github.com/users/shaury...
[]
closed
false
null
[]
null
3
2023-10-11T22:14:43
2023-10-12T13:34:24
2023-10-11T23:09:28
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha : the zephyr-7b-alpha model outperforms ChatLlama 70B. It would be great to have this run inside Ollama!
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/762/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7113
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7113/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7113/comments
https://api.github.com/repos/ollama/ollama/issues/7113/events
https://github.com/ollama/ollama/issues/7113
2,569,653,831
I_kwDOJ0Z1Ps6ZKc5H
7,113
llama runner process has terminated: error loading model: error loading model vocabulary: invalid string position
{ "login": "ImValll", "id": 107722816, "node_id": "U_kgDOBmu4QA", "avatar_url": "https://avatars.githubusercontent.com/u/107722816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ImValll", "html_url": "https://github.com/ImValll", "followers_url": "https://api.github.com/users/ImValll/foll...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
2024-10-07T07:54:24
2024-10-21T04:13:21
2024-10-09T09:00:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I finetuned the gemma 2 model and converted it in GGUF, I try to run it with this code but it isn't working. Do you have any idea? import ollama import asyncio from ollama import AsyncClient async def chat(human_message): message = {'role': 'human', 'content': human_message} ...
{ "login": "ImValll", "id": 107722816, "node_id": "U_kgDOBmu4QA", "avatar_url": "https://avatars.githubusercontent.com/u/107722816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ImValll", "html_url": "https://github.com/ImValll", "followers_url": "https://api.github.com/users/ImValll/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7113/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1011
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1011/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1011/comments
https://api.github.com/repos/ollama/ollama/issues/1011/events
https://github.com/ollama/ollama/pull/1011
1,978,293,843
PR_kwDOJ0Z1Ps5epKE2
1,011
Updated README.md. Added a new feature to the ollama project: GitHub Codespaces integration.
{ "login": "TouchstoneTheDev", "id": 101004444, "node_id": "U_kgDOBgU0nA", "avatar_url": "https://avatars.githubusercontent.com/u/101004444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TouchstoneTheDev", "html_url": "https://github.com/TouchstoneTheDev", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
2
2023-11-06T04:59:29
2023-11-06T16:36:20
2023-11-06T16:19:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1011", "html_url": "https://github.com/ollama/ollama/pull/1011", "diff_url": "https://github.com/ollama/ollama/pull/1011.diff", "patch_url": "https://github.com/ollama/ollama/pull/1011.patch", "merged_at": null }
This pull request adds a new feature to the ollama project: GitHub Codespaces integration. With this feature, you can easily create a cloud-based development environment for ollama with just one click. You can edit, debug, test, and deploy your code from anywhere, using any device.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1011/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/731
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/731/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/731/comments
https://api.github.com/repos/ollama/ollama/issues/731/events
https://github.com/ollama/ollama/issues/731
1,931,602,155
I_kwDOJ0Z1Ps5zIejr
731
Wrong with Chinese spelling
{ "login": "1linguowei", "id": 31962248, "node_id": "MDQ6VXNlcjMxOTYyMjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/31962248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1linguowei", "html_url": "https://github.com/1linguowei", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
1
2023-10-08T02:22:57
2023-12-22T03:37:42
2023-12-22T03:37:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Here is a case where I used Chinese input: User: ollama run llama2-chinese:7b-chat-q4_0 "你会Swift编程吗" Assitant: 您好,我是一个AI语言模型。我能够回答类似于人类的问题,包括指导如何使用Swift编程。如果您想知道Swift编程的基本概念或者是如何编写一个简单的应用程序,我会提供相关的建议和指导。请告诉我您需要了解的Swift编程方面,以便更好地帮助您。 User: ollama run llama2-chinese:7b-chat-q4_0 User: 你会Swift编程吗 Assistant: 我是AI...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/731/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2249
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2249/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2249/comments
https://api.github.com/repos/ollama/ollama/issues/2249/events
https://github.com/ollama/ollama/pull/2249
2,104,528,878
PR_kwDOJ0Z1Ps5lRjM1
2,249
Add README.md
{ "login": "Yuan-ManX", "id": 68322456, "node_id": "MDQ6VXNlcjY4MzIyNDU2", "avatar_url": "https://avatars.githubusercontent.com/u/68322456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yuan-ManX", "html_url": "https://github.com/Yuan-ManX", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
0
2024-01-29T02:45:30
2024-02-22T19:03:44
2024-02-22T19:03:44
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2249", "html_url": "https://github.com/ollama/ollama/pull/2249", "diff_url": "https://github.com/ollama/ollama/pull/2249.diff", "patch_url": "https://github.com/ollama/ollama/pull/2249.patch", "merged_at": "2024-02-22T19:03:44" }
null
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2249/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8165
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8165/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8165/comments
https://api.github.com/repos/ollama/ollama/issues/8165/events
https://github.com/ollama/ollama/pull/8165
2,748,924,641
PR_kwDOJ0Z1Ps6FtV9r
8,165
server: add options to dry run and debug for chat and generate
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[]
open
false
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/...
[ { "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "htt...
null
12
2024-12-18T23:20:47
2025-01-02T20:07:40
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8165", "html_url": "https://github.com/ollama/ollama/pull/8165", "diff_url": "https://github.com/ollama/ollama/pull/8165.diff", "patch_url": "https://github.com/ollama/ollama/pull/8165.patch", "merged_at": null }
- Doesn't actually load the model - No tokenization or context length clipping - Barebones implementation of the `chatPrompt` function Precursor to enabling tokenization endpoints: https://github.com/ollama/ollama/pull/8106
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8165/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8080
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8080/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8080/comments
https://api.github.com/repos/ollama/ollama/issues/8080/events
https://github.com/ollama/ollama/pull/8080
2,737,444,182
PR_kwDOJ0Z1Ps6FGath
8,080
Ollama docker usage for jetson devices added to documentation
{ "login": "openzeka-birol-kuyumcu", "id": 174419215, "node_id": "U_kgDOCmVtDw", "avatar_url": "https://avatars.githubusercontent.com/u/174419215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/openzeka-birol-kuyumcu", "html_url": "https://github.com/openzeka-birol-kuyumcu", "followers_url...
[]
open
false
null
[]
null
0
2024-12-13T05:31:23
2024-12-13T05:31:23
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8080", "html_url": "https://github.com/ollama/ollama/pull/8080", "diff_url": "https://github.com/ollama/ollama/pull/8080.diff", "patch_url": "https://github.com/ollama/ollama/pull/8080.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8080/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1429
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1429/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1429/comments
https://api.github.com/repos/ollama/ollama/issues/1429/events
https://github.com/ollama/ollama/issues/1429
2,031,782,067
I_kwDOJ0Z1Ps55Goiz
1,429
Can you explain the difference between query and complete? Why one versus the other? Thanks!
{ "login": "OpenSpacesAndPlaces", "id": 30755002, "node_id": "MDQ6VXNlcjMwNzU1MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/30755002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OpenSpacesAndPlaces", "html_url": "https://github.com/OpenSpacesAndPlaces", "followers_url": ...
[]
closed
false
null
[]
null
7
2023-12-08T01:30:34
2023-12-09T01:16:29
2023-12-09T01:16:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
e.g. query_engine = index.as_query_engine() retrieved_nodes = query_engine.query("What is the price of apples?") vs. prompt ="What is the price of apples?"; response = llm.complete(prompt) ---- I saw this example dogfooding the query into the complete? Why might you want to-do that vs. just query? http...
{ "login": "OpenSpacesAndPlaces", "id": 30755002, "node_id": "MDQ6VXNlcjMwNzU1MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/30755002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OpenSpacesAndPlaces", "html_url": "https://github.com/OpenSpacesAndPlaces", "followers_url": ...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1429/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6641
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6641/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6641/comments
https://api.github.com/repos/ollama/ollama/issues/6641/events
https://github.com/ollama/ollama/pull/6641
2,506,123,890
PR_kwDOJ0Z1Ps56b_Dx
6,641
Add curl to container
{ "login": "nopoz", "id": 460545, "node_id": "MDQ6VXNlcjQ2MDU0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/460545?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nopoz", "html_url": "https://github.com/nopoz", "followers_url": "https://api.github.com/users/nopoz/followers"...
[]
closed
false
null
[]
null
3
2024-09-04T19:31:13
2024-11-21T10:39:23
2024-11-21T09:52:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6641", "html_url": "https://github.com/ollama/ollama/pull/6641", "diff_url": "https://github.com/ollama/ollama/pull/6641.diff", "patch_url": "https://github.com/ollama/ollama/pull/6641.patch", "merged_at": null }
Adds curl package to the container for the purpose of creating a custom healthcheck in user-side docker compose files. This is a compromise to the denied PR in: https://github.com/ollama/ollama/pull/1909 - instead of adding an integrated health check, just add the tool so users can create one themselves in their l...
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/us...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6641/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6641/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3535/comments
https://api.github.com/repos/ollama/ollama/issues/3535/events
https://github.com/ollama/ollama/issues/3535
2,230,550,428
I_kwDOJ0Z1Ps6E83-c
3,535
tid in log always be the same
{ "login": "mofanke", "id": 54242816, "node_id": "MDQ6VXNlcjU0MjQyODE2", "avatar_url": "https://avatars.githubusercontent.com/u/54242816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mofanke", "html_url": "https://github.com/mofanke", "followers_url": "https://api.github.com/users/mofank...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-04-08T08:12:32
2024-07-25T15:43:30
2024-07-25T15:43:30
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? https://github.com/ggerganov/llama.cpp/issues/6534 ### What did you expect to see? i understand that 'tid' represents thread ID, which should change upon restarting, but I've noticed that 'tid':'0x1fc50fac0' keeps appearing repeatedly. I've also noticed some other values, but I'm not sure wh...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3535/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7155
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7155/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7155/comments
https://api.github.com/repos/ollama/ollama/issues/7155/events
https://github.com/ollama/ollama/pull/7155
2,576,955,384
PR_kwDOJ0Z1Ps5-IaIs
7,155
fix vendoring attribute
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[]
closed
false
null
[]
null
0
2024-10-09T21:04:20
2024-10-09T21:21:05
2024-10-09T21:21:02
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7155", "html_url": "https://github.com/ollama/ollama/pull/7155", "diff_url": "https://github.com/ollama/ollama/pull/7155.diff", "patch_url": "https://github.com/ollama/ollama/pull/7155.patch", "merged_at": "2024-10-09T21:21:02" }
Expand out the file extensions for vendored code so git reports the status correctly e.g.: ``` % git check-attr -a -- ./llama/ggml.c ./llama/ggml.c: text: auto ./llama/ggml.c: linguist-vendored: set ```
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7155/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6278
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6278/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6278/comments
https://api.github.com/repos/ollama/ollama/issues/6278/events
https://github.com/ollama/ollama/pull/6278
2,457,259,095
PR_kwDOJ0Z1Ps536d4D
6,278
cmd: print proxy info when OLLAMA_DEBUG is true
{ "login": "zhangyunhao116", "id": 18065074, "node_id": "MDQ6VXNlcjE4MDY1MDc0", "avatar_url": "https://avatars.githubusercontent.com/u/18065074?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangyunhao116", "html_url": "https://github.com/zhangyunhao116", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
4
2024-08-09T07:08:07
2024-12-24T07:55:32
2024-12-24T03:56:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6278", "html_url": "https://github.com/ollama/ollama/pull/6278", "diff_url": "https://github.com/ollama/ollama/pull/6278.diff", "patch_url": "https://github.com/ollama/ollama/pull/6278.patch", "merged_at": null }
This PR prints proxy information when OLLAMA_DEBUG is true. I've noticed that users often encounter issues with HTTP proxy in their environment(like https://github.com/ollama/ollama/issues/6195 https://github.com/ollama/ollama/issues/4834), but setting OLLAMA_DEBUG to true doesn't provide additional debugging infos ...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6278/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4041
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4041/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4041/comments
https://api.github.com/repos/ollama/ollama/issues/4041/events
https://github.com/ollama/ollama/issues/4041
2,270,745,686
I_kwDOJ0Z1Ps6HWNRW
4,041
Chat2DB-SQL-7B
{ "login": "akan", "id": 170169, "node_id": "MDQ6VXNlcjE3MDE2OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/170169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akan", "html_url": "https://github.com/akan", "followers_url": "https://api.github.com/users/akan/followers", ...
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-04-30T07:45:43
2024-04-30T07:45:43
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://huggingface.co/bartowski/Chat2DB-SQL-7B-exl2
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4041/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4732
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4732/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4732/comments
https://api.github.com/repos/ollama/ollama/issues/4732/events
https://github.com/ollama/ollama/issues/4732
2,326,637,374
I_kwDOJ0Z1Ps6Kras-
4,732
Unable to Change Ollama Models Directory on Linux (Rocky 9)
{ "login": "pykeras", "id": 52103105, "node_id": "MDQ6VXNlcjUyMTAzMTA1", "avatar_url": "https://avatars.githubusercontent.com/u/52103105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pykeras", "html_url": "https://github.com/pykeras", "followers_url": "https://api.github.com/users/pykera...
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
24
2024-05-30T22:41:42
2025-01-06T09:22:10
2024-09-08T06:47:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am following every instruction on the documentation and any other suggestions from previous issues. However, I am unable to change the Ollama models directory to another directory on RockyLinux 9. I have more than 100GB of models that I don't want to download again. **Steps to Reproduce*...
{ "login": "pykeras", "id": 52103105, "node_id": "MDQ6VXNlcjUyMTAzMTA1", "avatar_url": "https://avatars.githubusercontent.com/u/52103105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pykeras", "html_url": "https://github.com/pykeras", "followers_url": "https://api.github.com/users/pykera...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4732/reactions", "total_count": 11, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4732/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2015
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2015/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2015/comments
https://api.github.com/repos/ollama/ollama/issues/2015/events
https://github.com/ollama/ollama/pull/2015
2,084,043,315
PR_kwDOJ0Z1Ps5kMsc3
2,015
fix: differentiate floats/ints properly (resolve: #2011)
{ "login": "Robitx", "id": 8431097, "node_id": "MDQ6VXNlcjg0MzEwOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8431097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Robitx", "html_url": "https://github.com/Robitx", "followers_url": "https://api.github.com/users/Robitx/foll...
[]
closed
false
null
[]
null
1
2024-01-16T14:06:12
2024-01-16T17:37:51
2024-01-16T17:37:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2015", "html_url": "https://github.com/ollama/ollama/pull/2015", "diff_url": "https://github.com/ollama/ollama/pull/2015.diff", "patch_url": "https://github.com/ollama/ollama/pull/2015.patch", "merged_at": null }
The parsing might deserve bigger refactor, but for now - all numbers are falling into the `case float64`: branch so I put a differentiation in there. #2011
{ "login": "Robitx", "id": 8431097, "node_id": "MDQ6VXNlcjg0MzEwOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8431097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Robitx", "html_url": "https://github.com/Robitx", "followers_url": "https://api.github.com/users/Robitx/foll...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2015/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3498
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3498/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3498/comments
https://api.github.com/repos/ollama/ollama/issues/3498/events
https://github.com/ollama/ollama/issues/3498
2,226,833,658
I_kwDOJ0Z1Ps6Eusj6
3,498
Teflon (a new part of Mesa on Linux) NPU delegate support
{ "login": "leaf-node", "id": 342930, "node_id": "MDQ6VXNlcjM0MjkzMA==", "avatar_url": "https://avatars.githubusercontent.com/u/342930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leaf-node", "html_url": "https://github.com/leaf-node", "followers_url": "https://api.github.com/users/leaf...
[]
open
false
null
[]
null
1
2024-04-05T02:24:28
2024-11-21T10:33:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What are you trying to do? [Teflon](https://docs.mesa3d.org/teflon.html) is a [new](https://www.phoronix.com/news/Gallium3D-Teflon-Merged) front end library for NPU acceleration part of the latest versions of [Mesa](https://www.mesa3d.org/) on Linux. It's in early stages, but more NPU drivers may be added in the f...
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3498/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3498/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1705
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1705/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1705/comments
https://api.github.com/repos/ollama/ollama/issues/1705/events
https://github.com/ollama/ollama/issues/1705
2,055,556,977
I_kwDOJ0Z1Ps56hU9x
1,705
generating embeddings with OllamaEmbeddings taking forever
{ "login": "lorenzoromani1983", "id": 24575445, "node_id": "MDQ6VXNlcjI0NTc1NDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/24575445?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorenzoromani1983", "html_url": "https://github.com/lorenzoromani1983", "followers_url": "https...
[]
closed
false
null
[]
null
1
2023-12-25T09:34:39
2024-05-10T00:23:35
2024-05-10T00:23:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am trying to generate embeddings with the OllamaEmbeddings class but it takes forever on a Mac M2 Pro. I am embedding 22000 posts from a forum's threads; this is my config: from langchain.embeddings import OllamaEmbeddings from llama_index.llms.ollama import Ollama llm = Ollama(model=...
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmor...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1705/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1705/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/835
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/835/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/835/comments
https://api.github.com/repos/ollama/ollama/issues/835/events
https://github.com/ollama/ollama/issues/835
1,949,294,149
I_kwDOJ0Z1Ps50L95F
835
Improve GPU scheduling
{ "login": "slychief", "id": 831947, "node_id": "MDQ6VXNlcjgzMTk0Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/831947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slychief", "html_url": "https://github.com/slychief", "followers_url": "https://api.github.com/users/slychie...
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 6430601766, "node_id": ...
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.gi...
null
11
2023-10-18T09:32:58
2024-07-03T10:26:32
2024-03-12T15:31:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, we have several GPUs in our server and use SLURM to manage the resources. SLURM uses CUDA_VISIBLE_DEVICES to assign GPUs to jobs/processes. When I run ollama directly from the command line - within a SLURM-managed context with 1 GPU assigned - it uses all available GPUs in the server and ignores CUDA_VISIBLE_DEV...
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhilt...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/835/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5639
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5639/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5639/comments
https://api.github.com/repos/ollama/ollama/issues/5639/events
https://github.com/ollama/ollama/pull/5639
2,404,173,128
PR_kwDOJ0Z1Ps51Jhyn
5,639
do not automatically aggregate system messages
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-07-11T21:40:26
2024-07-12T00:48:52
2024-07-12T00:48:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5639", "html_url": "https://github.com/ollama/ollama/pull/5639", "diff_url": "https://github.com/ollama/ollama/pull/5639.diff", "patch_url": "https://github.com/ollama/ollama/pull/5639.patch", "merged_at": "2024-07-12T00:48:50" }
add a helper for aggregating system prompts revert embedded templates to use prompt/response templates for better compatibility
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5639/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7246
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7246/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7246/comments
https://api.github.com/repos/ollama/ollama/issues/7246/events
https://github.com/ollama/ollama/pull/7246
2,595,840,485
PR_kwDOJ0Z1Ps5_Bec7
7,246
Reuse type InvalidModelNameErrMsg, unify the const parameters.
{ "login": "zhanluxianshen", "id": 161462588, "node_id": "U_kgDOCZ-5PA", "avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhanluxianshen", "html_url": "https://github.com/zhanluxianshen", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
0
2024-10-17T21:59:30
2024-12-18T21:47:40
2024-12-18T21:47:36
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7246", "html_url": "https://github.com/ollama/ollama/pull/7246", "diff_url": "https://github.com/ollama/ollama/pull/7246.diff", "patch_url": "https://github.com/ollama/ollama/pull/7246.patch", "merged_at": null }
Reuse type InvalidModelNameErrMsg, unify the const parameters.
{ "login": "zhanluxianshen", "id": 161462588, "node_id": "U_kgDOCZ-5PA", "avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhanluxianshen", "html_url": "https://github.com/zhanluxianshen", "followers_url": "https://api.github.c...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7246/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5753
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5753/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5753/comments
https://api.github.com/repos/ollama/ollama/issues/5753/events
https://github.com/ollama/ollama/pull/5753
2,414,282,944
PR_kwDOJ0Z1Ps51ranz
5,753
parse tool call as individual objects
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
[]
closed
false
null
[]
null
0
2024-07-17T18:22:04
2024-07-17T18:47:55
2024-07-17T18:47:54
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5753", "html_url": "https://github.com/ollama/ollama/pull/5753", "diff_url": "https://github.com/ollama/ollama/pull/5753.diff", "patch_url": "https://github.com/ollama/ollama/pull/5753.patch", "merged_at": "2024-07-17T18:47:54" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/follower...
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5753/timeline
null
null
true