Junlin Zhou
jlzhou
AI & ML interests
None yet
Organizations
None yet
jlzhou's activity
Base model
4
#2 opened about 1 month ago
by Stark2008
How to download the dataset in bulk?
1
#7 opened 3 months ago
by Chinglin
What is the difference between the-stack-v2-train-full-ids and the-stack-v2-dedup?
5
#2 opened 5 months ago
by shawn0wang
Actual dataset size?
3
#4 opened 2 months ago
by jlzhou
Instruct version please
1
#5 opened 3 months ago
by rjmehta
What does low_cpu_mem_usage do?
1
#8 opened 3 months ago
by omgwenxx
Problem Running Model
13
#3 opened 9 months ago
by bezale
It seems that this model sometimes ignores user instructions
3
#12 opened 4 months ago
by jlzhou
Start an API for falcon-180B
6
#22 opened 10 months ago
by DrLuttapi
Add `chat_template` in tokenizer config
2
#3 opened 4 months ago
by jlzhou
Please create a Google Gemma-7b (8.5b) based version
12
#4 opened 5 months ago
by rombodawg
Does HF-TGI support this GGUF version?
1
#2 opened 4 months ago
by gpt3eth
How to convert 4bit model back to fp16 data format?
3
#52 opened 5 months ago
by tremblingbrain
Add `chat_template` in tokenizer config
1
#11 opened 5 months ago
by jlzhou
Poor Model Performance with Recommended Quantized Model
1
#21 opened 6 months ago
by nlpsingh
13b in the future?
9
#21 opened 10 months ago
by deleted
No memory within model?
5
#3 opened 7 months ago
by jdc4429
fix: missing suffix for system message
1
#1 opened 8 months ago
by jlzhou
Problem with streaming support
5
#17 opened 9 months ago
by mattma1970
fix: quantize param in TGI example
1
#8 opened 9 months ago
by jlzhou
Any idea how to test this for inference using vLLM?
3
#1 opened 10 months ago
by silvacarl
Failed to run this model on A6000 48GB VRAM Machine
2
#3 opened 10 months ago
by Leegohi
CPU or GPU
1
#76 opened 11 months ago
by lalit34
How to quantise the model?
2
#2 opened 12 months ago
by szbigcat
Does it increase inference speed on the same GPU?
2
#1 opened 12 months ago
by aibarito-ua
Getting HTTP Error Code: 422 when using Inference API
2
#96 opened about 1 year ago
by reetkat
Model sometimes generates '</s>'
1
#63 opened about 1 year ago
by jlzhou