Robert Shaw
robertgshaw2
Recent Activity
- New activity 4 days ago on neuralmagic/Sparse-Llama-3.1-8B-2of4: "Can I apply a LoRA?"
- New activity 4 days ago on nm-testing/Llama-3.3-70B-Instruct-FP8-dynamic: "Nice model, any info on scripts used to quantize?"
- Updated a model 9 days ago: nm-testing/llama-3-fp8-2of4-dynamic-uncompressed
robertgshaw2's activity
- Can I apply a LoRA? · 2 · #1 opened 4 days ago by RonanMcGovern
- Nice model, any info on scripts used to quantize? · 1 · #1 opened 4 days ago by RonanMcGovern
- How to download the model with transformer library · 5 · #6 opened about 2 months ago by Rick10
- Update README.md · 3 · #25 opened 2 months ago by robertgshaw2
- Issue running on vLLM using FP8 · 2 · #3 opened 2 months ago by ffleandro
- vllm says the requested model does not exist · 2 · #1 opened 4 months ago by shivams101
- Storage format differs from other w4a16 models · 2 · #2 opened 4 months ago by timdettmers
- Model weights are not loaded · 4 · #3 opened 4 months ago by MarvelousMouse
- Can not be inferenced with vllm openai server · 1 · #1 opened 5 months ago by jjqsdq
- Code example request with vllm · 2 · #1 opened 5 months ago by ShiningJazz
- 4bit quantisation does not reduce vram usage. · 1 · #2 opened 6 months ago by fu-man
- How to run Meta-Llama-3-70B-Instruct-FP8 using several devices? · 5 · #3 opened 6 months ago by Fertel
- Reproduction · 2 · #792 opened 6 months ago by robertgshaw2
- Fails to run with nm-vllm · 1 · #1 opened 7 months ago by clintonruairi
- Update chart template · #2 opened 9 months ago by robertgshaw2