---
license: mit
base_model:
  - deepseek-ai/DeepSeek-R1
  - nvidia/DeepSeek-R1-NVFP4
---

# Model Overview

## Description

This model was created from the nvidia/DeepSeek-R1-NVFP4 checkpoint by:

- converting all layers targeted by the modelopt NVFP4 format to the compressed-tensors format
- applying FP8_BLOCK quantization to the targeted attention layers

More information is available at https://github.com/vllm-project/llm-compressor/pull/2228.
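As a rough illustration, the FP8_BLOCK step can be expressed with llm-compressor's `oneshot` API. This is a minimal, hypothetical sketch only: the attention-projection regex and ignore list below are assumptions, and the NVFP4-to-compressed-tensors conversion itself is implemented in the PR linked above, not shown here.

```python
# Hypothetical sketch only -- the actual conversion script lives in the PR above.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# FP8_BLOCK is a compressed-tensors preset scheme (block-wise FP8 weights).
# The target regex is an assumption about which attention projections are
# matched; the real recipe may target a different set of modules.
recipe = QuantizationModifier(
    targets=["re:.*self_attn.*proj"],
    scheme="FP8_BLOCK",
    ignore=["lm_head"],
)

# Data-free post-training quantization pass over the base checkpoint
oneshot(model="nvidia/DeepSeek-R1-NVFP4", recipe=recipe)
```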

Runs successfully on 4 NVIDIA B200 GPUs:

```python
from vllm import LLM, SamplingParams

prompts = ["The Swiss Alps are", "Brad Marchand is", "The Toronto Maple Leafs are"]

# Nucleus sampling with a bounded output length
sampling_params = SamplingParams(
    temperature=0.80, top_p=0.95, max_tokens=40, min_tokens=10
)
llm = LLM(
    "inference-optimization/DeepSeek-R1-NVFP4-FP8-BLOCK",
    tensor_parallel_size=4,
    max_model_len=4096,
    enforce_eager=True,
)
outputs = llm.generate(prompts, sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```
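
The same settings also map onto the vLLM CLI for serving; for example, assuming the same 4-GPU node:

```bash
vllm serve inference-optimization/DeepSeek-R1-NVFP4-FP8-BLOCK \
  --tensor-parallel-size 4 \
  --max-model-len 4096 \
  --enforce-eager
```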