---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
  - nvidia/HelpSteer2
language:
  - en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
  - nvidia
  - llama3.1
  - mlx
inference: false
fine-tuning: false
---

# mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF

The model `mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF` was converted to MLX format from [`nvidia/Llama-3.1-Nemotron-70B-Instruct-HF`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) using mlx-lm version **0.19.0**.
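For reference, a conversion along these lines can be reproduced with mlx-lm's `convert` utility. This is a minimal sketch, assuming the `convert` API exposed by recent mlx-lm releases; the local output directory name is illustrative:

```python
from mlx_lm import convert

# Download the original Hugging Face weights and rewrite them in MLX
# format. mlx_path is an illustrative local output directory, not the
# path used for the published repo.
convert(
    "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
    mlx_path="Llama-3.1-Nemotron-70B-Instruct-HF-mlx",
)
```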

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Llama-3.1-Nemotron-70B-Instruct-HF")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
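`generate` caps the completion length by default; a longer response can be requested through its `max_tokens` parameter. A small sketch, with an illustrative token budget rather than a value recommended by the model authors:

```python
# Request up to 512 tokens; 512 is an illustrative cap.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(response)
```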