srvm committed
Commit
4ea0f55
1 Parent(s): 6628063

Update README

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -53,7 +53,7 @@ We can now run inference on this model:
 
 ```python
 import torch
-from transformers import AutoTokenizer, LlamaForCausalLM
+from transformers import AutoTokenizer, AutoModelForCausalLM
 
 # Load the tokenizer and model
 model_path = "nvidia/Mistral-NeMo-Minitron-8B-Base"