Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -128,8 +128,7 @@ response = tokenizer.batch_decode(outputs)
 # Inference with HuggingFace
 
 ```python3
-from peft import AutoModelForCausalLM
-from transformers import AutoTokenizer
+from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 model = AutoModelForCausalLM.from_pretrained(
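
For context, a minimal sketch of how the corrected import is used end to end. The repo id (`google/gemma-7b`), dtype, and generation settings below are illustrative placeholders, not values taken from this README; the only line drawn from the diff is the `transformers` import and the `batch_decode` call mentioned in the hunk header.

```python3
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Placeholder repo id for illustration; substitute the model this card describes.
model_id = "google/gemma-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
response = tokenizer.batch_decode(outputs)
print(response[0])
```

The import fix itself is presumably needed because `peft` does not export a class named `AutoModelForCausalLM` (its auto classes follow the `AutoPeftModelForCausalLM` naming), so loading a plain merged checkpoint goes through `transformers`.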