Tags: Text Generation · Transformers · GGUF · English · llama · text-generation-inference
TheBloke committed
Commit aa3cc17
1 Parent(s): 9659e35

Upload README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -180,7 +180,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/orca_mini_v3_7B-GGML", model_file="orca_mini_v3_7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/orca_mini_v3_7B-GGUF", model_file="orca_mini_v3_7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```