Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · gptq
TheBloke committed
Commit 8a76971 (1 parent: a643e81)

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -97,7 +97,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
         model_basename=model_basename,
         use_safetensors=True,
-        trust_remote_code=True,
+        trust_remote_code=False,
         device="cuda:0",
         use_triton=use_triton,
         quantize_config=None)
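
For context, here is a minimal sketch of how the updated call sits in a full loading snippet, following the standard transformers and auto-gptq APIs. The repository name and model basename below are hypothetical placeholders; the README defines the actual values earlier in the file.

```python
# Minimal sketch: load the GPTQ-quantized model with the flag value set by this commit.
# model_name_or_path and model_basename are hypothetical placeholders, not the real
# values from this repository's README.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/<this-repo>-GPTQ"  # placeholder
model_basename = "model"                          # placeholder
use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           model_basename=model_basename,
                                           use_safetensors=True,
                                           trust_remote_code=False,  # value changed by this commit
                                           device="cuda:0",
                                           use_triton=use_triton,
                                           quantize_config=None)

prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
```

Setting trust_remote_code=False should be sufficient here because the Llama architecture is supported natively by transformers, so no custom remote modeling code needs to be executed.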