study-hjt committed
Commit 5735d44
1 Parent(s): c03bd06

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -61,15 +61,15 @@ KeyError: 'qwen2'
  Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
  
  ```python
- from modelscope import AutoModelForCausalLM, AutoTokenizer
+ from transformers import AutoModelForCausalLM, AutoTokenizer
  device = "cuda" # the device to load the model onto
  
  model = AutoModelForCausalLM.from_pretrained(
-     "huangjintao/Qwen1.5-110B-Chat-GPTQ-Int4",
+     "study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4",
      torch_dtype="auto",
      device_map="auto"
  )
- tokenizer = AutoTokenizer.from_pretrained("huangjintao/Qwen1.5-110B-Chat-GPTQ-Int4")
+ tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4")
  
  prompt = "Give me a short introduction to large language model."
  messages = [
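
The hunk ends at `messages = [`, so the rest of the snippet is not visible in this commit. For context, below is a minimal sketch of how such a snippet typically continues with `apply_chat_template` and generation, assuming the standard Transformers chat-template flow; the system message and `max_new_tokens=512` are illustrative assumptions, not content of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the GPTQ-quantized chat model and tokenizer from the updated repo id.
model = AutoModelForCausalLM.from_pretrained(
    "study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # illustrative system prompt
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate a response; max_new_tokens is an illustrative choice.
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)

# Strip the prompt tokens so only the newly generated tokens are decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```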