yintongl committed on
Commit
393410e
1 Parent(s): d0f9be1

Update README.md


add ITREX inference example

Files changed (1)
  1. README.md +19 -0
README.md CHANGED
@@ -6,6 +6,25 @@ license: apache-2.0
  This model is an int4 model with group_size 64 of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) generated by [intel/auto-round](https://github.com/intel/auto-round), because there is an issue when evaluating with group_size 128.
 
  ## How To Use
+ ### INT4 Inference with ITREX on CPU
+ Install the latest [intel-extension-for-transformers](
+ https://github.com/intel/intel-extension-for-transformers)
+ ```python
+ from intel_extension_for_transformers.transformers import AutoModelForCausalLM
+ from transformers import AutoTokenizer
+ quantized_model_dir = "Intel/falcon-7b-int4-inc"
+ model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
+                                              device_map="auto",
+                                              trust_remote_code=False,
+                                              use_neural_speed=False,
+                                              )
+ tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
+ print(tokenizer.decode(model.generate(**tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device), max_new_tokens=50)[0]))
+ """
+ There is a girl who likes adventure, and she is a girl who likes to be a hero. She is a girl who likes to be a hero. She is a girl who likes to be a hero. She is a girl who likes to be a hero. She is a girl who
+ """
+ ```
+
 
  ### INT4 Inference with AutoGPTQ
 
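The added snippet assumes intel-extension-for-transformers is already installed. As a minimal sketch (the PyPI package name and the guard below are assumptions, not part of this commit), the environment can be checked before running it:

```python
# Minimal environment check before running the ITREX snippet above.
# Assumption: the package is published on PyPI as "intel-extension-for-transformers"
# and is imported as "intel_extension_for_transformers"; if the check fails,
# install it with: pip install intel-extension-for-transformers
import importlib.util

if importlib.util.find_spec("intel_extension_for_transformers") is None:
    raise ImportError(
        "intel-extension-for-transformers not found; install it from "
        "https://github.com/intel/intel-extension-for-transformers"
    )
```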