whcao committed on
Commit
3db11ab
1 Parent(s): a059820

fix readme

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -40,7 +40,7 @@ Trying the following codes, you can perform the batched offline inference with t
 ```python
 from lmdeploy import pipeline, TurbomindEngineConfig
 engine_config = TurbomindEngineConfig(model_format='awq')
-pipe = pipeline("internlm/internlm2-chat-20b-4bits", engine_config)
+pipe = pipeline("internlm/internlm2-chat-20b-4bits", backend_config=engine_config)
 response = pipe(["Hi, pls intro yourself", "Shanghai is"])
 print(response)
 ```
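The one-line fix above passes the engine config by keyword (`backend_config=engine_config`) instead of positionally. A minimal sketch of why that matters, using a toy stand-in function whose parameter names are hypothetical and not lmdeploy's actual signature: when an API takes several optional parameters, a positionally passed config can silently bind to the wrong slot.

```python
# Toy stand-in for an API shaped like pipeline(); the parameter names
# (model_name, backend_config) are hypothetical, chosen only to
# illustrate the bug class the commit fixes.
def pipeline(model_path, model_name=None, backend_config=None):
    """Report which parameter each argument was bound to."""
    return {
        "model_path": model_path,
        "model_name": model_name,
        "backend_config": backend_config,
    }

config = {"model_format": "awq"}

# Positional call (the pre-fix README line): the config lands in the
# second slot (model_name), and backend_config stays None.
wrong = pipeline("internlm/internlm2-chat-20b-4bits", config)
assert wrong["backend_config"] is None      # config silently misbound
assert wrong["model_name"] == config

# Keyword call (the corrected line): the config reaches the intended parameter.
right = pipeline("internlm/internlm2-chat-20b-4bits", backend_config=config)
assert right["backend_config"] == config
assert right["model_name"] is None
```

Passing configuration objects by keyword keeps the call robust even if optional parameters are added or reordered in later releases.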