x54-729 committed on
Commit 3e6b81c
1 parent: d5fadd8

fix eos & update README for tech report

Files changed (1):
  1. README.md +2 -2
README.md CHANGED

@@ -113,7 +113,7 @@ You can run batch inference locally with the following python code:
 
 ```python
 import lmdeploy
-pipe = lmdeploy.pipeline("internlm/internlm-chat-7b")
+pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b")
 response = pipe(["Hi, pls intro yourself", "Shanghai is"])
 print(response)
 ```

@@ -264,7 +264,7 @@ pip install lmdeploy
 
 ```python
 import lmdeploy
-pipe = lmdeploy.pipeline("internlm/internlm-chat-7b")
+pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b")
 response = pipe(["Hi, pls intro yourself", "Shanghai is"])
 print(response)
 ```
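For reference, the updated snippet from both hunks should run as-is once lmdeploy is installed (the second hunk's context shows the README's `pip install lmdeploy` step). The only README change in this commit is the model ID, pointing the example at the InternLM2 release:

```python
# Batch inference via lmdeploy's pipeline API, as shown in the updated README.
import lmdeploy

# Point the pipeline at the new InternLM2 chat model (the line this commit changes).
pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b")

# A list of prompts is processed as one batch; one response is returned per prompt.
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```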