x54-729 committed on
Commit
2a32eb6
1 Parent(s): 190c1be

Update README.md

Files changed (1)
  1. README.md +30 -26
README.md CHANGED
@@ -58,19 +58,21 @@ We conducted a comprehensive evaluation of InternLM using the open-source evalua
 ### Import from Transformers
 To load the InternLM 7B Chat model using Transformers, use the following code:
 ```python
->>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", trust_remote_code=True).cuda()
->>> model = model.eval()
->>> inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
->>> for k,v in inputs.items():
-        inputs[k] = v.cuda()
->>> gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
->>> output = model.generate(**inputs, **gen_kwargs)
->>> output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
->>> print(output)
-<s> A beautiful flower box made of white rose wood. It is a perfect gift for weddings, birthdays and anniversaries.
-All the roses are from our farm Roses Flanders. Therefor you know that these flowers last much longer than those in store or online!</s>
 ```
 
 ## Open Source License
@@ -109,19 +111,21 @@ InternLM ,即书生·浦语大模型,包含面向实用场景的70亿参数
 ### 通过 Transformers 加载
 通过以下的代码加载 InternLM 7B Chat 模型
 ```python
->>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", trust_remote_code=True).cuda()
->>> model = model.eval()
->>> inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
->>> for k,v in inputs.items():
-        inputs[k] = v.cuda()
->>> gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
->>> output = model.generate(**inputs, **gen_kwargs)
->>> output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
->>> print(output)
-来到美丽的大自然,我们发现各种各样的花千奇百怪。有的颜色鲜艳亮丽,使人感觉生机勃勃;有的是红色的花瓣儿粉嫩嫩的像少女害羞的脸庞一样让人爱不释手.有的小巧玲珑; 还有的花瓣粗大看似枯黄实则暗藏玄机!
-不同的花卉有不同的“脾气”,它们都有着属于自己的故事和人生道理.这些鲜花都是大自然中最为原始的物种,每一朵都绽放出别样的美令人陶醉、着迷!
 ```
 
 ## 开源许可证
 
 ### Import from Transformers
 To load the InternLM 7B Chat model using Transformers, use the following code:
 ```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
+tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
+# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded as float32, which may cause an out-of-memory (OOM) error.
+model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
+model = model.eval()
+inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
+for k,v in inputs.items():
+    inputs[k] = v.cuda()
+gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
+output = model.generate(**inputs, **gen_kwargs)
+output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
+print(output)
+# <s> A beautiful flower box made of white rose wood. It is a perfect gift for weddings, birthdays and anniversaries.
+# All the roses are from our farm Roses Flanders. Therefor you know that these flowers last much longer than those in store or online!</s>
 ```
 
 ## Open Source License
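The rationale behind the new `torch_dtype=torch.float16` comment can be sanity-checked with rough arithmetic: model weights take about 4 bytes per parameter in float32 and 2 in float16, so halving the dtype roughly halves weight memory. A minimal sketch, counting weights only and ignoring activations, the KV cache, and framework overhead (`weight_gib` is an illustrative helper, not a transformers API):

```python
# Back-of-the-envelope weight-memory estimate for a 7B-parameter model.
# Weights only; activations, KV cache, and CUDA overhead are ignored.
N_PARAMS = 7_000_000_000

def weight_gib(n_params: int, bytes_per_param: int) -> float:
    """Approximate memory needed for the weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

fp32 = weight_gib(N_PARAMS, 4)  # float32: 4 bytes per parameter
fp16 = weight_gib(N_PARAMS, 2)  # float16: 2 bytes per parameter
print(f"float32: ~{fp32:.0f} GiB, float16: ~{fp16:.0f} GiB")  # ~26 vs ~13 GiB
```

This is why the float32 weights alone can exhaust a 16 GB or 24 GB GPU that holds the float16 weights comfortably.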
 
 ### 通过 Transformers 加载
 通过以下的代码加载 InternLM 7B Chat 模型
 ```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
+tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
+# `torch_dtype=torch.float16` 可以令模型以 float16 精度加载,否则 transformers 会将模型加载为 float32,有可能导致显存不足
+model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
+model = model.eval()
+inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
+for k,v in inputs.items():
+    inputs[k] = v.cuda()
+gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
+output = model.generate(**inputs, **gen_kwargs)
+output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
+print(output)
+# 来到美丽的大自然,我们发现各种各样的花千奇百怪。有的颜色鲜艳亮丽,使人感觉生机勃勃;有的是红色的花瓣儿粉嫩嫩的像少女害羞的脸庞一样让人爱不释手.有的小巧玲珑; 还有的花瓣粗大看似枯黄实则暗藏玄机!
+# 不同的花卉有不同的“脾气”,它们都有着属于自己的故事和人生道理.这些鲜花都是大自然中最为原始的物种,每一朵都绽放出别样的美令人陶醉、着迷!
 ```
 
 ## 开源许可证
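The `gen_kwargs` used in both snippets switch `generate` from greedy decoding to sampling: `temperature` sharpens or flattens the next-token distribution, `top_p` (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches 0.8, and `repetition_penalty` discourages tokens that have already appeared. A minimal pure-Python sketch of those three transforms on made-up toy logits (illustrative only; the actual transformers implementation differs in details):

```python
import math

def apply_repetition_penalty(logits, seen_ids, penalty=1.1):
    # CTRL-style penalty: shrink positive logits of already-seen tokens
    # and amplify negative ones, making repeats less likely.
    out = list(logits)
    for i in seen_ids:
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

def top_p_probs(logits, temperature=0.8, top_p=0.8):
    # Temperature-scale, softmax, then keep the smallest set of tokens
    # whose cumulative probability reaches top_p; renormalize the survivors.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}   # sampling distribution

toy_logits = [2.0, 1.0, 0.5, -1.0]             # hypothetical 4-token vocab
toy_logits = apply_repetition_penalty(toy_logits, seen_ids=[0], penalty=1.1)
dist = top_p_probs(toy_logits, temperature=0.8, top_p=0.8)
print(dist)  # surviving token ids with renormalized probabilities
```

With these toy logits, the two lowest-probability tokens fall outside the 0.8 nucleus and are never sampled, which is how `top_p` trims unlikely continuations while `do_sample=True` keeps the output non-deterministic.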