---
license: apache-2.0
---

### Inference with Hugging Face's Transformers
You can use [Hugging Face's Transformers](https://github.com/huggingface/transformers) directly for model inference.

#### Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

# The base model continues raw text, so a plain comment works as the prompt.
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
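
To print tokens as they are generated instead of waiting for the full completion, you can attach Transformers' `TextStreamer` to `generate`. A minimal sketch, reusing the `model` and `tokenizer` from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced, skipping
# the echoed prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer("#write a quick sort algorithm", return_tensors="pt").to(model.device)
model.generate(**inputs, max_length=128, streamer=streamer)
```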

#### Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

# Fill-in-the-middle prompt: the model generates the code that belongs
# at <|fim▁hole|>, conditioned on the surrounding prefix and suffix.
input_text = """<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
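
The three sentinels mark the prefix, the hole to fill, and the suffix. If you assemble such prompts programmatically, a small helper keeps them consistent; `build_fim_prompt` below is a hypothetical convenience wrapper (not part of the model or Transformers), built only from the tokens shown above and reusing the `model` and `tokenizer` from the snippet:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Hypothetical helper: the model generates the code that belongs
    # where <|fim▁hole|> sits, conditioned on the prefix and suffix.
    return f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

# Illustrative example: ask the model to fill in the function body.
prompt = build_fim_prompt(
    prefix="def add(a, b):\n",
    suffix="\n    return result\n",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```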

#### Chat Completion

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages = [
    {"role": "user", "content": "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of the <|end▁of▁sentence|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
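
For multi-turn use, feed the assistant's reply back into `messages` together with the next user turn and re-apply the chat template. A minimal sketch continuing the snippet above (the follow-up prompt is illustrative):

```python
# Append the assistant's reply, then ask a follow-up question.
reply = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Now make it sort in descending order."})
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```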

The complete chat template can be found in `tokenizer_config.json` in the Hugging Face model repository.

An example of the chat template is as follows:

```bash
<|begin▁of▁sentence|>User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
```

You can also add an optional system message:

```bash
<|begin▁of▁sentence|>{system_message}

User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
```
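
Rather than writing these template strings by hand, you can have the tokenizer render them: `apply_chat_template` with `tokenize=False` returns the formatted prompt as a string, which is a convenient way to check what the model actually sees. A minimal sketch (the messages are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "write a quick sort algorithm in python."},
]
# tokenize=False returns the rendered prompt string instead of token ids,
# so you can inspect exactly how the template lays out the turns.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```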

### Inference with vLLM (recommended)
To use [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "write a quick sort algorithm in python."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

# Render each conversation with the chat template, then batch the token ids.
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
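
Once your vLLM build contains the Pull Request above, the model should also be reachable through vLLM's OpenAI-compatible server. The sketch below assumes such a server is already running locally on the default port 8000 with this model loaded (and `--trust-remote-code` enabled); the `api_key` value is a placeholder, which vLLM does not check by default:

```python
from openai import OpenAI

# Point the standard OpenAI client at the assumed local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```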