4n3mone committed
Commit 25227b5
1 Parent(s): b99cc5c

Update README.md

Files changed (1):
  1. README.md +36 -35
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 library_name: transformers
-tags: []
+language:
+- ko
 ---
 
 # Model Card for Model ID
@@ -17,61 +18,61 @@ readme coming soon
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Developed by:** 4n3mone (YongSang Yoo)
+- **Model type:** chatglm
+- **Language(s) (NLP):** Korean
+- **License:** glm-4
+- **Finetuned from model [optional]:** THUDM/glm-4-9b-chat
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
+- **Repository:** THUDM/glm-4-9b-chat
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
-## Uses
-
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-### Downstream Use [optional]
-
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
-## How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-[More Information Needed]
+## How to Get Started with the Model
+
+Use the code below to get started with the model.
+
+```python
+from transformers import AutoTokenizer
+from vllm import LLM, SamplingParams
+
+# GLM-4-9B-Chat
+# If you encounter OOM (Out of Memory) issues, it is recommended to reduce max_model_len or increase tp_size.
+max_model_len, tp_size = 131072, 1
+model_name = "4n3mone/glm-4-ko-9b-chat-preview"
+prompt = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]  # "Who is cuter, Pikachu or Agumon?"
+
+tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+llm = LLM(
+    model=model_name,
+    tensor_parallel_size=tp_size,
+    max_model_len=max_model_len,
+    trust_remote_code=True,
+    enforce_eager=True,
+    # If you encounter OOM (Out of Memory) issues, it is recommended to enable the following parameters.
+    # enable_chunked_prefill=True,
+    # max_num_batched_tokens=8192
+)
+stop_token_ids = [151329, 151336, 151338]
+sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)
+
+inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
+outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)
+
+print(outputs[0].outputs[0].text)
+```
 
 ## Training Details
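The `stop_token_ids` passed to `SamplingParams` in the updated README are GLM-4's special end-of-turn token ids: vLLM halts decoding as soon as one of them is produced, so the stop token never appears in the returned text. A minimal, self-contained sketch of that behavior (an illustration only, not vLLM's actual implementation):

```python
# Sketch of the effect of SamplingParams(stop_token_ids=[...]):
# a generated token-id sequence is truncated at the first stop token.
def truncate_at_stop(token_ids, stop_token_ids):
    stops = set(stop_token_ids)
    for i, tok in enumerate(token_ids):
        if tok in stops:
            return token_ids[:i]  # drop the stop token and everything after it
    return token_ids

# 151336 is one of the GLM-4 stop token ids listed in the model card.
print(truncate_at_stop([1001, 1002, 151336, 1003], [151329, 151336, 151338]))  # → [1001, 1002]
```

If none of the stop ids occurs, generation instead runs until `max_tokens` (1024 in the README's example) is reached.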