---
language:
- ko
library_name: transformers
---
# Model Card for glm-4-ko-9b-chat

README coming soon.
## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** 4n3mone (YongSang Yoo)
- **Model type:** chatglm
- **Language(s) (NLP):** Korean
- **License:** glm-4
- **Finetuned from model [optional]:** [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat)
### Model Sources [optional]

- **Repository:** [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat)
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## How to Get Started with the Model

Use the code below to get started with the model; this example serves it with vLLM.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# GLM-4-9B-Chat
# If you encounter OOM (Out of Memory) issues, reduce max_model_len or increase tp_size.
max_model_len, tp_size = 131072, 1
model_name = "4n3mone/glm-4-ko-9b-chat"
prompt = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]  # "Who is cuter, Pikachu or Agumon?"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
    model=model_name,
    tensor_parallel_size=tp_size,
    max_model_len=max_model_len,
    trust_remote_code=True,
    enforce_eager=True,
    # If you encounter OOM (Out of Memory) issues, enable the following parameters.
    # enable_chunked_prefill=True,
    # max_num_batched_tokens=8192,
)

# GLM-4 stop token IDs, taken from the base model's generation config.
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)

# Render the chat template to a plain string, then generate and print the completion.
inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
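If you prefer plain 🤗 transformers over vLLM, the following is a minimal sketch of the same chat flow. It assumes the checkpoint loads through `AutoModelForCausalLM` with `trust_remote_code=True`, like the base THUDM/glm-4-9b-chat model; the dtype and device settings are illustrative choices, not part of this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this checkpoint exposes the same remote code as THUDM/glm-4-9b-chat.
model_name = "4n3mone/glm-4-ko-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # illustrative; adjust to your hardware
    device_map="auto",
    trust_remote_code=True,
).eval()

prompt = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]
inputs = tokenizer.apply_chat_template(
    prompt, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.95)

# Strip the prompt tokens before decoding the reply.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```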
## LogicKor Benchmark (1-shot)
| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning (추론) | 6.00 | 5.57 |
| Math (수학) | 5.71 | 3.00 |
| Coding (코딩) | 6.00 | 5.71 |
| Understanding (이해) | 7.71 | 8.71 |
| Writing (글쓰기) | 8.86 | 7.57 |
| Grammar (문법) | 2.86 | 3.86 |
| Category | Score |
|---|---|
| Single turn | 6.19 |
| Multi turn | 5.74 |
| Overall | 5.96 |
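The summary scores are unweighted means: each per-turn score averages the six category scores, and Overall averages all twelve cells. The small script below (illustrative only, not part of the original evaluation) reproduces them:

```python
# Recompute the LogicKor summary scores from the per-category table above.
single = [6.00, 5.71, 6.00, 7.71, 8.86, 2.86]
multi = [5.57, 3.00, 5.71, 8.71, 7.57, 3.86]

print(f"Single turn: {sum(single) / len(single):.2f}")  # 6.19
print(f"Multi turn:  {sum(multi) / len(multi):.2f}")    # 5.74
print(f"Overall:     {sum(single + multi) / 12:.2f}")   # 5.96
```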
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]