|
--- |
|
language: |
|
- ko |
|
library_name: transformers |
|
--- |
|
|
|
# Model Card for glm-4-ko-9b-chat
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
glm-4-ko-9b-chat is a Korean chat model fine-tuned from THUDM/glm-4-9b-chat. A full README is coming soon.
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
|
|
|
- **Developed by:** 4n3mone (YongSang Yoo) |
|
- **Model type:** chatglm |
|
- **Language(s) (NLP):** Korean |
|
- **License:** glm-4 |
|
- **Finetuned from model [optional]:** THUDM/glm-4-9b-chat |
|
|
|
### Model Sources [optional] |
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
- **Repository (base model):** [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat)
|
- **Paper [optional]:** [More Information Needed] |
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# GLM-4-9B-Chat (Korean fine-tune)
# If you encounter OOM (out of memory) issues, reduce max_model_len or increase tp_size.
max_model_len, tp_size = 131072, 1
model_name = "4n3mone/glm-4-ko-9b-chat"
prompt = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]  # "Who is cuter, Pikachu or Agumon?"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
    model=model_name,
    tensor_parallel_size=tp_size,
    max_model_len=max_model_len,
    trust_remote_code=True,
    enforce_eager=True,
    # If you encounter OOM issues, enable the following parameters.
    # enable_chunked_prefill=True,
    # max_num_batched_tokens=8192,
)
stop_token_ids = [151329, 151336, 151338]  # GLM-4 special tokens used as stop tokens
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)

inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
```
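
If you prefer plain 🤗 transformers over vLLM, a minimal sketch along the lines of the upstream GLM-4 usage could look like the following. The device, dtype, and sampling settings below are illustrative assumptions, not tested values for this checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # assumption: a single GPU with enough memory for the 9B model in bf16
model_name = "4n3mone/glm-4-ko-9b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to(device).eval()

messages = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]  # "Who is cuter, Pikachu or Agumon?"
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.95)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]  # keep only the newly generated tokens
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```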
|
|
|
## LogicKor Benchmark (1-shot)
|
| Category | Single turn | Multi turn |
|---|---|---|
| 추론 (Reasoning) | 6.00 | 5.57 |
| 수학 (Math) | 5.71 | 3.00 |
| 코딩 (Coding) | 6.00 | 5.71 |
| 이해 (Understanding) | 7.71 | 8.71 |
| 글쓰기 (Writing) | 8.86 | 7.57 |
| 문법 (Grammar) | 2.86 | 3.86 |
|
|
|
| Category | Score |
|---|---|
| Single turn | 6.19 |
| Multi turn | 5.74 |
| Overall | 5.96 |
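
The summary scores appear to be simple means of the per-category results: for example, the single-turn score is (6.00 + 5.71 + 6.00 + 7.71 + 8.86 + 2.86) / 6 ≈ 6.19, and Overall is the mean of the single-turn and multi-turn scores, (6.19 + 5.74) / 2 ≈ 5.96.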
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
[More Information Needed] |
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
#### Preprocessing [optional] |
|
|
|
[More Information Needed] |
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> |
|
|
|
#### Speeds, Sizes, Times [optional] |
|
|
|
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> |
|
|
|
[More Information Needed] |
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
[More Information Needed] |
|
|
|
#### Factors |
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
[More Information Needed] |
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
[More Information Needed] |
|
|
|
### Results |
|
|
|
[More Information Needed] |
|
|
|
#### Summary |
|
|
|
|
|
|
|
## Model Examination [optional] |
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
[More Information Needed] |
|
|
|
## Environmental Impact |
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** [More Information Needed] |
|
- **Hours used:** [More Information Needed] |
|
- **Cloud Provider:** [More Information Needed] |
|
- **Compute Region:** [More Information Needed] |
|
- **Carbon Emitted:** [More Information Needed] |
|
|
|
## Technical Specifications [optional] |
|
|
|
### Model Architecture and Objective |
|
|
|
[More Information Needed] |
|
|
|
### Compute Infrastructure |
|
|
|
[More Information Needed] |
|
|
|
#### Hardware |
|
|
|
[More Information Needed] |
|
|
|
#### Software |
|
|
|
[More Information Needed] |
|
|
|
## Citation [optional] |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
**APA:** |
|
|
|
[More Information Needed] |
|
|
|
## Glossary [optional] |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> |
|
|
|
[More Information Needed] |
|
|
|
## More Information [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Model Card Authors [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Model Card Contact |
|
|
|
[More Information Needed] |