Update README.md
README.md
CHANGED
@@ -17,7 +17,7 @@ language:
 - **Model Optimizations:**
   - **Weight quantization:** FP8
   - **Activation quantization:** FP8
-- **Intended Use Cases:** Intended for commercial and research use in
+- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
 - **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
 - **Release Date:** 7/23/2024
 - **Version:** 1.0
@@ -132,6 +132,7 @@ oneshot(
 
 The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command.
 A modified version of ARC-C and GSM8k-cot was used for evaluations, in line with Llama 3.1's prompting. It can be accessed on the [Neural Magic fork of the lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct).
+Additional evaluations that were collected for the original Llama 3.1 models will be added in the future.
 ```
 lm_eval \
 --model vllm \
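For context on the "Weight quantization: FP8" and "Activation quantization: FP8" bullets in the diff above: FP8 quantization of this kind typically maps each tensor to the E4M3 format (4 exponent bits, 3 mantissa bits, max finite value 448) using a per-tensor scale. The sketch below is a minimal, dependency-free simulation of that round-trip; it is an illustration of the general scheme, not the actual `llmcompressor`/`oneshot` implementation referenced in the README (subnormals and NaN handling are omitted).

```python
import math

FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3


def quantize_e4m3(x: float, scale: float) -> float:
    """Round x/scale to the nearest E4M3 value (1 implicit + 3 mantissa bits)."""
    v = x / scale
    v = max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v))  # saturate to the FP8 range
    if v == 0.0:
        return 0.0
    m, e = math.frexp(abs(v))  # abs(v) == m * 2**e with 0.5 <= m < 1
    # Keep 4 bits of mantissa precision (1 implicit + 3 stored bits).
    q = round(m * 16) / 16 * (2.0 ** e)
    return math.copysign(q, v)


def fp8_round_trip(weights: list[float]) -> list[float]:
    """Per-tensor scaled FP8 quantize/dequantize, as in FP8 weight quantization."""
    scale = max(abs(w) for w in weights) / FP8_E4M3_MAX
    return [quantize_e4m3(w, scale) * scale for w in weights]
```

Values whose mantissa already fits in 4 bits survive the round-trip exactly (e.g. powers of two, or 448 itself), while others snap to the nearest representable step; this bounded relative error is why FP8 checkpoints stay close to their BF16 originals on the leaderboard tasks listed above.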