Update README.md
README.md
CHANGED
@@ -9,7 +9,6 @@ datasets:
 
 DeciLM-7B-instruct is a model for short-form instruction following. It is built by LoRA fine-tuning on the [SlimOrca dataset](https://huggingface.co/datasets/Open-Orca/SlimOrca).
 
-### 🔥 Click [here](https://console.deci.ai/infery-llm-demo) for a live demo of DeciLM-7B + Infery!
 
 ## Model Details
 
@@ -127,7 +126,7 @@ Below are DeciLM-7B and DeciLM-7B-instruct's evaluation results.
 | Infery-LLM | A10 | 2048 | 2048 | **599** | 32 | 128 |
 
 - In order to replicate the results of the Hugging Face benchmarks, you can use this [code example](https://huggingface.co/Deci/DeciLM-7B/blob/main/benchmark_hf_model.py).
-- Infery-LLM, Deci's inference engine, features a suite of optimization algorithms, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To
+- Infery-LLM, Deci's inference engine, features a suite of optimization algorithms, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To explore the full capabilities of Infery-LLM, [schedule a live demo](https://deci.ai/infery-llm-book-a-demo/?utm_campaign=DeciLM%207B%20Launch&utm_source=HF&utm_medium=decilm7b-model-card&utm_term=infery-demo).
 
 ## Ethical Considerations and Limitations
 
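For context on the model description in the first hunk, here is a minimal sketch of loading DeciLM-7B-instruct with Hugging Face Transformers and generating a short instruction-following response. It is not the model card's official example: the generation settings are illustrative, `trust_remote_code=True` is assumed to be needed because the checkpoint ships custom modeling code, and the plain prompt stands in for whatever prompt format the card actually prescribes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Deci/DeciLM-7B-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 keeps the 7B weights within a single 24 GB GPU
    device_map="auto",            # requires the `accelerate` package
    trust_remote_code=True,       # assumed: DeciLM uses custom modeling code hosted on the Hub
)

prompt = "How do I make the most delicious pancakes the world has ever tasted?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```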
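On the benchmark-replication bullet in the second hunk: the linked benchmark_hf_model.py is the authoritative script and its settings are not reproduced here. Below is only a rough sketch of the kind of measurement such a script performs, generation throughput in tokens per second for a fixed-length batch. The batch size and sequence lengths are placeholders (the table row's 2048 / 2048 columns suggest prompt and generation lengths, but check the script for the real configuration), and a CUDA GPU such as the A10 in the table is assumed.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Deci/DeciLM-7B"  # or "Deci/DeciLM-7B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Placeholder settings; not the configuration behind the reported 599 tokens/sec.
batch_size, prompt_len, new_tokens = 2, 512, 512

# Random fixed-length prompts so every sequence in the batch has the same shape.
input_ids = torch.randint(0, tokenizer.vocab_size, (batch_size, prompt_len), device=model.device)
attention_mask = torch.ones_like(input_ids)

torch.cuda.synchronize()  # assumes a CUDA GPU
start = time.perf_counter()
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_new_tokens=new_tokens,
    min_new_tokens=new_tokens,          # force full-length generation even if EOS appears early
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

generated_tokens = (outputs.shape[1] - prompt_len) * batch_size
print(f"throughput: {generated_tokens / elapsed:.1f} generated tokens/sec")
```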