Instructions for using MrezaPRZ/codestral_high_quality_sft with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MrezaPRZ/codestral_high_quality_sft with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MrezaPRZ/codestral_high_quality_sft")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MrezaPRZ/codestral_high_quality_sft")
model = AutoModelForCausalLM.from_pretrained("MrezaPRZ/codestral_high_quality_sft")
```
- Notebooks
- Google Colab
- Kaggle
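Returning to the Transformers snippet above, here is a hedged sketch of putting the pipeline to work on a chat-style prompt. The `build_messages` helper, the `RUN_MODEL` gate, and `device_map="auto"` are illustrative assumptions, not part of the model card; loading the full checkpoint requires tens of gigabytes of GPU memory, so the load is off by default here.

```python
# Sketch only: build a chat-message list and (optionally) run generation.
MODEL_ID = "MrezaPRZ/codestral_high_quality_sft"

def build_messages(user_prompt):
    """Wrap a user prompt in the chat-message format the pipeline accepts."""
    return [{"role": "user", "content": user_prompt}]

RUN_MODEL = False  # set True on a machine with enough GPU memory

if RUN_MODEL:
    from transformers import pipeline
    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    result = pipe(build_messages("Write a function that reverses a string."),
                  max_new_tokens=256)
    print(result[0]["generated_text"])
```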
- Local Apps
- vLLM
How to use MrezaPRZ/codestral_high_quality_sft with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MrezaPRZ/codestral_high_quality_sft"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MrezaPRZ/codestral_high_quality_sft",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/MrezaPRZ/codestral_high_quality_sft
```
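Beyond curl, the same OpenAI-compatible endpoint can be called from Python. Below is a minimal stdlib-only sketch; the helper names and the `localhost:8000` URL are assumptions (it is the default `vllm serve` port), and the server started above must be running for the request to succeed.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/v1/completions"  # default `vllm serve` port

def build_completion_request(prompt,
                             model="MrezaPRZ/codestral_high_quality_sft",
                             max_tokens=512, temperature=0.5):
    """Mirror the JSON body from the curl example above."""
    return {"model": model, "prompt": prompt,
            "max_tokens": max_tokens, "temperature": temperature}

def complete(prompt, url=API_URL):
    """POST to the completions endpoint and return the generated text."""
    data = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

if __name__ == "__main__":
    try:
        print(complete("Once upon a time,"))
    except OSError:
        print("Server not reachable; start it with `vllm serve` as shown above.")
```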
- SGLang
How to use MrezaPRZ/codestral_high_quality_sft with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MrezaPRZ/codestral_high_quality_sft" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MrezaPRZ/codestral_high_quality_sft",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MrezaPRZ/codestral_high_quality_sft" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MrezaPRZ/codestral_high_quality_sft",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use MrezaPRZ/codestral_high_quality_sft with Docker Model Runner:
```shell
docker model run hf.co/MrezaPRZ/codestral_high_quality_sft
```
Upload MistralForCausalLM
- model-00001-of-00009.safetensors +1 -1
- model-00002-of-00009.safetensors +1 -1
- model-00003-of-00009.safetensors +1 -1
- model-00004-of-00009.safetensors +1 -1
- model-00005-of-00009.safetensors +1 -1
- model-00006-of-00009.safetensors +1 -1
- model-00007-of-00009.safetensors +1 -1
- model-00008-of-00009.safetensors +1 -1
- model-00009-of-00009.safetensors +1 -1
model-00001-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a1a814cc6433dd592de2e022d717b17c1b42f48b1c589662983d08acbb7ff102
 size 4882298776
```
model-00002-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3bd95d97dd935f969c4ab6c66d9a7406ed2a9fe5b2919bf61d0879bc8efb11cd
 size 4983012160
```
model-00003-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1c3816e614655fb9ad3052f8ff96571f7bf36203193d36b4ae49f508b1972331
 size 4957821336
```
model-00004-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:f3067565003424ccd6994739acea2d42c3d0d74304d6fc63a19927fc2ada665c
 size 4882323744
```
model-00005-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8f93364844502ebccb02ab2455cd09fb27c07425a317f4e55c46ab0cb62d93e9
 size 4983012192
```
model-00006-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ddc603fecf9ab4506d8243fc076167c0ee8b6fa7b530c76984f045dd887e3bdd
 size 4957821336
```
model-00007-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a1dddfe179ba07aae143fc9fa71e63e49f20cb22cd04a81e3717bf61e4f39b7c
 size 4882323744
```
model-00008-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a0481d4e590d7cd44e585534a881a4ba7081cf8586e95b0211040262942f86cc
 size 4983012192
```
model-00009-of-00009.safetensors (CHANGED):
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:75bd731eed514666890ad42df1d94548126577dff2537f96b5a75801f295b3d5
 size 4982999056
```