Tags:

- Text Generation
- Transformers
- Safetensors
- qwen2
- llama-factory
- full
- Generated from Trainer
- conversational
- text-generation-inference
Instructions to use mlfoundations-dev/ds_no_offload_liger_packing_zero2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mlfoundations-dev/ds_no_offload_liger_packing_zero2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="mlfoundations-dev/ds_no_offload_liger_packing_zero2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mlfoundations-dev/ds_no_offload_liger_packing_zero2")
model = AutoModelForCausalLM.from_pretrained("mlfoundations-dev/ds_no_offload_liger_packing_zero2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use mlfoundations-dev/ds_no_offload_liger_packing_zero2 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "mlfoundations-dev/ds_no_offload_liger_packing_zero2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mlfoundations-dev/ds_no_offload_liger_packing_zero2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker:
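The curl request above can also be issued from Python. A minimal sketch using only the standard library, assuming the vLLM server from the previous step is running on localhost:8000 (the helper names `build_chat_request` and `chat` are illustrative, not part of any library):

```python
import json
from urllib import request

def build_chat_request(model: str, user_message: str) -> dict:
    # Build an OpenAI-compatible chat-completions payload.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(base_url: str, payload: dict) -> dict:
    # POST the payload to the /v1/chat/completions endpoint.
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request(
    "mlfoundations-dev/ds_no_offload_liger_packing_zero2",
    "What is the capital of France?",
)
# chat("http://localhost:8000", payload)  # requires a running vLLM server
```

The same payload shape works against the SGLang server below; only the port changes.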
```shell
docker model run hf.co/mlfoundations-dev/ds_no_offload_liger_packing_zero2
```
- SGLang
How to use mlfoundations-dev/ds_no_offload_liger_packing_zero2 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "mlfoundations-dev/ds_no_offload_liger_packing_zero2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mlfoundations-dev/ds_no_offload_liger_packing_zero2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "mlfoundations-dev/ds_no_offload_liger_packing_zero2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "mlfoundations-dev/ds_no_offload_liger_packing_zero2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use mlfoundations-dev/ds_no_offload_liger_packing_zero2 with Docker Model Runner:
```shell
docker model run hf.co/mlfoundations-dev/ds_no_offload_liger_packing_zero2
```
Training log (3 epochs, 18 optimizer steps):

| Step | Loss | Learning rate | Epoch | Progress | Elapsed | Remaining |
|------|------|---------------|-------|----------|---------|-----------|
| 1/18 | 1.2885 | 5.0000e-06 | 0.17 | 5.56% | 0:01:14 | 0:21:09 |
| 2/18 | 1.3273 | 1.0000e-05 | 0.33 | 11.11% | 0:02:18 | 0:18:31 |
| 3/18 | 1.2085 | 9.9039e-06 | 0.50 | 16.67% | 0:03:23 | 0:16:55 |
| 4/18 | 1.1297 | 9.6194e-06 | 0.67 | 22.22% | 0:04:27 | 0:15:35 |
| 5/18 | 1.1569 | 9.1573e-06 | 0.83 | 27.78% | 0:05:31 | 0:14:22 |
| 6/18 | 1.0759 | 8.5355e-06 | 1.00 | 33.33% | 0:06:26 | 0:12:52 |
| 7/18 | 1.0370 | 7.7779e-06 | 1.17 | 38.89% | 0:08:45 | 0:13:45 |
| 8/18 | 1.0009 | 6.9134e-06 | 1.33 | 44.44% | 0:09:49 | 0:12:16 |
| 9/18 | 0.9742 | 5.9755e-06 | 1.50 | 50.00% | 0:10:53 | 0:10:53 |
| 10/18 | 0.9315 | 5.0000e-06 | 1.67 | 55.56% | 0:11:57 | 0:09:33 |
| 11/18 | 0.9095 | 4.0245e-06 | 1.83 | 61.11% | 0:13:01 | 0:08:17 |
| 12/18 | 0.9142 | 3.0866e-06 | 2.00 | 66.67% | 0:13:55 | 0:06:57 |
| 13/18 | 0.8895 | 2.2221e-06 | 2.17 | 72.22% | 0:16:17 | 0:06:15 |
| 14/18 | 0.8700 | 1.4645e-06 | 2.33 | 77.78% | 0:17:21 | 0:04:57 |
| 15/18 | 0.8943 | 8.4265e-07 | 2.50 | 83.33% | 0:18:25 | 0:03:41 |
| 16/18 | 0.8858 | 3.8060e-07 | 2.67 | 88.89% | 0:19:29 | 0:02:26 |
| 17/18 | 0.8833 | 9.6074e-08 | 2.83 | 94.44% | 0:20:33 | 0:01:12 |
| 18/18 | 0.8752 | 0.0 | 3.00 | 100.00% | 0:21:28 | 0:00:00 |
| 18/18 (end) | — | — | 3.00 | 100.00% | 0:23:48 | 0:00:00 |
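The learning-rate column follows a standard warmup-plus-cosine-decay shape: linear warmup to a peak of 1e-5 over the first 2 steps, then cosine decay to zero over the remaining 16. A minimal sketch that reproduces the logged values (the peak, warmup, and total step counts are inferred from the log itself, not taken from the training config):

```python
import math

PEAK_LR = 1e-5  # inferred peak (the lr logged at step 2)
WARMUP = 2      # inferred warmup steps
TOTAL = 18      # total optimizer steps in the log

def lr_at(step: int) -> float:
    """Learning rate at a given 1-indexed optimizer step."""
    if step <= WARMUP:
        # Linear warmup: matches the 5e-06 logged at step 1.
        return PEAK_LR * step / WARMUP
    # Cosine decay from the peak down to zero at the final step.
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

schedule = [lr_at(s) for s in range(1, TOTAL + 1)]
```

Evaluating `lr_at` at steps 3 and 9 recovers the 9.9039e-06 and 5.9755e-06 values in the table above.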