
Safetensors conversion (FP32) of Xwin-LM-70B-V0.1 (https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1/tree/main), for direct use with transformers or for quantization with exllamav2.

An FP16 conversion by firelzrd is available at https://huggingface.co/firelzrd/Xwin-LM-70B-V0.1-fp16-safetensors
