This is a 6-bit EXL2 quantization of Aurelian v0.1alpha 70B 32K, published for testing & feedback. See the original model page for more details.
This quantization fits in 48GB+24GB of VRAM (36/24 GB split) or 3x24GB (16/17/20 GB split) using Exllamav2 @ 32k context.
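A minimal sketch of loading this quantization with the ExLlamaV2 Python API and a manual GPU split, as described above. The model directory path is a placeholder, and the exact memory figures are taken from the split above; treat this as an assumption-laden example, not a verified recipe (loading requires the model weights and one or more suitable GPUs).

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

# Placeholder path to the downloaded EXL2 model directory (hypothetical).
MODEL_DIR = "/path/to/aurelian-v0.1alpha-70b-exl2-6bpw"

config = ExLlamaV2Config()
config.model_dir = MODEL_DIR
config.prepare()
config.max_seq_len = 32768  # 32k context, per the model card

model = ExLlamaV2(config)
# Manual split in GB per GPU: 36/24 for a 48GB+24GB pair,
# or [16, 17, 20] for a 3x24GB setup (values from the card above).
model.load(gpu_split=[36, 24])

cache = ExLlamaV2Cache(model)
tokenizer = ExLlamaV2Tokenizer(config)
```

With three 24GB cards, swap the split for `gpu_split=[16, 17, 20]`; the uneven split leaves headroom on the later GPUs for the KV cache at full context.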