Configuration Parsing
Warning: in config.json, "quantization_config.bits" must be an integer.
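If you want to check locally whether your config trips this warning, here's a minimal sketch (assuming a standard config.json in the working directory; the field path quantization_config.bits is taken from the warning above, and fractional bit-widths from some quantization tools are a common cause):

```python
import json

# Minimal sketch: check that quantization_config.bits in config.json is an
# integer. A float value (e.g. 5.0 written by some quantization tools) is a
# typical reason for the parsing warning shown above.
with open("config.json") as f:
    config = json.load(f)

bits = config.get("quantization_config", {}).get("bits")
if bits is not None and not isinstance(bits, int):
    print(f"quantization_config.bits is {bits!r} ({type(bits).__name__}); "
          "an integer is expected.")
```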
MoE'd up:
Which were the two most interesting Llama 3 finetunes so far. The resulting model seems OK; it's not on Miqu's level, though.
Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.