
miiqu-105b-v1.0

Developed by Infinimol AI GmbH

Also Available: two quantized versions, listed in the model tree for Infinimol/miiqu-f16.

The model takes 8th place on EQ-Bench, ahead of Qwen1.5-72B-Chat, miqudev/miqu-1-70b, mistral-medium, and claude-3-sonnet-20240229, all without fine-tuning or additional training.

Thanks to turboderp, silphendio, sqrkl, and ngxson for their support!

Model Details

  • Max Context: 32768 tokens
  • Layers: 105
  • Parameters: 90.4B (safetensors, FP16)
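
For reference, a minimal loading sketch with Hugging Face transformers, assuming the Infinimol/miiqu-f16 repository id and enough memory for a ~90B-parameter FP16 checkpoint; treat it as an illustration rather than an official recipe.

```python
# Minimal loading sketch (not an official recipe). Assumes the transformers
# and accelerate libraries, the Infinimol/miiqu-f16 repository id, and
# sufficient memory for a ~90B-parameter FP16 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Infinimol/miiqu-f16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the FP16 tensor type of the release
    device_map="auto",          # spread layers across available devices
)
```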

Prompt template: ChatML or Mistral

ChatML:

<|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n

Mistral:

[INST] <|user|><|user-message|>[/INST]<|bot|><|bot-message|></s>
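
As a worked example of the ChatML template above, a small sketch that renders a prompt by hand; it assumes <|user|> and <|bot|> stand for the role names "user" and "assistant", and <|user-message|> / <|bot-message|> for the corresponding message text.

```python
# Minimal sketch of the ChatML template above. Assumes <|user|> / <|bot|>
# map to the role names "user" and "assistant".

def chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt and
    leave an open assistant header for the model to complete."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

print(chatml_prompt([{"role": "user", "content": "Hello, miiqu!"}]))
# <|im_start|>user
# Hello, miiqu!<|im_end|>
# <|im_start|>assistant
```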
