
Test model, still under testing.

Merge recipe (mergekit-moe configuration):

```yaml
base_model: /content/InfinityRP
gate_mode: random # router gates initialized randomly (positive_prompts are not used in this mode)
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts_per_token: 2 # optional
experts:
  - source_model: /content/WestLake
    positive_prompts: []
  - source_model: /content/Kuno
    positive_prompts: []
  - source_model: /content/InfinityRP
    positive_prompts: []
  - source_model: /content/LemonadeRP
    positive_prompts: []
```
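
A minimal loading sketch with 🤗 Transformers, assuming the merged checkpoint behaves like a standard Mixtral-style MoE causal LM; the repo id below is a placeholder, not this repository's actual name:

```python
# Loading sketch (assumption: standard Mixtral-style MoE checkpoint).
# Replace "your-username/this-model" with the real repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/this-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the recipe above
    device_map="auto",           # requires `accelerate`
)

prompt = "Write a short scene between two rival adventurers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```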