
# BigWeave v18 108b

The BigWeave models are experiments to identify merge settings that increase model performance. The version number merely tracks successive attempts and is not a quality indicator; only merges that demonstrate good performance are retained and shared.

## Prompting Format

Mistral, Vicuna and Alpaca.
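
For reference, the standard templates for these formats look as follows (`{prompt}` is a placeholder; the card does not say which works best for this merge, so treat these as starting points):

```
# Mistral
[INST] {prompt} [/INST]

# Vicuna
USER: {prompt}
ASSISTANT:

# Alpaca
### Instruction:
{prompt}

### Response:
```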

## Merge process

This is a self-merge of 152334H/miqu-1-70b-sf. The most relevant layers are identified via exl2 measurements, and the most important ones are then extended with the layers between them to form longer runs of consecutive layers.

Merge configuration:

```yaml
slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,5]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [1,9]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [5,33]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [16,51]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [34,77]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [75,79]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [77,80]
merge_method: passthrough
dtype: float16
```
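
A minimal sketch of how a config like this is typically applied with mergekit (the config filename and output path are illustrative; disk and memory requirements for a 108B float16 merge are substantial):

```sh
# Install mergekit, then run the passthrough merge defined above.
pip install mergekit
mergekit-yaml config.yaml ./BigWeave-v18-108b
```

Since `merge_method: passthrough` stacks the listed slices without blending weights, the overlapping layer ranges are what duplicate layers, growing the model from the base 80 layers to 126 layers (~108B parameters).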