---
license: apache-2.0
base_model: v2ray/Mixtral-8x22B-v0.1
inference: false
model_creator: MaziyarPanahi
model_name: Mixtral-8x22B-v0.1-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
  - quantized
  - 2-bit
  - 3-bit
  - 4-bit
  - 5-bit
  - 6-bit
  - 8-bit
  - 16-bit
  - GGUF
  - mixtral
  - moe
---

# Mixtral-8x22B-v0.1-GGUF

Work in progress: quantized files are still being uploaded.

## Load sharded model

`llama_load_model_from_file` detects the number of shards from the filename of the first file and loads the additional tensors from the remaining files, so you only need to pass the first shard:

```sh
main --model Mixtral-8x22B-v0.1.fp16-00001-of-00005.gguf -ngl 64
```
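The shard filenames encode the shard index and the total shard count (`-00001-of-00005`), which is how the loader knows how many sibling files to look for. As a minimal sketch (plain Python, no llama.cpp dependency; the `shard_paths` helper is hypothetical, written here only to illustrate the naming convention), the remaining shard names can be derived from the first one:

```python
import re


def shard_paths(first_shard: str) -> list[str]:
    """Derive all shard filenames from the first shard's name.

    Assumes the sharded-GGUF naming pattern seen above:
    <name>-<index:05d>-of-<total:05d>.gguf
    """
    m = re.search(r"-(\d{5})-of-(\d{5})\.gguf$", first_shard)
    if not m:
        raise ValueError(f"not a sharded GGUF filename: {first_shard!r}")
    total = int(m.group(2))
    prefix = first_shard[: m.start()]
    # Rebuild every shard name, 1-based, zero-padded to five digits.
    return [f"{prefix}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]


paths = shard_paths("Mixtral-8x22B-v0.1.fp16-00001-of-00005.gguf")
```

Before launching, you can check that all five files from `paths` are present in the model directory; a missing shard will cause the load to fail.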