TobDeBer/arco-Q4_K_M-GGUF

This model was converted to big-endian Q4_K_M GGUF format from appvoid/arco using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
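As a sketch, a GGUF quantization like this one can typically be run directly from the Hugging Face repo with llama.cpp's stock binaries; the exact GGUF filename below is an assumption, not taken from this card:

```shell
# Run the quantized model with llama.cpp's CLI (filename is assumed)
llama-cli --hf-repo TobDeBer/arco-Q4_K_M-GGUF --hf-file arco-q4_k_m.gguf -p "Hello"

# Or serve it over HTTP with the bundled server binary
llama-server --hf-repo TobDeBer/arco-Q4_K_M-GGUF --hf-file arco-q4_k_m.gguf -c 2048
```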

Container repository for CPU adaptations of inference code

Variants for Inference

Slim container

  • run standard (CPU-only) binaries
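A slim container of this kind is usually invoked by mounting a model directory and calling the static binary directly; the image and model file names here are illustrative assumptions:

```shell
# Hypothetical image name; the slim container just wraps statically linked binaries
docker run --rm -v "$PWD/models:/models" tobdeber/slim \
  llama-cli -m /models/arco-q4_k_m.gguf -p "Hello"
```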

CPUdiffusion

  • run inference for diffusion models on CPU
  • include the CUDAonCPU stack
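A CPU-only diffusion run would look roughly like the following; the image name, script flags, and paths are all hypothetical placeholders, not documented interfaces of this repo:

```shell
# Illustrative only: run a diffusion inference script on CPU inside the container
docker run --rm -v "$PWD/output:/output" tobdeber/cpudiffusion \
  python app.py --device cpu --prompt "a lighthouse at dusk" --out /output/img.png
```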

Diffusion container

  • run diffusion app.py variants
  • support CPU and CUDA
  • include Flux
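Since this container supports both CPU and CUDA, the same app.py entry point would be launched with or without GPU passthrough; image name and flags below are assumptions for illustration:

```shell
# CPU run (hypothetical image name and flags)
docker run --rm tobdeber/diffusion python app.py --device cpu --prompt "test"

# GPU run: pass the host GPUs through and select CUDA
docker run --rm --gpus all tobdeber/diffusion python app.py --device cuda --prompt "test"
```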

Slim CUDA container

  • run CUDA binaries
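Running CUDA binaries from a container requires GPU passthrough on the host (NVIDIA Container Toolkit); `-ngl` is llama.cpp's real flag for offloading layers to the GPU, while the image and model names are assumptions:

```shell
# GPU passthrough needs the NVIDIA Container Toolkit installed on the host
docker run --rm --gpus all -v "$PWD/models:/models" tobdeber/slim-cuda \
  llama-cli -m /models/arco-q4_k_m.gguf -ngl 99 -p "Hello"
```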

Variants for Build

Llama.cpp build container

  • build llama-cli-static
  • build llama-server-static
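A static llama.cpp build of this shape can be sketched with the project's standard CMake flow; `llama-cli` and `llama-server` are real targets, and `BUILD_SHARED_LIBS=OFF` with `GGML_STATIC=ON` is the usual way to get statically linked binaries (exact flags in this container are not documented here, so treat these as assumptions):

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF -DGGML_STATIC=ON
cmake --build build --target llama-cli llama-server -j
```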

sd build container

  • build sd
  • optional: build sd-server
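Assuming `sd` here refers to the stable-diffusion.cpp binary of the same name, a minimal build sketch would be:

```shell
# Assumption: `sd` is the stable-diffusion.cpp CLI binary
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
cmake -B build
cmake --build build --config Release
# the `sd` binary lands under build/bin
```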

CUDA build container

  • build CUDA binaries
  • support sd_cuda
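For CUDA-enabled builds, llama.cpp's real switch is `GGML_CUDA`; for stable-diffusion.cpp the flag shown is an assumption based on current upstream (older releases used `SD_CUBLAS`):

```shell
# llama.cpp with CUDA enabled (GGML_CUDA is the upstream flag)
cmake -B build -DGGML_CUDA=ON
cmake --build build --target llama-cli -j

# stable-diffusion.cpp with CUDA (flag name assumed; older releases used SD_CUBLAS)
cmake -B build -DSD_CUDA=ON
cmake --build build --config Release
```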
Downloads last month: 30
Format: GGUF
Model size: 514M params
Architecture: llama

Model tree for TobDeBer/myContainers

Base model: appvoid/arco (this model is one of 7 quantized variants)