---
base_model: euclaise/Echo-3B
datasets:
  - pankajmathur/lima_unchained_v1
  - CheshireAI/guanaco-unchained
  - totally-not-an-llm/sharegpt-hyperfiltered-3k
  - totally-not-an-llm/EverythingLM-data-V3
  - LDJnr/Verified-Camel
  - CollectiveCognition/chats-data-2023-10-16
  - Norquinal/claude_multiround_chat_30k
  - euclaise/WritingPromptsX
  - euirim/goodwiki
  - euclaise/MiniCoT
  - euclaise/SciCoT
  - euclaise/symtune_mini
  - euclaise/mathoverflow-accepted
  - lemonilia/LimaRP
inference: false
model_creator: euclaise
model_name: Echo-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# euclaise/Echo-3B-GGUF

Quantized GGUF model files for Echo-3B from euclaise

| Name | Quant method | Size |
|------|--------------|------|
| echo-3b.fp16.gguf | fp16 | 5.59 GB |
| echo-3b.q2_k.gguf | q2_k | 1.20 GB |
| echo-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| echo-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| echo-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| echo-3b.q6_k.gguf | q6_k | 2.30 GB |
| echo-3b.q8_0.gguf | q8_0 | 2.97 GB |
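
The files above can be pulled from the Hub and run with any GGUF-compatible runtime. A minimal sketch, assuming this card lives in a repo published as `afrideva/Echo-3B-GGUF` (per the `quantized_by` field) and that `huggingface_hub` and `llama-cpp-python` are installed:

```python
# Sketch: fetch one quantized file from the Hub and load it with llama-cpp-python.
# The repo_id below is an assumption based on this card's metadata, not confirmed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/Echo-3B-GGUF",   # assumed repo ID (quantized_by: afrideva)
    filename="echo-3b.q4_k_m.gguf",    # 4-bit K-quant, ~1.71 GB per the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write a haiku about echoes.", max_tokens=64)
print(out["choices"][0]["text"])
```

Any other quant in the table can be swapped in by changing `filename`; smaller quants use less memory at some cost in output quality.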

## Original Model Card: