---
license: apache-2.0
language:
- en
metrics:
- accuracy
- code_eval
library_name: transformers
pipeline_tag: text-generation
tags:
- rag
- context obedient
- TroyDoesAI
- Mermaid
- Flow
- Diagram
- Sequence
- Map
- Context
- Accurate
- Summarization
- Story
- Code
- Coder
- Architecture
- Retrieval
- Augmented
- Generation
- AI
- LLM
- Mistral
- LLama
- Large Language Model
- Retrieval Augmented Generation
- Troy Andrew Schultz
- LookingForWork
- OpenForHire
- IdoCoolStuff
- Knowledge Graph
- Knowledge
- Graph
- Accelerator
- Enthusiast
- Chatbot
- Personal Assistant
- Copilot
- lol
- tags
- Pruned
- efficient
- smaller
- small
- local
- open
- source
- open source
- quant
- quantize
- ablated
- Ablation
- uncensored
- unaligned
- bad
- alignment
---

For anyone trying to shoehorn this large model onto their machine, every GB of saved memory counts when offloading to system RAM! This is the 22.2-billion-parameter model pruned by 2 junk layers down to 21.5B, with no apparent loss in quality.

Base model: https://huggingface.co/mistralai/Codestral-22B-v0.1