---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Jellyfish-13B
---

Quantizations of https://huggingface.co/NECOUDBFM/Jellyfish-13B

### Experimental Quants

Quants **ending in "_X"** are experimental. They are identical to the normal quants except that their token embedding weights are set to Q8_0 (or, for Q6_K_X and Q8_0_X, to F16). This makes the experimental quants larger but, ***in theory***, should improve output quality.

List of experimental quants:
* Q2_K_X
* Q4_K_M_X
* Q5_K_M_X
* Q6_K_X
* Q8_0_X

---

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [JanAI](https://github.com/janhq/jan)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [ollama](https://github.com/ollama/ollama)

---

# From original readme

## Model Details

Jellyfish-13B is a large language model with 13 billion parameters. It is tailored specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.

## Prompt Template
```
### Instruction:

(without the <>)

### Response:
```
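
As a sketch, the template above can be filled programmatically before the string is passed to any of the clients listed. The `build_prompt` helper and the entity-matching instruction below are illustrative assumptions, not part of the original model card:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a task instruction in the Jellyfish prompt template."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


# Hypothetical entity-matching task in the style of the model's
# data-preprocessing use cases (entity matching, imputation, etc.)
prompt = build_prompt(
    "Are Product A 'iPhone 13 128GB' and Product B "
    "'Apple iPhone 13 (128 GB)' the same entity? Answer yes or no."
)
print(prompt)
```

The resulting string is what you would submit as the raw prompt in llama.cpp or a compatible client; chat-style front-ends may apply their own template, so check that the client does not double-wrap it.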