---
license: other
language:
  - en
pipeline_tag: text-generation
inference: false
tags:
  - transformers
  - gguf
  - imatrix
  - Jellyfish-13B
---

Quantizations of https://huggingface.co/NECOUDBFM/Jellyfish-13B
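
The individual GGUF files can be fetched programmatically with `huggingface_hub`. The sketch below is illustrative only: the `repo_id` and `filename` values are placeholders to replace with this repository's id and one of the quant files listed under "Files and versions".

```python
# Illustrative download sketch (pip install huggingface_hub).
# repo_id and filename are placeholders, not names taken from this repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<user>/<Jellyfish-13B-GGUF-repo>",  # replace with this repository's id
    filename="Jellyfish-13B-Q4_K_M.gguf",        # replace with the quant file you want
)
print("Downloaded to:", local_path)
```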

## Experiment

Quants ending in "_X" are experimental. They are identical to the normal quants except that their token embedding weights are set to Q8_0 (for Q6_K and Q8_0, the embeddings are set to F16 instead). This change makes the experimental quants larger but should, in theory, result in improved performance. A sketch for checking the embedding type of a downloaded file follows the list below.

List of experimental quants:

- Q2_K_X
- Q4_K_M_X
- Q5_K_M_X
- Q6_K_X
- Q8_0_X

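As a quick way to see this difference, the tensor types inside a downloaded file can be inspected with the `gguf` Python package from the llama.cpp project. This is a minimal sketch under the assumption that the package is installed and the filename is replaced with a real quant from this repository.

```python
# Sketch: check the token-embedding tensor type of a GGUF file (pip install gguf).
# The filename is a placeholder for a quant downloaded from this repository.
from gguf import GGUFReader

reader = GGUFReader("Jellyfish-13B-Q4_K_M_X.gguf")

for tensor in reader.tensors:
    if tensor.name == "token_embd.weight":
        # "_X" quants should report Q8_0 here (F16 for Q6_K_X and Q8_0_X),
        # while the regular quants keep their default embedding type.
        print(tensor.name, tensor.tensor_type.name)
```
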
## Inference Clients/UIs


## From original readme

### Model Details

Jellyfish-13B is a large language model equipped with 13 billion parameters. It's tailored specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.

### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```
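
As an illustration only, a quant from this repository could be run with the template above using the `llama-cpp-python` bindings. The model filename and the entity-matching instruction below are placeholders, not files or examples taken from this repository.

```python
# Minimal inference sketch (pip install llama-cpp-python).
# The GGUF filename is a placeholder for any quant downloaded from this repository.
from llama_cpp import Llama

llm = Llama(model_path="Jellyfish-13B-Q4_K_M.gguf", n_ctx=4096)

# Fill the template: the instruction goes where <prompt> is, without the angle brackets.
instruction = (
    "Determine whether Product A and Product B refer to the same entity. "
    "Product A: [name: 'usb-c cable 2m, black']. "
    "Product B: [name: 'USB C charging cable, 2 meters, black']. "
    "Answer Yes or No."
)
prompt = f"### Instruction:\n\n{instruction}\n\n### Response:\n"

output = llm(prompt, max_tokens=64, stop=["### Instruction:"])
print(output["choices"][0]["text"].strip())
```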