
Quantizations of https://huggingface.co/NECOUDBFM/Jellyfish-13B

Experiment

Quants ending in "_X" are experimental. They are identical to the normal quants except that their token embedding weights are set to Q8_0 (for Q6_K and Q8_0, the embeddings are instead kept at F16). This change makes the experimental quants larger but, in theory, should improve output quality.

List of experimental quants:

  • Q2_K_X
  • Q4_K_M_X
  • Q5_K_M_X
  • Q6_K_X
  • Q8_0_X
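As a sketch of how such quants could be produced with llama.cpp's `llama-quantize` tool (the `--token-embedding-type` flag exists in recent llama.cpp builds; file names below are placeholders, not the actual commands used for this repo):

```shell
# Hypothetical commands; assume llama.cpp is built and an F16 GGUF is available.

# Normal quant:
./llama-quantize jellyfish-13b-f16.gguf jellyfish-13b.Q4_K_M.gguf Q4_K_M

# Experimental "_X" variant: override the token embedding tensor type to Q8_0.
./llama-quantize --token-embedding-type q8_0 \
    jellyfish-13b-f16.gguf jellyfish-13b.Q4_K_M_X.gguf Q4_K_M

# For Q6_K_X and Q8_0_X, the embeddings are kept at F16 instead:
./llama-quantize --token-embedding-type f16 \
    jellyfish-13b-f16.gguf jellyfish-13b.Q6_K_X.gguf Q6_K
```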

From original readme

Model Details

Jellyfish-13B is a large language model with 13 billion parameters, tailored specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.

Prompt Template

### Instruction:

<prompt> (without the <>)

### Response:
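For illustration, the template can be filled in programmatically. The helper function and the entity-matching instruction below are made-up examples, not part of the original card:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a task instruction in Jellyfish-13B's prompt template."""
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

# Hypothetical entity-matching task (illustrative wording only):
task = (
    "Product A is 'Apple iPhone 13, 128GB'. "
    "Product B is 'iPhone13 128 GB by Apple'. "
    "Are Product A and Product B the same entity? Answer yes or no."
)
prompt = build_prompt(task)
print(prompt)
```

The resulting string can be passed directly as the prompt to any GGUF-compatible inference client.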