Quantizations made by Richard Erkhov.

# Hebrew-Mistral-7B - GGUF
- Model creator: https://huggingface.co/yam-peleg/
- Original model: https://huggingface.co/yam-peleg/Hebrew-Mistral-7B/
| Name | Quant method | Size |
| --- | --- | --- |
| Hebrew-Mistral-7B.Q2_K.gguf | Q2_K | 2.67GB |
| Hebrew-Mistral-7B.IQ3_XS.gguf | IQ3_XS | 2.96GB |
| Hebrew-Mistral-7B.IQ3_S.gguf | IQ3_S | 3.12GB |
| Hebrew-Mistral-7B.Q3_K_S.gguf | Q3_K_S | 3.1GB |
| Hebrew-Mistral-7B.IQ3_M.gguf | IQ3_M | 3.21GB |
| Hebrew-Mistral-7B.Q3_K.gguf | Q3_K | 3.43GB |
| Hebrew-Mistral-7B.Q3_K_M.gguf | Q3_K_M | 3.43GB |
| Hebrew-Mistral-7B.Q3_K_L.gguf | Q3_K_L | 3.71GB |
| Hebrew-Mistral-7B.IQ4_XS.gguf | IQ4_XS | 3.84GB |
| Hebrew-Mistral-7B.Q4_0.gguf | Q4_0 | 4.0GB |
| Hebrew-Mistral-7B.IQ4_NL.gguf | IQ4_NL | 4.04GB |
| Hebrew-Mistral-7B.Q4_K_S.gguf | Q4_K_S | 4.03GB |
| Hebrew-Mistral-7B.Q4_K.gguf | Q4_K | 4.24GB |
| Hebrew-Mistral-7B.Q4_K_M.gguf | Q4_K_M | 4.24GB |
| Hebrew-Mistral-7B.Q4_1.gguf | Q4_1 | 4.42GB |
| Hebrew-Mistral-7B.Q5_0.gguf | Q5_0 | 4.84GB |
| Hebrew-Mistral-7B.Q5_K_S.gguf | Q5_K_S | 4.84GB |
| Hebrew-Mistral-7B.Q5_K.gguf | Q5_K | 4.96GB |
| Hebrew-Mistral-7B.Q5_K_M.gguf | Q5_K_M | 4.96GB |
| Hebrew-Mistral-7B.Q5_1.gguf | Q5_1 | 5.26GB |
| Hebrew-Mistral-7B.Q6_K.gguf | Q6_K | 5.74GB |
| Hebrew-Mistral-7B.Q8_0.gguf | Q8_0 | 7.43GB |
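The GGUF files above are intended for llama.cpp-compatible runtimes rather than `transformers`. Below is a minimal sketch using `llama-cpp-python` (one possible runtime; any GGUF-capable runtime works), with a placeholder local path to one of the files from the table:

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quant file you downloaded
# from the table above. Lower quants (Q2/Q3) save memory at some quality cost.
llm = Llama(model_path="./Hebrew-Mistral-7B.Q4_K_M.gguf", n_ctx=4096)

# Base-model style completion; the prompt translates to "Hello! How are you today?"
out = llm("ืฉืœื•ื! ืžื” ืฉืœื•ืžืš ื”ื™ื•ื?", max_tokens=128)
print(out["choices"][0]["text"])
```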
Original model description:

```yaml
license: apache-2.0
language:
- en
- he
library_name: transformers
```
# Hebrew-Mistral-7B
Hebrew-Mistral-7B is an open-source Large Language Model (LLM) with 7 billion parameters, pretrained in Hebrew and English and based on Mistral-7B-v0.1 from Mistral AI.
It has an extended Hebrew tokenizer with a 64,000-token vocabulary and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew.
The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.
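As a quick sanity check (not part of the original card), you can confirm the extended vocabulary size directly from the tokenizer:

```python
from transformers import AutoTokenizer

# Load the extended tokenizer and inspect its vocabulary size.
tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B")
print(len(tokenizer))  # expected to print 64000 for the extended Hebrew vocabulary
```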
## Usage

Below are some code snippets to help you get started running the model.
First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
### Running on CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B")

input_text = "ืฉืœื•ื! ืžื” ืฉืœื•ืžืš ื”ื™ื•ื?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
### Running on GPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B")
# device_map="auto" places the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B", device_map="auto")

input_text = "ืฉืœื•ื! ืžื” ืฉืœื•ืžืš ื”ื™ื•ื?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
### Running with 4-bit precision

```python
# Requires bitsandbytes and accelerate: pip install -U bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B")
# load_in_4bit quantizes the weights on the fly, cutting GPU memory use substantially.
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Mistral-7B",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

input_text = "ืฉืœื•ื! ืžื” ืฉืœื•ืžืš ื”ื™ื•ื?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
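All three snippets call `generate()` with default settings, which stop after a short continuation. Below is a sketch of common decoding parameters, reusing `model`, `tokenizer`, and `input_ids` from any snippet above; the specific values are illustrative, not from the original card:

```python
# Illustrative decoding settings; tune for your use case.
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,  # allow a longer completion than the default
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```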
## Notice
Hebrew-Mistral-7B is a pretrained base model and therefore does not have any moderation mechanisms.
## Authors
- Trained by Yam Peleg.
- In collaboration with Jonathan Rouach and Arjeo, inc.