Hebrew-Mistral-7B-GGUF

Model Description

Hebrew-Mistral-7B is an open-source Large Language Model (LLM) with 7 billion parameters, pretrained in Hebrew and English, based on Mistral-7B-v0.1 from Mistral AI.

It has an extended Hebrew tokenizer with 64,000 tokens and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew.
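
A minimal sketch of inspecting the extended tokenizer with transformers. It assumes the original (non-GGUF) repository id yam-peleg/Hebrew-Mistral-7B; adjust if your copy lives elsewhere:

```python
# Sketch: check the extended Hebrew vocabulary of the base model's tokenizer.
# Repository id is an assumption; the GGUF files in this repo do not ship a transformers tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B")
print(len(tokenizer))                        # should report the ~64,000-token vocabulary
print(tokenizer.tokenize("שלום עולם"))       # "hello world" should split into few, word-level pieces
```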

The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.
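
A minimal sketch of running one of the GGUF quantizations locally with llama-cpp-python. The .gguf filename below is a placeholder assumption; replace it with an actual file from this repository:

```python
# Sketch: download a GGUF quantization and generate Hebrew text with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Hebrew-Mistral-7B-GGUF",
    filename="Hebrew-Mistral-7B.Q4_K_M.gguf",  # hypothetical filename; pick a real one from the repo
)

llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("ספר לי על ירושלים", max_tokens=128)  # prompt: "Tell me about Jerusalem"
print(output["choices"][0]["text"])
```

Since this is a base model (see the notice below), expect raw continuations rather than instruction-following answers.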

Notice

Hebrew-Mistral-7B is a pretrained base model and therefore does not have any moderation mechanisms.

Authors of Original Model

  • Trained by Yam Peleg.
  • In collaboration with Jonathan Rouach and Arjeo, inc.
Model Details

  • Format: GGUF
  • Model size: 7.5B params
  • Architecture: llama
  • Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit
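
To see which quantization files are actually available, a short sketch using huggingface_hub:

```python
# Sketch: list the .gguf files in this repo so you can pick a bit-width
# that fits your memory budget (lower bits = smaller file, lower fidelity).
from huggingface_hub import list_repo_files

files = list_repo_files("QuantFactory/Hebrew-Mistral-7B-GGUF")
for f in sorted(files):
    if f.endswith(".gguf"):
        print(f)
```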

