Hebrew-Mistral-7B-GGUF
- This is a quantized version of yam-peleg/Hebrew-Mistral-7B created using llama.cpp
Model Description
Hebrew-Mistral-7B is an open-source Large Language Model (LLM) with 7 billion parameters, pretrained in Hebrew and English, based on Mistral-7B-v0.1 from Mistral AI.
It has an extended Hebrew tokenizer with 64,000 tokens and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew.
The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.
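Since this repository ships GGUF quants produced with llama.cpp, a common way to run them locally is through llama-cpp-python. The sketch below is a minimal, hedged example: the quant filename, context size, and sampling settings are assumptions rather than values from this model card, and because Hebrew-Mistral-7B is a base model it uses plain text completion rather than a chat template.

```python
# Sketch: running a GGUF quant of Hebrew-Mistral-7B with llama-cpp-python.
# The filename and generation settings are assumptions; substitute the
# quant file you actually downloaded from this repository.
try:
    from llama_cpp import Llama  # pip install llama-cpp-python
except ImportError:  # keep the sketch importable without the library
    Llama = None

MODEL_PATH = "Hebrew-Mistral-7B.Q4_K_M.gguf"  # hypothetical quant filename


def make_sampling_kwargs(temperature: float = 0.7,
                         top_p: float = 0.95,
                         max_tokens: int = 128) -> dict:
    """Collect sampling settings for plain completion (base model, no chat template)."""
    return {"temperature": temperature, "top_p": top_p, "max_tokens": max_tokens}


if __name__ == "__main__" and Llama is not None:
    # n_ctx and n_gpu_layers are standard llama-cpp-python constructor arguments.
    llm = Llama(model_path=MODEL_PATH, n_ctx=4096, n_gpu_layers=-1)
    out = llm("שלום, מה שלומך?", **make_sampling_kwargs())
    print(out["choices"][0]["text"])
```

Smaller quants (e.g. Q4) trade some output quality for lower memory use, so pick the quant file that fits your hardware.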
Notice
Hebrew-Mistral-7B is a pretrained base model and therefore has no built-in moderation mechanisms.
Authors of Original Model
- Trained by Yam Peleg.
- In collaboration with Jonathan Rouach and Arjeo, inc.
Model tree for QuantFactory/Hebrew-Mistral-7B-GGUF
Base model
yam-peleg/Hebrew-Mistral-7B