---
license: apache-2.0
pipeline_tag: text-generation
language:
  - en
  - he
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
---

[<img src="https://i.ibb.co/5Lbwyr1/dicta-logo.jpg" width="300px"/>](https://dicta.org.il)

# Model Card for DictaLM-2.0

The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. 

For full details of this model, please read our [release blog post](https://dicta.org.il/dicta-lm).

This is the base model, designed for completion (not for chat!), provided in the GGUF format for use with llama.cpp.

Two versions are available: float16 precision (`*.F16.gguf`) and 4-bit quantized precision (`*.Q4_K_M.gguf`).

You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27).
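
A minimal completion sketch using the `llama-cpp-python` bindings, one common way to run GGUF files. The local filename and the Hebrew prompt are illustrative assumptions (substitute the actual file you downloaded from this repository); the temperature simply mirrors the `temperature: 0.7` suggested in the metadata above:

```python
# Minimal completion sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename below is an assumption -- point it at the *.Q4_K_M.gguf or
# *.F16.gguf file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="dictalm2.0.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,                           # context window; adjust as needed
)

# This is a base model: give it a prefix to complete, not a chat prompt.
out = llm(
    "עברית היא שפה",  # "Hebrew is a language" -- the model continues the text
    max_tokens=64,
    temperature=0.7,   # matches the suggested inference temperature above
)
print(out["choices"][0]["text"])
```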

## Model Architecture

DictaLM-2.0 is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model with the following changes:
- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, improving the compression rate from 5.78 tokens/word to 2.76 tokens/word (a rough comparison sketch follows this list).
- Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
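
To see what the tokenizer extension buys, one can compare tokens-per-word on a Hebrew sample against the original Mistral tokenizer. A rough sketch, assuming the extended tokenizer is available through `transformers` from the unquantized `dicta-il/dictalm2.0` repository (the sample sentence and the exact ratios it yields are illustrative):

```python
# Rough tokens-per-word comparison between the base Mistral tokenizer and the
# extended DictaLM tokenizer. Repo IDs are assumptions based on the linked
# collection; ratios will vary with the sample text.
from transformers import AutoTokenizer

text = "מודל השפה הזה אומן על טקסט בעברית"  # "This language model was trained on Hebrew text"

for repo in ["mistralai/Mistral-7B-v0.1", "dicta-il/dictalm2.0"]:
    tok = AutoTokenizer.from_pretrained(repo)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    n_words = len(text.split())
    print(f"{repo}: {n_tokens / n_words:.2f} tokens/word")
```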

## Notice

DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.

## Citation

If you use this model, please cite:

```bibtex
[Will be added soon]
```