barius committed on
Commit 5e1322a · 1 Parent(s): 98758b1

Update README.md

Files changed (1):
  1. README.md +3 -1
README.md CHANGED

@@ -18,6 +18,7 @@ You can find the decrypt code on https://github.com/LianjiaTech/BELLE/tree/main/
 
 ## Welcome
 4-bit quantized version using [llama.cpp](https://github.com/ggerganov/llama.cpp) of [BELLE-LLaMA-7B-2M](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-enc)
+
 If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !
 
 
@@ -32,7 +33,8 @@ Should you accept our license and acknowledged the limitations, download the mod
 
 
 ## Model Usage
-This is a quantized version made for offline on-devices inferencing.
+This is a quantized version of [BELLE-LLaMA-7B-2M](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-enc) made for offline on-devices inferencing.
+
 You can use this model with ChatBELLE, a minimal, cross-platform LLM chat app powered by [BELLE](https://github.com/LianjiaTech/BELLE)
 using quantized on-device offline models and Flutter UI, running on macOS (done), Windows, Android,
 iOS(see [Known Issues](#known-issues)) and more.
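Besides the ChatBELLE app mentioned in the diff, a 4-bit llama.cpp model like this one can also be run directly with llama.cpp's own CLI. A minimal sketch, assuming llama.cpp has been cloned and built locally and the downloaded (and decrypted) weights have been saved under a hypothetical filename `belle-llama-7b-2m-q4_0.bin` — the paths and prompt here are illustrative, not part of this repository:

```shell
# Build llama.cpp (the inference engine this quantization targets).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run generation with the 4-bit quantized model.
# The model path is an assumption; point -m at wherever you
# placed the decrypted BELLE-LLaMA-7B-2M weights.
./main -m ./models/belle-llama-7b-2m-q4_0.bin \
       -n 256 \
       -p "Human: 请介绍一下你自己。\n\nAssistant:"
```

The 4-bit (q4_0) quantization trades some generation quality for a roughly 4x smaller memory footprint than fp16, which is what makes offline, on-device inference of a 7B model practical.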