Pelochus committed
Commit 7dc79f1
1 Parent(s): 4921edd

Update README.md

Files changed (1):
  1. README.md +2 -1
README.md CHANGED
@@ -21,6 +21,7 @@ Right now, only converted the following models:
 | LLM                   | Parameters  | Link                                                       |
 | --------------------- | ----------- | ---------------------------------------------------------- |
 | Qwen Chat             | 1.8B        | https://huggingface.co/Pelochus/qwen-1_8B-rk3588           |
+| Gemma                 | 2B          | https://huggingface.co/Pelochus/gemma-2b-rk3588            |
 | Microsoft Phi-2       | 2.7B        | https://huggingface.co/Pelochus/phi-2-rk3588               |
 | Microsoft Phi-3 Mini  | 3.8B        | https://huggingface.co/Pelochus/phi-3-mini-rk3588          |
 | Llama 2 7B            | 7B          | https://huggingface.co/Pelochus/llama2-chat-7b-hf-rk3588   |
@@ -28,7 +29,7 @@ Right now, only converted the following models:
 | Qwen 1.5 Chat         | 4B          | https://huggingface.co/Pelochus/qwen1.5-chat-4B-rk3588     |
 | TinyLlama v1 (broken) | 1.1B        | https://huggingface.co/Pelochus/tinyllama-v1-rk3588        |
 
-However, RKLLM also supports Qwen 2 (supossedly). Llama 2 was converted using Azure servers.
+Llama 2 was converted using Azure servers.
 For reference, converting Phi-2 peaked at about 15 GBs of RAM + 25 GBs of swap (counting OS, but that was using about 2 GBs max).
 Converting Llama 2 7B peaked at about 32 GBs of RAM + 50 GB of swap.