kalle07 committed · verified
Commit 0ea6c62 · 1 Parent(s): ca93599

Update README.md

Files changed (1): README.md (+5 -4)
README.md CHANGED
@@ -30,11 +30,12 @@ They work more or less (sometimes the results are more truthful if the “chat w
 &rarr; some models cannot handle large TXT files (maybe only ~200 pages - hints below)
 <br>
 <b>My short impression:</b>
+<ul style="line-height: 1;">
 <li>nomic-embed-text</li>
 <li>mxbai-embed-large</li>
 <li>mug-b-1.6</li>
 <li>Ger-RAG-BGE-M3 (german)</li>
-
+</ul>
 These work well; all the others are up to you!
 <br>

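The embedding models in this list can also be tried outside ALLM to get a feel for them. A minimal sketch with the sentence-transformers library, assuming the Hugging Face checkpoint `mixedbread-ai/mxbai-embed-large-v1` for mxbai-embed-large; the checkpoint ID and example texts are illustrative, not part of the README:

```python
# Minimal sketch: rank a few text chunks against a query with one of the
# embedding models listed above. The checkpoint ID is an assumption; swap in
# whichever model you actually test.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

chunks = [
    "The warranty covers parts and labour for 24 months.",
    "Firmware updates are released quarterly.",
]
query = "How long is the warranty?"

# Normalized embeddings so cosine similarity is a plain dot product.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

print(util.cos_sim(query_vec, chunk_vecs))  # higher score = closer match
```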
 
@@ -47,11 +48,11 @@ but in ALLM its cutting all in 1024 character parts, so aprox two times or bit m
 You can receive 14 snippets of 1024t each (14336t) from your document (~10000 words), leaving 1600t for the answer (~1000 words, 2 pages)
 
 You can adjust this to your needs, e.g. 8 snippets of 2048t, or 28 snippets of 512t ...
-
+<ul style="line-height: 1;">
 <li>8000t (~6000 words) ~0.8GB VRAM usage</li>
 <li>16000t (~12000 words) ~1.5GB VRAM usage</li>
 <li>32000t (~24000 words) ~3GB VRAM usage</li>
-
+</ul>
 <br>
 ...

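The snippet/answer arithmetic above is easy to reproduce. A minimal sketch, reusing the example numbers from the text (a 16000t context window with 1600t reserved for the answer); these are illustrative values, not fixed ALLM settings:

```python
# Minimal sketch of the context-budget arithmetic described above.
# The 16000t window and 1600t answer reserve are the example values from the
# text, not ALLM defaults.
def snippets_that_fit(context_window: int, snippet_tokens: int, answer_tokens: int) -> int:
    """Number of retrieved snippets that fit beside the reserved answer tokens."""
    return (context_window - answer_tokens) // snippet_tokens

ctx, answer_reserve = 16000, 1600

for size in (512, 1024):
    n = snippets_that_fit(ctx, size, answer_reserve)
    print(f"{size:>4}t snippets: {n} fit -> {n * size}t of document text")
# 512t  -> 28 snippets (14336t of document text)
# 1024t -> 14 snippets (14336t), matching the 14 x 1024t example in the text
```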
 
@@ -91,7 +92,7 @@ on discord (sevenof9)
 
 ...
 
-<ul style="line-height: 0.8;">
+<ul style="line-height: 1;">
 <li>avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI (German, English) - 600 pages and more</li>
 <li>maidalun1020/bce-embedding-base_v1 (English and Chinese) - only ~200 pages</li>
 <li>maidalun1020/bce-reranker-base_v1 (English, Chinese, Japanese and Korean) - only ~200 pages</li>
 