Weyaxi committed
Commit
f2aaff3
1 Parent(s): 4be16cb

Unfortunately my guess was incorrect, and we miss 13B by a minor RAM insufficiency. :(

Files changed (1):
app.py +1 -1
app.py CHANGED
@@ -92,7 +92,7 @@ You can learn more about LoRa here:
 
 This space is loading the model to RAM without performing any quantization, so the required RAM is high.
 
-You can merge models up to 13B. (If your adapter weights are too large, it might not work.)
+You can merge models up to 7B. (If your adapter weights are too large, it might not work.)
 """
 
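The 13B→7B cap change follows from a back-of-the-envelope RAM estimate. This sketch is not code from app.py; it assumes unquantized fp16 weights (2 bytes per parameter) and ignores the extra memory needed for the adapter and the merge itself, so real usage is somewhat higher.

```python
def unquantized_ram_gb(num_params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough RAM (GiB) needed just to hold the model weights in memory.

    Defaults to fp16 (2 bytes/parameter), since the space loads the
    model without any quantization.
    """
    return num_params_billion * 1e9 * bytes_per_param / 1024**3

# A 7B model needs roughly 13 GiB of RAM for its weights alone,
# while a 13B model needs roughly 24 GiB -- just over what the
# space's hardware can spare once overhead is accounted for.
print(round(unquantized_ram_gb(7), 1))   # ~13.0
print(round(unquantized_ram_gb(13), 1))  # ~24.2
```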