Discussions

#9 How to use provided model? (1 comment), opened about 2 months ago by Vader20FF
#8 maxContextLength of just 64 tokens, opened 5 months ago by ronaldmannak
#7 Unable to load model in SwitChat example (6 comments), opened 6 months ago by ltouati
#6 M1 8G RAM macbook air run at 0.01token/second, something is wrong right? (5 comments), opened 9 months ago by DanielCL
#5 When i try to load the model (4 comments), opened 10 months ago by SriBalaaji
#4 Understanding CoreML conversion of llama 2 7b (14 comments), opened 10 months ago by kharish89