fenixlam PRO
FenixInDarkSolo
AI & ML interests: None yet
Organizations: None yet
FenixInDarkSolo's activity
Simply cannot run the example code. (1) · #8 opened about 1 month ago by FenixInDarkSolo
convert into gguf? · #2 opened about 1 month ago by FenixInDarkSolo
config and tokenizer (1) · #1 opened 4 months ago by matthewdi
Leaderboard extremely slow to load (19) · #648 opened 7 months ago by FenixInDarkSolo
It is censorship! (3) · #1 opened 11 months ago by FenixInDarkSolo
Do you think it can be superhot? (1) · #1 opened over 1 year ago by FenixInDarkSolo
A 30B of this would be A+++ (2) · #2 opened over 1 year ago by vdruts
Cannot load the model in Koboldcpp 1.28 (7) · #1 opened over 1 year ago by FenixInDarkSolo
Cannot run on llama.cpp and koboldcpp (3) · #1 opened over 1 year ago by FenixInDarkSolo
Failed to run at koboldcpp and llama.cpp (4) · #1 opened over 1 year ago by FenixInDarkSolo
Hello - how to run it? (8) · #1 opened over 1 year ago by mirek190
ggml-gpt-fourchannel-q4_0.bin does not work on llama.cpp (2) · #1 opened over 1 year ago by FenixInDarkSolo
Can it load on CPU mode? (6) · #4 opened over 1 year ago by FenixInDarkSolo