How do I run the old OpenAssistant/oasst-sft-6-llama-30b model version?
#67 opened 9 months ago by Meatmms

What happened? Was this model removed from chat?
3 replies · #66 opened 9 months ago by iNeverLearnedHowToRead

Can someone clarify which version of the original LLaMA weights this model requires?
1 reply · #64 opened 10 months ago by kbmlcoding

Is this model censored in any capacity?
1 reply · #63 opened about 1 year ago by ReXommendation

What am I supposed to do here?
#61 opened about 1 year ago by RahimTS

Integrating LLaMA in Python to generate text from a prompt
#58 opened about 1 year ago by khorg0sh

Commercial use
1 reply · #57 opened about 1 year ago by Hans347

Change max_length (character count) from 2040 to 4096
2 replies · #53 opened about 1 year ago by aman4vr

New token key requirements? How do I call to get a token here?
1 reply · #51 opened about 1 year ago by awacke1

Update README.md
#49 opened about 1 year ago by elizeubjunior

Problem with HF LLaMA conversion script step 5
#48 opened about 1 year ago by DevVipin

🙏 Thanks for this awesome model set and datasets - love this community! 🎓💡
#45 opened about 1 year ago by awacke1

Answers coming up just a few words short, literally
1 reply · #44 opened about 1 year ago by Groundarts

Node.js transformation
#43 opened about 1 year ago by sunil23391

GGML models
#41 opened about 1 year ago by Yahir

Update README.md
#39 opened about 1 year ago by kraaghavin

System requirements for oasst-sft-6-llama-30b-xor
#38 opened about 1 year ago by Sosothegreat

Checksums
6 replies · #36 opened about 1 year ago by abdulsalambande

How to chat with the model via API?
7 replies · #33 opened about 1 year ago by InsafQ

The JSON file seems to be garbled when it is opened
1 reply · #32 opened about 1 year ago by Critejon

Training code
1 reply · #31 opened about 1 year ago by winglian

Non-Commercial
2 replies · #30 opened about 1 year ago by jwr1015

It can't answer a lot of questions correctly.
3 replies · #29 opened about 1 year ago by alexchenyu

OSError: It looks like the config file at './data/config.json' is not a valid JSON file.
6 replies · #27 opened about 1 year ago by Akhalee

Can we fine-tune this model on our specific dataset, like other models hosted on Hugging Face?
2 replies · #26 opened about 1 year ago by MukeshSharma

Model returns empty prompt.
5 replies · #24 opened about 1 year ago by shmalex

HuggingChat hardware requirements
2 replies · #23 opened about 1 year ago by InferencetrainingAI

Checksum does not match
11 replies · #22 opened about 1 year ago by YannisTevissen

Video needed :)
5 replies · #20 opened about 1 year ago by RedwanEl

XOR conversion fails: missing params.json
2 replies · #19 opened about 1 year ago by ccsum

Using pipeline
1 reply · #18 opened about 1 year ago by halow87

Windows conversion almost successful
3 replies · #17 opened about 1 year ago by DontLike

Where to find the original weights of LLaMA?
5 replies · #15 opened about 1 year ago by coiler

How is everyone running/interacting with the model?
1 reply · #10 opened about 1 year ago by ByteSized

How can I do inference with this model?
5 replies · #5 opened about 1 year ago by Grambel

Trying to convert LLaMA weights to HF and running out of RAM, but don't want to buy more RAM?
14 replies · #4 opened about 1 year ago by daryl149

Converting to GGML and quantizing with llama.cpp
5 replies · #2 opened about 1 year ago by akiselev

Dependency versions
2 replies · #1 opened about 1 year ago by ByteSized