Adding Evaluation Results
#22 opened 8 months ago by leaderboard-pr-bot
Adding `safetensors` variant of this model
#20 opened 11 months ago by SFconvertbot
How to use this model in LangChain and instruct the model?
#17 opened 12 months ago by Talhaz
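For the LangChain question, here is a minimal sketch of one way to drive the model through a local `transformers` pipeline wrapped in LangChain's `HuggingFacePipeline`. The ChatML prompt layout matches what the Dolphin models document; all generation settings and the example messages are illustrative, not the author's recommended values.

```python
# Sketch: load dolphin-2.1-mistral-7b locally and wrap it for LangChain.
# Assumes the transformers and langchain-community packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms import HuggingFacePipeline

model_id = "ehartford/dolphin-2.1-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

generate = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    return_full_text=False,  # return only the completion, not the echoed prompt
)
llm = HuggingFacePipeline(pipeline=generate)

# Dolphin 2.1 uses the ChatML format, so the instruction goes in a system turn.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a tokenizer does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm.invoke(prompt))
```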
LlamaTokenizer missing
2 replies · #14 opened about 1 year ago by cosmopax
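A common cause of a missing-`LlamaTokenizer` error is the slow tokenizer failing to import because SentencePiece is not installed. Assuming that is the failure mode here, a hedged workaround sketch:

```python
# Sketch, assuming the slow LlamaTokenizer is failing on a missing dependency.
# Either install it (pip install sentencepiece) or force the fast tokenizer:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "ehartford/dolphin-2.1-mistral-7b",
    use_fast=True,  # the fast (Rust-based) tokenizer does not need sentencepiece
)
print(type(tokenizer).__name__)  # expect LlamaTokenizerFast
```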
tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2.1-mistral-7b") results in an unk error related to tokens greater than 32000
2 replies · #13 opened about 1 year ago by LaferriereJC
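Context for the unk report: the base Mistral vocabulary is 32000 tokens, and Dolphin 2.1 adds the ChatML special tokens on top, so ids at or above 32000 are expected. A hedged diagnostic sketch for checking that the tokenizer and the model's embedding table agree (the mismatch check is an assumption about the cause, not a confirmed diagnosis):

```python
# Sketch: verify the added ChatML tokens line up with the model's embedding size;
# an embedding table smaller than the tokenizer vocab is one plausible cause.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ehartford/dolphin-2.1-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

print(len(tokenizer))                                 # vocab size incl. added tokens
print(model.get_input_embeddings().weight.shape[0])   # rows in the embedding table
print(tokenizer.convert_tokens_to_ids("<|im_start|>"))  # should not be the unk id

# If the embedding table is smaller than the tokenizer vocab (e.g. when
# finetuning from the base model), the usual fix is:
# model.resize_token_embeddings(len(tokenizer))
```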
I can only get this to work at 8192 context in Oobabooga. I heard it could do more? Is that false?
2 replies · #12 opened about 1 year ago by Goldenblood56
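For the context-length question, the model's own config is the place to check what it advertises; a small sketch (the exact values depend on this repo's config.json, so treat the comments as expectations, not guarantees):

```python
# Sketch: read the advertised context window straight from the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ehartford/dolphin-2.1-mistral-7b")
print(config.max_position_embeddings)           # advertised position range
print(getattr(config, "sliding_window", None))  # Mistral's attention window, if set
```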
In Oobabooga, what instruction template do I select for Dolphin Mistral 2.1?
1 reply · #11 opened about 1 year ago by Goldenblood56
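The Dolphin models document the ChatML prompt format, so ChatML is the template to select. For reference, a sketch of the raw layout (the system message text is illustrative):

```python
# Sketch of the ChatML prompt layout Dolphin 2.1 expects.
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "{user_message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```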
Database Searches and Links
#10 opened about 1 year ago by NovaMentor
Did you use flan1m (1M GPT-4 completions) or flan5m (5M GPT-3.5 completions) as the dolphin dataset?
1 reply · #9 opened about 1 year ago by AlexiaJM
finetuning args
#8 opened about 1 year ago by lvkaokao
This LLM can reason through stubbornness, censorship, and alignment, unlike any other I tested.
1 reply · #7 opened about 1 year ago by deleted
How do I try this out?
4 replies · #6 opened about 1 year ago by henke443
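For anyone wondering how to try it, a minimal sketch using plain `transformers` (generation settings are illustrative; `device_map="auto"` assumes the accelerate package):

```python
# Minimal sketch: one-off generation with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ehartford/dolphin-2.1-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the ocean.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```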
Is the dataset public?
2 replies · #4 opened about 1 year ago by luiscosio
Some questions and potential suggestions
4 replies · #3 opened about 1 year ago by polymer