rombo dawg (rombodawg)
AI & ML interests
Datasets, Finetuning
Please check out my Org: https://huggingface.co/Replete-AI and my new dataset at Replete-AI/code_bagel
rombodawg's activity
I have created the other half of your bagel 😋 · #2 opened 1 day ago by rombodawg
Update README.md · #1 opened 1 day ago by rombodawg
Thank you for making this! (3) · #1 opened 3 days ago by rombodawg
[bot] Conversion to Parquet · #1 opened 3 days ago by parquet-converter
Context window is only 8k??? (1) · #1 opened 3 days ago by rombodawg
correct license · #2 opened 7 days ago by Walmart-the-bag
Passes "snake in python test" after only 3 attempts (2) · #1 opened 8 days ago by rombodawg
Does the model actually work? (4) · #1 opened 9 days ago by rombodawg
IT'S NOT REAL (4) · #15 opened 10 days ago by rombodawg
IT'S NOT REAL (8) · #11 opened 10 days ago by rombodawg
C_H_U_N_K_Y-L_L_A_M_A (1) · #1 opened 11 days ago by rombodawg
Very good model for its size (7b and 14b are gonna be amazing) (4) · #43 opened 16 days ago by rombodawg
What is 256k? (16) · #1 opened 14 days ago by supercharge19
Mind adding a description for this dataset? (2) · #2 opened 14 days ago by Leon-Leee
Good model, but bullshit chart and inaccurate numbers (24) · #20 opened 19 days ago by rombodawg
Fix name from 8b to 70b (1) · #1 opened 18 days ago by rombodawg
Not at all uncensored (4) · #2 opened 20 days ago by rombodawg
Can you train llama-3-8b-instruct next? (1) · #2 opened 20 days ago by rombodawg
Failed run Llama-3-11.5B-v2 (1) · #698 opened 20 days ago by rombodawg
Bad example image · #1 opened 21 days ago by rombodawg
I made a similar dataset with more code if you are interested · #1 opened 21 days ago by rombodawg
What merge method was used? (1) · #1 opened 21 days ago by rombodawg
What kind of dataset? · #1 opened 22 days ago by rombodawg
Mistral-evolved-11b version?? (2) · #4 opened 22 days ago by rombodawg
Failed run (1) · #108 opened 22 days ago by rombodawg
What datasets were these trained on? (5) · #2 opened 23 days ago by rombodawg
help me understand the point of such scaling (1) · #1 opened 23 days ago by Samvanity
FP16 files created for your convenience · #5 opened 23 days ago by rombodawg
Can code snake in python, first model at this size after deepseekcoder-instruct-6.7b (with caveat) (2) · #7 opened 25 days ago by rombodawg
(Rebutted: this claim was proven false) "Fake coding scores" .73 at best (8) · #4 opened 26 days ago by rombodawg
Is 14b coming? (4) · #3 opened 26 days ago by rombodawg
You should do this with Gemma-7b · #1 opened 27 days ago by rombodawg
Is this a single expert of all the experts merged? (2) · #5 opened about 1 month ago by rombodawg
We are working on creating a single 22b from this model (15) · #5 opened about 1 month ago by rombodawg
7B or 8B? (4) · #24 opened 3 months ago by amgadhasan
Are you gonna finetune the code version of this model on your own dataset? (1) · #1 opened about 1 month ago by rombodawg
Thank you for posting this (1) · #1 opened about 1 month ago by rombodawg
This model card is a blessing · #2 opened about 1 month ago by rombodawg
Number of parameters (7) · #9 opened about 1 month ago by HugoLaurencon
Why is this completely broken? (2) · #11 opened about 1 month ago by rombodawg
Fine tune on top of the new stable code model (1) · #1 opened about 1 month ago by rombodawg
Train Mistral 7B 0.2 (9) · #2 opened 4 months ago by mosama
What was the order of training? (1) · #2 opened about 1 month ago by rombodawg
what kind of dataset is Tess? (2) · #1 opened about 1 month ago by rombodawg
Update README.md · #8 opened about 1 month ago by rombodawg
How did you increase the size to 11b? (1) · #5 opened about 1 month ago by froggeric
Update README.md · #23 opened about 1 month ago by rombodawg
Coding performance of base model? (4) · #11 opened about 1 month ago by rombodawg
Hey, thanks for using my model 😉 (2) · #1 opened about 1 month ago by rombodawg
I submitted my model a while ago and it never got benchmarked (2) · #644 opened about 2 months ago by rombodawg
Question (19) · #1 opened about 2 months ago by WesPro
Create model card (2) · #1 opened about 2 months ago by rombodawg
I made the next-gen Mistral model. I'd love to see you train it in the same way! (2) · #4 opened about 2 months ago by rombodawg
Submit Mistral-Evolved-11b-v0.1 to the HuggingFaceH4 Open LLM Leaderboard (3) · #4 opened about 2 months ago by Joseph717171
Separate larger from smaller model benchmarking (1) · #640 opened about 2 months ago by rombodawg
Try my model next · #4 opened about 2 months ago by rombodawg
Add `chat_template` in tokenizer config (2) · #3 opened about 2 months ago by jlzhou
Please create a Google Gemma-7b (8.5b) based version (12) · #4 opened 3 months ago by rombodawg
Update README.md · #1 opened about 2 months ago by rombodawg