elinas

AI & ML interests

LLMs & Finetuning

Recent Activity

updated a model about 2 months ago
ZeusLabs/Chronos-Platinum-72B

Organizations

Zeus Labs, KoboldAI, Social Post Explorers, Cognitive Computations, LLMExperiments

Posts 1

We conducted an experiment to revive LLaMA 1 33B: its pretraining data gave it unique prose and a lack of the "GPT-isms" and "slop" found in later models, and it was a community favorite at the time. Over multiple finetune runs, we extended the model from its pretrained base context of 2048 tokens to ~12,000 tokens, adding approximately 500M training tokens in the process. The effective length is 16,384, but it is best kept toward the lower end of that range. It writes well and in multiple formats. In the future, we have some ideas, such as implementing GQA. Please take a look; we would love to hear your feedback!
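The post does not name the extension method, but linear RoPE position interpolation is a common way to stretch a model trained at 2048 tokens to a longer window; a minimal sketch of the idea, under that assumption:

```python
import math

def rope_inv_freq(dim, base=10000.0, scale=1.0):
    """Inverse rotary frequencies for a head dimension `dim`.
    `scale` > 1 compresses positions so a longer context maps
    back into the angle range the model saw during pretraining."""
    return [(1.0 / (base ** (2 * i / dim))) / scale for i in range(dim // 2)]

# Extending a 2048-token base to an effective 16,384 implies scale = 8.
orig_ctx, target_ctx = 2048, 16384
scale = target_ctx / orig_ctx  # 8.0

# With interpolation, position 16384 produces the same rotation angle
# that position 2048 produced in the original model.
angle_scaled = target_ctx * rope_inv_freq(128, scale=scale)[0]
angle_orig = orig_ctx * rope_inv_freq(128)[0]
assert math.isclose(angle_scaled, angle_orig)
```

The scaled frequencies are only a starting point; as the post notes, the model still needs finetuning on long sequences (here, roughly 500M tokens) to adapt to the interpolated positions.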

ZeusLabs/Chronos-Divergence-33B

Datasets

None public yet