Bram Vanroy (BramVanroy)

AI & ML interests: Artificial intelligence, natural language processing, computational linguistics

πŸ“£ DPO Dutch model release + datasets

After teasing for a while, I am finally releasing **GEITje 7B Ultra**, building on the great GEITje 7B by @Rijgersberg. New contributions include: large new datasets for SFT (instruction/chat), two datasets for DPO training (i.e. RLAIF), and an SFT and DPO version of GEITje. The READMEs describe everything well (I hope), and I'll also share more info on social media tomorrow.

For me this is a huge release, the datasets even more so than the models. I'm especially pleased with UltraChat, which I created with the intent of having a diverse dataset - the model must be able to communicate with different types of users. So the user questions are written as if they came from different personas, e.g. language learners, young children, experts, critics, etc. The focus is on "building a good communication bot that is accessible and can handle different kinds of user input".
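To make the persona idea concrete, here is a minimal, purely illustrative sketch of persona-conditioned question generation. This is not the actual pipeline behind the UltraChat dataset; the model name, prompts, and personas are assumptions for the sake of the example.

```python
# Illustrative sketch of persona-conditioned question generation.
# NOT the actual ultrachat_200k_dutch pipeline: model name, prompts and
# personas are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

personas = ["a language learner", "a young child", "a domain expert", "a critic"]
topic = "Dutch history"

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder for a GPT-4 turbo snapshot
        messages=[
            {"role": "system", "content": f"You are {persona}. You always write in Dutch."},
            {"role": "user", "content": f"Ask one question about {topic}, phrased the way {persona} would ask it."},
        ],
    )
    print(persona, "->", response.choices[0].message.content)
```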

I wish I could find the time to also write a paper to get some "academic recognition" but that'll have to wait for now. I just want to bring it to the public so that others can play with it and use it to build new, cool stuff!

I hope that you can all appreciate the work. Let's build some cool stuff with it!

Models:
- Demo: BramVanroy/GEITje-7B-ultra
- DPO Model: BramVanroy/GEITje-7B-ultra
- SFT model (not recommended): BramVanroy/GEITje-7B-ultra-sft

Datasets with GPT-4 turbo completions:
- No robots (~10k instructions): BramVanroy/no_robots_dutch
- UltraChat (~200k instructions): BramVanroy/ultrachat_200k_dutch
- UltraFeedback (DPO with GPT4+GEITje chat, ~50k): BramVanroy/ultra_feedback_dutch
- Orca DPO Pairs (DPO with GPT4+GEITje chat, ~10k): BramVanroy/orca_dpo_pairs_dutch
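
To try the model and datasets listed above, something along these lines should work. This is a rough sketch, not an official snippet: it assumes a recent `transformers` (with chat-format text-generation pipelines) and `datasets`, and the example prompt is mine.

```python
# Rough sketch: load the DPO model and inspect one of the released datasets.
# Assumes recent `transformers` (chat-format pipeline input), `datasets`,
# and `accelerate` (for device_map="auto").
from datasets import load_dataset
from transformers import pipeline

chat = pipeline("text-generation", model="BramVanroy/GEITje-7B-ultra", device_map="auto")

messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
output = chat(messages, max_new_tokens=128)
print(output[0]["generated_text"])

# Inspect one of the released datasets (split names may differ; check the README).
ultrachat = load_dataset("BramVanroy/ultrachat_200k_dutch")
print(ultrachat)
```
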
πŸ”Ž DPO hyperparameter search update!

In my previous post (https://huggingface.co/posts/BramVanroy/633544255876795), I described how, despite high reward accuracies and low losses, my model would sometimes just output repeating random tokens (/*****/). There were some useful brainstorms in that thread. I think the dataset is relatively easy for the model, leading it to overfit quickly when beta is very small, since a small beta allows the model to drift further away from its initial outputs.
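
For context (nothing new here, just the standard DPO objective), beta scales how strongly the policy is tied to the reference model, which is why a very small beta lets the model drift far from its initial behaviour:

$$
\mathcal{L}_\text{DPO}(\pi_\theta; \pi_\text{ref}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\text{ref}(y_l \mid x)}\right)\right]
$$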

So, I ran a hyperparameter search over learning rate (1e-7 vs 5e-7), batch size (32, 64, 96, 128) and, most importantly, beta (0.01, 0.1, 0.2, 0.5). You can have a look at the results for yourself here: https://wandb.ai/bramvanroy/dpo-geitje-ultra-hyperparams
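
For concreteness, here is a rough sketch of how such a grid could be launched with trl's DPOTrainer. This is my assumption about the training stack, not the exact setup; model, dataset, split and argument names are placeholders and may differ between trl versions.

```python
# Hypothetical sweep over the grid from the post, using trl's DPOTrainer.
# Model, dataset and argument names are assumptions; adjust to the actual setup.
from itertools import product

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "BramVanroy/GEITje-7B-ultra-sft"  # SFT model to align
tokenizer = AutoTokenizer.from_pretrained(model_id)
train_dataset = load_dataset("BramVanroy/ultra_feedback_dutch", split="train")  # split name assumed

learning_rates = [1e-7, 5e-7]
batch_sizes = [32, 64, 96, 128]
betas = [0.01, 0.1, 0.2, 0.5]

for lr, bs, beta in product(learning_rates, batch_sizes, betas):
    run_name = f"lr{lr}-bs{bs}-beta{beta}"
    args = DPOConfig(
        output_dir=run_name,
        run_name=run_name,
        learning_rate=lr,
        per_device_train_batch_size=bs,  # in practice reached via gradient accumulation
        beta=beta,
        num_train_epochs=1,
        report_to="wandb",
    )
    model = AutoModelForCausalLM.from_pretrained(model_id)  # fresh copy per run
    trainer = DPOTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        processing_class=tokenizer,  # older trl versions use `tokenizer=` instead
    )
    trainer.train()
```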

Interpreting the results, I think beta=0.5 is the better choice for this dataset. Reasons (see the sketch after this list for pulling the metrics programmatically):

- markedly higher reward margins than all other betas
- better balance between positive chosen and negative rejected rewards
- log probabilities are not as extremely low as for beta=0.01, which seems too low for this dataset
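
If you want to pull those numbers programmatically rather than eyeballing the dashboard, something like this works with the wandb public API. The metric keys follow trl's DPOTrainer logging under the "train/" prefix and are an assumption; check the actual run summaries in the linked project.

```python
# Rough sketch: compare final DPO metrics across sweep runs via the wandb API.
# Metric keys assume trl's DPOTrainer logging under the "train/" prefix.
import wandb

api = wandb.Api()
runs = api.runs("bramvanroy/dpo-geitje-ultra-hyperparams")

for run in runs:
    beta = run.config.get("beta")
    summary = run.summary
    print(
        f"{run.name}: beta={beta}",
        f"margin={summary.get('train/rewards/margins')}",
        f"chosen={summary.get('train/rewards/chosen')}",
        f"rejected={summary.get('train/rewards/rejected')}",
        f"logps_chosen={summary.get('train/logps/chosen')}",
    )
```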

Of course, that is just looking at the numbers without running any benchmarks. However, I am hesitant to evaluate all the models on benchmarks, because that would mean literally optimising my hyperparameters on a test set (which is very bad!). So I will just play with some of the most promising models and see which one feels "best" qualitatively.

If you have other insights, thoughts, or opinions, let me know!