
GPT2 - Einstein EPFL Light

A quantized, fine-tuned version of the well-known GPT-2 model, trained with the Llama3 Einstein dataset for Supervised Fine-Tuning (SFT) and with a dataset composed of EPFL-style questions (and more...) for Direct Preference Optimization (DPO).
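For readers unfamiliar with the DPO stage mentioned above, the idea is to push the policy model to assign relatively higher likelihood to the preferred (chosen) answer than the rejected one, measured against a frozen reference model. Below is a minimal, illustrative sketch of the per-pair DPO loss in plain Python; the function name, argument names, and the numeric values are ours for illustration, not part of this model's training code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trainable policy or the frozen
    reference model. beta scales how strongly preferences are enforced.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)) rewritten as log(1 + exp(-margin)) for stability
    return math.log1p(math.exp(-margin))

# Illustrative values: the policy favors the chosen answer more than
# the reference does, so the margin is positive and the loss is small.
loss = dpo_loss(policy_chosen_logp=-10.0, policy_rejected_logp=-14.0,
                ref_chosen_logp=-12.0, ref_rejected_logp=-13.0)
```

The loss is log(2) when the policy and reference agree exactly, and shrinks toward zero as the policy separates the chosen answer from the rejected one.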

Authors: Azza Jenane, David Schroeter & Paulo Ribeiro

Safetensors
Model size: 431M params
Tensor types: F32, U8