The primary interest was to evaluate the available framework for fine-tuning and to understand the process and workflow.

This version of the model fine-tunes Lit-LLaMA with LoRA on unstructured EU-law data; a minimal sketch of the LoRA idea is shown below.
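
The following is a generic sketch of the LoRA technique, not the Lit-LLaMA implementation used here: a frozen pretrained linear layer is augmented with a small trainable low-rank update, so only the adapter weights receive gradients during fine-tuning.

```python
# Minimal sketch of the LoRA idea (illustrative, not the Lit-LLaMA code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(in_features, r) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(r, out_features))
        self.scaling = alpha / r

    def forward(self, x):
        # frozen base projection plus the trainable low-rank update
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

layer = LoRALinear(512, 512)
out = layer(torch.randn(2, 512))  # only lora_a / lora_b are trainable
```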

The model was trained on 37,304 samples generated from 55 EU-law files, plus a further 4,145 samples.

Lit-LLaMA is an open-source implementation of the original LLaMA model, based on nanoGPT.

The fine-tuned checkpoint was converted to the Hugging Face format and published.
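
A minimal usage sketch with the transformers library is given below; the repository id is a placeholder, not the actual model id.

```python
# Hypothetical usage example: load the published checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<user>/<eu-law-lit-llama-lora>"  # placeholder, replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise the obligations under Article 5:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```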

Format: Safetensors
Model size: 1.98B params
Tensor types: F32, BF16, FP16, U8