
# Statscoder

This model is a fine-tuned version of bigcode/santacoder, trained on the R and SAS language subsets of The Stack dataset.
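Below is a minimal inference sketch using the transformers library; the checkpoint name `infinitylogesh/statscoder` is the repo this card belongs to, and the R-style prompt is an illustrative assumption. Because SantaCoder ships custom modeling code, `trust_remote_code=True` is required when loading.

```python
# Minimal inference sketch (assumes the transformers library is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "infinitylogesh/statscoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code=True is needed because the SantaCoder architecture
# ships its modeling code inside the model repo.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# An illustrative R-flavored prompt for code completion.
inputs = tokenizer("# R: compute the mean of a numeric vector\n", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```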

## Training procedure

The model was fine-tuned using code adapted from loubnabnl/santacoder-finetuning, modified to handle multiple dataset subsets.

The following hyperparameters were used during training (see the sketch after this list for how they might map onto a training configuration):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • optimizer: adafactor
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1600
  • seq_length: 1024
  • fp16: disabled (training in full precision)
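
As a hedged illustration, here is how these hyperparameters could map onto `transformers.TrainingArguments`; the original fine-tuning script may organize them differently, and the output directory name is hypothetical.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="statscoder-checkpoints",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    optim="adafactor",
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1600,
    fp16=False,  # training ran in full precision
)
# Note: seq_length=1024 is not a TrainingArguments field; it would be
# applied when tokenizing/packing the dataset before training.
```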