---
license: apache-2.0
datasets:
  - jtatman/python-code-dataset-500k
  - jtatman/python-github-code-instruct-filtered-5k
  - jtatman/pile_python_instruct_format
library_name: transformers
tags:
  - code
---

Model Card for tinymistral-v2-pycoder-instruct-248m

This model card is for tinymistral-v2-pycoder-instruct, a Python-specific code generation model built on top of Locutusque/TinyMistral-248M-v2-Instruct.

Model Details

This instruct model follows the base model in using the ChatML prompt format.

An empty prompt will return assorted information from the base model, but using the instruct format will produce Python code of varying quality.

Model Description

Both this model and its base model are under active development, and all outputs should be treated with caution.

Uses

Generating Python code.

Direct Use

The model could likely be fine-tuned further with a more comprehensive dataset; experiments are in progress.

How to Get Started with the Model

Use the prompt format below to get started with the model.

```
<|im_start|>user
Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>
<|im_start|>assistant
```
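Below is a minimal inference sketch using the transformers library. The repository id is an assumption based on this card's title and author; adjust it to the actual Hub id if it differs.

```python
# Minimal inference sketch; the hub id below is assumed from the card title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jtatman/tinymistral-v2-pycoder-instruct-248m"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the ChatML-style prompt shown above.
prompt = (
    "<|im_start|>user\n"
    "Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```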

Training Details

Training Data

Custom-formatted existing Python data from:

  • jtatman/python-code-dataset-500k
  • jtatman/python-github-code-instruct-filtered-5k
  • jtatman/pile_python_instruct_format
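As a quick way to inspect any of these sources, a sketch using the `datasets` library follows; the `"train"` split name is an assumption.

```python
# Sketch: load and inspect one of the listed training datasets.
from datasets import load_dataset

ds = load_dataset("jtatman/python-code-dataset-500k", split="train")  # split assumed
print(ds[0])
```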

Training Procedure

Training runs were repeated depending on the available compute budget.

Preprocessing

Conversion to Alpaca/instruct format, as sketched below.
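A hypothetical sketch of this kind of conversion: mapping a raw (instruction, code) pair into Alpaca-style instruct fields. The field names are the conventional Alpaca keys, not confirmed from this card.

```python
# Hypothetical conversion of a raw pair into Alpaca-style instruct format.
def to_alpaca(instruction: str, output: str, input_text: str = "") -> dict:
    return {
        "instruction": instruction,  # conventional Alpaca keys, assumed here
        "input": input_text,
        "output": output,
    }

record = to_alpaca(
    "Write a function for multiplying two numbers, from variables 'a' and 'b'.",
    "def multiply(a, b):\n    return a * b",
)
print(record)
```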

Training Hyperparameters

  • Training regime: fp16 mixed precision; parameter-efficient fine-tuning adapters were merged into the base model when necessary and helpful (see the sketch below).
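For reference, merging a fine-tuned adapter back into the base model can be done with the `peft` library as sketched here; the adapter path is a hypothetical placeholder.

```python
# Sketch: fold LoRA-style adapter weights back into the base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Locutusque/TinyMistral-248M-v2-Instruct")
model = PeftModel.from_pretrained(base, "path/to/adapter")  # hypothetical adapter path
merged = model.merge_and_unload()  # merges adapter weights into the base model
merged.save_pretrained("tinymistral-v2-pycoder-merged")
```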

Evaluation

Metrics

Latest metrics:

  • epoch: 4.87
  • global_step: 220
  • learning_rate: 0.00006713780918727916
  • loss: 2.3736