
ScikitLLM is an LLM fine-tuned to write references and code grounded in the Scikit-Learn documentation.

Features of ScikitLLM include:

  • Support for retrieval-augmented generation (RAG) with three retrieved chunks (see the prompt sketch after this list)
  • Sources and quotations using a modified version of the wiki quotation syntax ("")
  • Code samples and examples based on the code quoted in the chunks
  • Expanded knowledge of Scikit-Learn concepts and documentation
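
As an illustration of the RAG setup, the sketch below assembles three retrieved documentation chunks into a single prompt. The chunk texts, delimiters, and instruction wording are assumptions for illustration, not the exact format used during training.

```python
# Illustrative three-chunk RAG prompt. The delimiters and instruction text
# below are assumptions, not the exact format ScikitLLM was trained on.
chunks = [
    "Chunk 1: excerpt retrieved from the Scikit-Learn documentation...",
    "Chunk 2: another retrieved excerpt...",
    "Chunk 3: a third retrieved excerpt...",
]
question = "How do I fit a RandomForestClassifier on a pandas DataFrame?"

# Concatenate the three chunks, then append the user question.
context = "\n\n".join(chunks)
prompt = (
    "Answer the question using the documentation excerpts below, "
    "quoting your sources.\n\n"
    f"{context}\n\nQuestion: {question}"
)
```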

Training

ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tune of Mistral 7B. OpenHermes already includes many of the capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.
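
Because OpenHermes uses ChatML, the model can be queried through the standard transformers chat template. The snippet below is a minimal inference sketch; the repository id is a placeholder and should be replaced with this repository's actual id.

```python
# Minimal ChatML inference sketch with transformers.
# "your-org/scikitllm" is a placeholder repository id, not the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/scikitllm"  # placeholder: replace with this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You answer questions about Scikit-Learn and quote your sources."},
    {"role": "user", "content": "How do I standardize features before fitting a LogisticRegression?"},
]

# apply_chat_template renders the conversation in ChatML, as expected by OpenHermes-based models.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```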

As a fine-tune of a fine-tune, ScikitLLM has been trained with a lower learning rate than is commonly used in fine-tuning projects.
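
For illustration only, a conservative configuration of this kind might look like the following with the Hugging Face Trainer stack; the learning rate, schedule, and batch sizes are assumptions, not the values actually used for ScikitLLM.

```python
# Hypothetical hyperparameters for a fine-tune of a fine-tune; the exact values
# used to train ScikitLLM are not stated in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scikitllm-checkpoints",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=5e-6,           # deliberately lower than typical fine-tuning rates (~1e-5 to 2e-5)
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    bf16=True,                    # matches the BF16 tensor type of the released weights
    logging_steps=10,
)
```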

Model size: 7.24B parameters (Safetensors, BF16)