Aurora-12B

Aurora-12B is a 12-billion-parameter language model fine-tuned from the Mistral-Nemo-Instruct-2407 base model for code generation and understanding tasks. It was trained on the iamtarun/code_instructions_120k_alpaca dataset, which pairs code-related instructions with worked solutions.

Overview

Aurora-12B is aimed at developers and researchers who need code completion, code explanation, and general programming assistance. It covers a wide range of programming languages and produces contextually relevant suggestions.

Features

  • Code Generation: Generate code snippets in various programming languages.
  • Code Completion: Complete partial code fragments.
  • Code Understanding: Explain code functionality and provide comments.
  • Instruction Following: Follow code-related natural-language instructions accurately.

Model Details

  • Base Model: Mistral-Nemo-Instruct-2407
  • Fine-Tuned On: iamtarun/code_instructions_120k_alpaca
  • Parameters: 12 billion
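
The quantized weights are hosted in the XeroCodes/aurora-12b-gguf repository and can be fetched with huggingface_hub. Below is a minimal download sketch; the GGUF filename shown is an assumption, so replace it with the actual file listed in the repository.

```python
# Minimal download sketch using huggingface_hub.
# The filename below is a hypothetical placeholder -- check the repository
# file listing for the real GGUF filenames (8-bit, 16-bit, or 32-bit).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="XeroCodes/aurora-12b-gguf",
    filename="aurora-12b.Q8_0.gguf",  # assumed name for the 8-bit quantization
)
print(model_path)  # local path to the cached GGUF file
```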

Dataset

The model was fine-tuned on the iamtarun/code_instructions_120k_alpaca dataset, which contains roughly 120,000 Alpaca-style examples pairing coding instructions with solutions. The breadth of the dataset helps the model handle a wide range of coding scenarios.
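
Because the dataset follows the Alpaca format (instruction, optional input, output), prompts in that style are a reasonable starting point. The sketch below builds such a prompt; the exact template used during fine-tuning is an assumption, so adjust it if another format works better for you.

```python
# Hypothetical Alpaca-style prompt builder; the exact template used during
# fine-tuning is an assumption based on the dataset's format.
def build_prompt(instruction: str, input_text: str = "") -> str:
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    prompt = header + f"### Instruction:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    return prompt + "### Response:\n"

prompt = build_prompt("Write a Python function that reverses a linked list.")
```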

GGUF Quantizations

  • Model size: 12.2B params
  • Architecture: llama
  • Available precisions: 8-bit, 16-bit, 32-bit
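
Because the weights are distributed as GGUF files, the model runs in llama.cpp-compatible runtimes. Below is a minimal local inference sketch using llama-cpp-python; the file path, prompt, and sampling settings are illustrative assumptions rather than values taken from this card.

```python
# Minimal local inference sketch (pip install llama-cpp-python).
# The model path is assumed to point at a downloaded GGUF file,
# e.g. the one fetched in the download sketch above.
from llama_cpp import Llama

llm = Llama(model_path="aurora-12b.Q8_0.gguf", n_ctx=4096)

prompt = (
    "### Instruction:\n"
    "Write a Python function that checks whether a string is a palindrome.\n\n"
    "### Response:\n"
)

output = llm(
    prompt,
    max_tokens=256,
    temperature=0.2,          # a low temperature tends to suit code tasks
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```

Lower-precision files trade some output quality for reduced memory use, so the 8-bit variant is a reasonable starting point for local experimentation.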

