NanoCoder is a Fill-in-the-Middle (FIM) language model specifically designed for React frontend development and coding assistance. It helps users with intelligent code autocompletion and context-aware generation. The model was fine-tuned using Unsloth on the Qwen 3 0.6B base model, leveraging a high-quality FIM dataset curated from GitHub repositories to enhance coding capabilities and developer productivity.

🧠 Datasets

We trained NanoCoder using a high-quality Fill-in-the-Middle (FIM) dataset curated from GitHub repositories:
srisree/nextjs_typescript_fim_dataset on Hugging Face.

This dataset focuses on React/Next.js and TypeScript projects, providing rich, real-world coding examples that help the model understand frontend architecture, component composition, and React ecosystem patterns.
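
You can load the dataset directly with the Hugging Face datasets library. A minimal sketch (split and column names depend on the dataset's actual layout):

```python
# Load the FIM dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("srisree/nextjs_typescript_fim_dataset")
print(ds)  # inspect the available splits and columns
```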

By leveraging this dataset, NanoCoder learns to:

  • Predict and fill missing code intelligently using FIM objectives (a sample FIM format is sketched after this list).
  • Understand React component structures and TypeScript typing patterns.
  • Generate clean, production-grade frontend code snippets.
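
For illustration, here is roughly what a FIM training sample looks like. The sentinel tokens below follow the Qwen coder-style convention (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); the exact markers used in the dataset may differ:

```python
# Hypothetical FIM sample: the model sees the prefix and suffix
# and is trained to generate the missing middle.
prefix = 'export function Button({ label }: { label: string }) {\n  return ('
suffix = '  );\n}'
middle = '\n    <button className="btn">{label}</button>\n'

fim_sample = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>{middle}"
print(fim_sample)
```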

⚙️ FIM Training Colab Script

We’re preparing an interactive Google Colab notebook for reproducing the Fill-in-the-Middle (FIM) fine-tuning process used to train NanoCoder with Unsloth on the Qwen 3 0.6B base model.

The Colab script will include:

  • ✅ Environment setup with Unsloth and Qwen 3 0.6B
  • ✅ Loading and preprocessing the Next.js TypeScript FIM Dataset
  • ✅ Training configuration (LoRA, batch size, sequence length, etc.; a preview sketch follows this list)
  • ✅ Evaluation and inference examples
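
Until the notebook is released, here is a minimal sketch of the kind of Unsloth setup it will cover. The base checkpoint name and all hyperparameters below are illustrative assumptions, not the released training recipe:

```python
# Minimal Unsloth LoRA setup sketch (illustrative values, not the official recipe).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-0.6B",  # assumed base checkpoint identifier
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```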

🚀 Coming soon... Stay tuned for the full release!


⚙️ Setup and Run NanoCoder Locally with Ollama in VS Code

A step-by-step guide to installing, configuring, and using NanoCoder for intelligent React frontend code completion with the Continue VS Code extension.


🧠 Prerequisites

Before getting started, ensure you have the following installed:

  • Ollama (to run the model locally)
  • Visual Studio Code
  • The Continue extension for VS Code

🧩 Step 1: Install Ollama

If you haven’t already, download and install Ollama from https://ollama.com/download.
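
On Linux, you can alternatively install it with the official script:

curl -fsSL https://ollama.com/install.sh | sh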

Once installed, open your terminal and verify the installation:
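
ollama --version

This should print the installed Ollama version.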

💾 Step 2: Pull NanoCoder Model

ollama pull srisree/nanocoder

⚡ Step 3: Run NanoCoder with Ollama

Once downloaded, you can test NanoCoder directly in the terminal:

ollama run srisree/nanocoder
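
🔌 Step 4: Connect NanoCoder to VS Code with Continue

Install the Continue extension from the VS Code marketplace, then point its autocomplete model at NanoCoder. Below is a minimal sketch using Continue's older JSON-style config (`config.json`); newer Continue versions use a YAML config, so adapt the same fields accordingly:

```json
{
  "tabAutocompleteModel": {
    "title": "NanoCoder",
    "provider": "ollama",
    "model": "srisree/nanocoder"
  }
}
```

With this in place, Continue routes inline completion requests to the locally running Ollama model.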

Read more in the Continue Docs: https://docs.continue.dev

📦 Model Details

  • Architecture: qwen3
  • Model size: 596M parameters
  • Format: GGUF (8-bit quantization)