K8s-Operator (Fine-tuned LLaMA)

Hugging Face Model Card

Author: Nir Adler
Model Type: Fine-tuned LLaMA / Unsloth-based Model
Domain: Kubernetes (kubectl commands)
License: Apache 2.0
Repository: Model Page


πŸš€ Overview

This is a LLaMA-based model, fine-tuned with Unsloth, designed to generate accurate and efficient kubectl commands for managing Kubernetes clusters. It understands Kubernetes concepts, CLI usage, and best practices, making it a valuable assistant for DevOps engineers, SREs, and platform teams.

It can help with:
βœ… Constructing kubectl commands based on natural language queries.
βœ… Explaining Kubernetes commands and best practices.
βœ… Providing structured responses with safe and efficient command execution.


πŸ“– Model Details

  • Base Model: llama-3.2-3b-instruct-unsloth-bnb-4bit
  • Fine-tuned on:
    • ComponentSoft/k8s-kubectl (General k8s commands)
    • ComponentSoft/k8s-kubectl-35k (Expanded dataset)
    • ComponentSoft/k8s-kubectl-cot-20k (Chain of Thought explanations)
  • Training Framework: Unsloth (optimized for efficient training)
  • Format: ShareGPT-style chat template
  • Dataset Size: ~55K Kubernetes-related command pairs
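
Each training example follows the ShareGPT-style conversation schema. A minimal sketch of what one record looks like (the field names `conversations`, `from`, and `value` follow the common ShareGPT convention; the exact columns in the ComponentSoft datasets are an assumption):

```python
# Sketch of a ShareGPT-style record, as commonly used for chat fine-tuning.
# Field names ("conversations", "from", "value") follow the usual ShareGPT
# convention; the actual dataset columns may differ.
example = {
    "conversations": [
        {
            "from": "human",
            "value": "Retrieve logs from a running pod named web-server.",
        },
        {
            "from": "gpt",
            "value": "kubectl logs web-server",
        },
    ]
}

# During training, each conversation is rendered into the model's chat template.
for turn in example["conversations"]:
    print(f'{turn["from"]}: {turn["value"]}')
```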

πŸ”§ Usage

1️⃣ Load the Model from Hugging Face

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "niradler/k8s_operator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
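
2️⃣ Generate a Command

With the model and tokenizer loaded, a natural-language query can be formatted as a chat prompt and passed to `generate`. The snippet below is a minimal sketch: the message structure and generation parameters are assumptions, and the actual `generate` call is shown in comments because it requires downloading the model weights.

```python
# Build a chat-style prompt for the fine-tuned model.
messages = [
    {"role": "user", "content": "Retrieve logs from a running pod named web-server."}
]

# With `tokenizer` and `model` from step 1 in scope, generation would look like:
#
#   inputs = tokenizer.apply_chat_template(
#       messages, add_generation_prompt=True, return_tensors="pt"
#   )
#   outputs = model.generate(inputs, max_new_tokens=256)
#   print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

print(messages[0]["content"])
```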

3️⃣ Example Output

## **πŸ”Ή Instruction:**  
Retrieve logs from a running pod named `web-server`.  

## **πŸ”Ή Recommended `kubectl` Command:**  
```sh
kubectl logs web-server
```

πŸ“Œ Limitations & Considerations

πŸ”Ή The model has no awareness of live cluster state, so always verify its output against your environment.
πŸ”Ή It may generate destructive commands (e.g. `kubectl delete`)β€”always review before running.
πŸ”Ή Generated commands may not match your Kubernetes version; fine-tuning on version-specific data might be necessary for up-to-date compatibility.
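
Since generated commands should be reviewed before execution, a simple pre-flight check can flag destructive verbs for human confirmation. A minimal sketch (the helper and verb list are illustrative, not part of the model):

```python
# Hypothetical pre-flight check: flag kubectl verbs that can destroy state.
# The verb list is illustrative, not exhaustive.
DESTRUCTIVE_VERBS = {"delete", "drain", "evict", "replace", "scale"}

def is_destructive(command: str) -> bool:
    """Return True if a kubectl command uses a destructive verb."""
    parts = command.strip().split()
    return len(parts) >= 2 and parts[0] == "kubectl" and parts[1] in DESTRUCTIVE_VERBS

print(is_destructive("kubectl logs web-server"))        # False: read-only
print(is_destructive("kubectl delete pod web-server"))  # True: needs review
```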


πŸ”— Resources & Links

  • Format: GGUF (4-bit and 8-bit quantizations available)
  • Model size: 3.21B params
  • Architecture: llama