
gemma3-270m-leetcode-gguf

Original model: Codingstark/gemma3-270m-leetcode
Format: GGUF
Quantization: bf16

This is a GGUF conversion of the Codingstark/gemma3-270m-leetcode model, ready for use with LM Studio, Ollama, llama.cpp, and other GGUF-compatible inference engines.

Usage

Load this model in any GGUF-compatible application by referencing the .gguf file.
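As a concrete sketch, the model can be run with llama.cpp's CLI or imported into Ollama. The .gguf filename and the model name `gemma3-leetcode` below are illustrative assumptions; substitute the actual file shipped in this repository.

```shell
# Run the converted model with llama.cpp's CLI.
# The .gguf filename is an assumption -- use the file from this repo.
llama-cli -m gemma3-270m-leetcode-bf16.gguf \
  -p "Write a Python function that reverses a linked list." \
  -n 256

# Or import it into Ollama via a minimal Modelfile containing:
#   FROM ./gemma3-270m-leetcode-bf16.gguf
ollama create gemma3-leetcode -f Modelfile
ollama run gemma3-leetcode "How do I solve Two Sum in O(n)?"
```

In LM Studio, the .gguf file can simply be placed in the models directory and selected from the model picker.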

Model Details

  • Original Repository: Codingstark/gemma3-270m-leetcode
  • Converted Format: GGUF
  • Quantization Level: bf16
  • Compatible With: LM Studio, Ollama, llama.cpp, and other GGUF inference engines

Conversion Process

This model was converted using the llama.cpp conversion scripts with the following settings:

  • Input format: Hugging Face Transformers
  • Output format: GGUF
  • Quantization: bf16
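A conversion along these lines can be reproduced with llama.cpp's `convert_hf_to_gguf.py` script. The exact invocation used here is not documented, so the following is a sketch assuming a local llama.cpp checkout; the output filename is illustrative.

```shell
# Get llama.cpp and its conversion-script dependencies.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the original Transformers model from the Hugging Face Hub.
huggingface-cli download Codingstark/gemma3-270m-leetcode \
  --local-dir ./gemma3-270m-leetcode

# Convert to GGUF at bf16 precision.
python llama.cpp/convert_hf_to_gguf.py ./gemma3-270m-leetcode \
  --outfile gemma3-270m-leetcode-bf16.gguf \
  --outtype bf16
```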

License

Please refer to the original model's license terms.

Model size: 268M parameters
Architecture: gemma3