---
base_model: unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---

# Uploaded Model

This model is fine-tuned from the base model `unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit` using Unsloth and Hugging Face's TRL library.
## Key Details

- **Developed by:** NoirZangetsu
- **License:** Apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit

## Model Details

This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library. It leverages 4-bit quantization for efficient inference and is available in GGUF format for broad compatibility across inference platforms.
## Features

- Fast fine-tuning process (2x speed-up)
- Low resource consumption with 4-bit quantization
- GGUF format for diverse inference environments
- Suitable for code generation, auto-completion, text generation, summarization, and translation tasks
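To put "low resource consumption" in concrete terms, here is a rough back-of-the-envelope estimate of weight memory for a ~0.5B-parameter model. This is a simplification: it ignores quantization block overhead, non-quantized layers, and activation/KV-cache memory.

```python
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate memory needed for model weights alone, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 0.5e9  # ~0.5B parameters for the Qwen2.5-Coder-0.5B base

fp16_gb = weight_memory_gb(n_params, 16)  # 16-bit baseline
q4_gb = weight_memory_gb(n_params, 4)     # 4-bit quantized

print(f"fp16: {fp16_gb:.2f} GB, 4-bit: {q4_gb:.2f} GB")
# roughly 1.00 GB vs 0.25 GB for the weights alone
```

In practice total usage is somewhat higher than the 4-bit figure because some layers stay in higher precision and the KV cache grows with context length.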
## Usage Areas

- Code Generation: Enhancing software development with auto-completion and code suggestions.
- Text Generation: Creating, summarizing, and translating text for various NLP tasks.
- Research and Development: An ideal solution for testing and deploying advanced language model applications in both academic and industrial projects.
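Qwen2.5 instruct models use the ChatML conversation format. When running the GGUF file in a runtime where `tokenizer.apply_chat_template` is not available, the prompt can be assembled by hand. A minimal sketch (the example system and user messages are illustrative, not part of the model card):

```python
def build_chatml_prompt(messages: list[dict]) -> str:
    """Assemble a ChatML prompt as used by Qwen2.5 instruct models.

    Each message is {"role": ..., "content": ...}; the trailing
    '<|im_start|>assistant' turn asks the model to generate its reply.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python one-liner that reverses a string."},
])
print(prompt)
```

When loading through `transformers`, prefer the tokenizer's built-in chat template over hand-built strings; this sketch is mainly useful for bare-bones GGUF runtimes.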