Threatthriver/phi4-finetuned-16bit Model

Developed by: Threatthriver

License: Apache-2.0

Fine-tuned from model: unsloth/phi-4-unsloth-bnb-4bit (https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit)

Model Description:

This Phi-4 model, Threatthriver/phi4-finetuned-16bit, was fine-tuned by Threatthriver, potentially for applications in cybersecurity, threat intelligence, or related domains. It was trained with Unsloth (https://github.com/unslothai/unsloth) and Hugging Face's TRL library, which together enable roughly 2x faster training. The model is based on unsloth/phi-4-unsloth-bnb-4bit and was fine-tuned and saved in 16-bit precision.

Intended Use:

This model is intended for research and development purposes. Specifically, it may be suitable for:

  • Text generation related to cybersecurity topics.
  • Experimentation with threat intelligence analysis.
  • Applications involving security automation.

Training Details:

The model was fine-tuned with the Unsloth library, which is known for its efficiency in training large language models. The base model, unsloth/phi-4-unsloth-bnb-4bit, is quantized to 4-bit with bitsandbytes (bnb) for a reduced memory footprint and faster training. The resulting fine-tune, Threatthriver/phi4-finetuned-16bit, was then saved in 16-bit precision.
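
The exact training configuration is not published. The sketch below shows the typical shape of an Unsloth + TRL fine-tune of this base model; the dataset path, LoRA settings, and hyperparameters are illustrative placeholders rather than the values used for this model, and argument names may differ slightly across Unsloth/TRL versions.

# Hedged sketch of an Unsloth + TRL fine-tune of the 4-bit base model;
# dataset, LoRA settings, and hyperparameters are placeholders, not the real config.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit quantized base model for memory-efficient training
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-4-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is updated
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        fp16=True,
    ),
)
trainer.train()

# Merge the adapters into the base weights and save them in 16-bit precision
model.save_pretrained_merged("phi4-finetuned-16bit", tokenizer, save_method="merged_16bit")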

How to Use:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Threatthriver/phi4-finetuned-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the 16-bit weights and place the model on GPU automatically if one is available
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

input_text = "Example input text related to cybersecurity..."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)  # keep the inputs on the model's device

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Replace input_text with a prompt for your specific use case.
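
Phi-4 is an instruction-tuned model family; if this tokenizer carries a chat template (the base model's does), prompts can be formatted with apply_chat_template. A minimal sketch, reusing the model and tokenizer loaded above:

messages = [
    {"role": "user", "content": "Summarize the main stages of a phishing attack."},
]
# Format the conversation with the tokenizer's chat template (assumes one is defined)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))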

Disclaimer:

This model is provided as-is, and no guarantees are made regarding its performance or suitability for any specific task. Use it at your own risk.

Acknowledgements:

Trained 2x faster with Unsloth (https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

Model tree for Threatthriver/phi4-finetuned-16bit:

  • Base model: microsoft/phi-4
  • Fine-tuned from: unsloth/phi-4-unsloth-bnb-4bit
  • This model: Threatthriver/phi4-finetuned-16bit