
GoPhos Quantized Model

Overview

This repository hosts a quantized version of the GoPhos model, optimized for interpreting Sophos logs exported from Splunk. The model can be loaded and run through the mlx-lm library for straightforward log interpretation.

Model Description

The GoPhos model has been quantized to improve efficiency and reduce its memory footprint while retaining its ability to interpret Sophos logs. Quantization yields faster inference and lower resource consumption, making the model well suited to deployment in resource-constrained environments.
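For context, quantized models for mlx-lm are typically produced with the library's `mlx_lm.convert` utility. The exact command used for this repository is not documented here; the invocation below is a sketch, and `<original-gophos-repo>` is a placeholder for the source model's Hugging Face path:

```shell
# Convert a Hugging Face model to MLX format with 4-bit quantization (-q).
# <original-gophos-repo> is a placeholder, not the actual source path.
mlx_lm.convert --hf-path <original-gophos-repo> -q
```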

Usage

To use the quantized GoPhos model, follow these steps:

1. Install the mlx-lm library:

   ```shell
   pip install mlx-lm
   ```

2. Load the model and tokenizer:

   ```python
   from mlx_lm import load, generate

   model, tokenizer = load("SadokBarbouche/gophos-quantized")
   ```

3. Generate log interpretations:

   ```python
   response = generate(model, tokenizer, prompt="hello", verbose=True)
   ```
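The steps above can be combined into a short script. The prompt template and the sample Sophos log line below are illustrative assumptions, not part of the GoPhos release; adapt them to your Splunk export format:

```python
def build_prompt(log_line: str) -> str:
    """Wrap a raw Sophos log line in an instruction for the model.

    The instruction wording is an assumption; GoPhos may expect a
    different prompt format.
    """
    return (
        "Interpret the following Sophos firewall log entry and "
        "summarize what happened:\n" + log_line
    )


def interpret(log_line: str) -> str:
    """Load the quantized model and generate an interpretation.

    Requires `pip install mlx-lm` and Apple-silicon hardware.
    """
    from mlx_lm import load, generate

    model, tokenizer = load("SadokBarbouche/gophos-quantized")
    return generate(model, tokenizer, prompt=build_prompt(log_line))


# Example Sophos-style log line (fabricated for illustration only).
sample_log = (
    'device="SFW" date=2024-01-15 time=10:32:07 log_type="Firewall" '
    'log_subtype="Denied" src_ip=10.0.0.5 dst_ip=8.8.8.8 dst_port=53'
)
prompt = build_prompt(sample_log)
```

Calling `interpret(sample_log)` then returns the model's plain-text summary of the event.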

Evaluation

The quantized GoPhos model has been evaluated for interpretive accuracy and efficiency, showing performance comparable to the original model while delivering faster inference and lower memory usage.

Acknowledgements

We extend our gratitude to the creators of the original GoPhos model for their pioneering work in log interpretation. Additionally, we thank the developers of the mlx-lm library for providing a convenient interface for model loading and generation.

Model Details

- Format: Safetensors
- Model size: 834M params
- Tensor types: FP16, U32