
Gophos - Sophos Log Interpreter - Gemma 2B-IT Fine-tuned Model

Overview

This repository contains a fine-tuned version of the Gemma 2B-IT model, tailored specifically for interpreting Sophos logs exported from Splunk. The model is hosted on Hugging Face for easy integration and usage in various applications requiring interpretation and analysis of Sophos logs.

Model Description

The Gemma 2B-IT model has been fine-tuned on a dataset of Sophos logs extracted from Splunk. Through this fine-tuning process, the model has been optimized to interpret and extract meaningful information from Sophos logs, supporting tasks such as threat detection, security analysis, and incident response.

Usage

To use the model, install the Hugging Face transformers library and load the model by its repository ID:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned Gemma 2B-IT model (a causal language model used for log interpretation)
model = AutoModelForCausalLM.from_pretrained("SadokBarbouche/gophos")

# Load the matching tokenizer
tokenizer = AutoTokenizer.from_pretrained("SadokBarbouche/gophos")
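
Once loaded, the model can be prompted with a raw log entry. The following is a minimal inference sketch: the example log line, prompt wording, and generation settings are illustrative assumptions, and it presumes the tokenizer retains the Gemma-IT chat template.

import torch

# Hypothetical Sophos log line exported from Splunk (illustrative only)
log_line = 'device="SFW" log_type="Firewall" status="Deny" src_ip="10.0.0.5" dst_port="445"'

# Wrap the log in a simple instruction and apply the chat template
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": f"Interpret the following Sophos log entry:\n{log_line}"}],
    add_generation_prompt=True,
    return_tensors="pt",
)

# Generate the model's interpretation of the log entry
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))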

Data Preparation

The fine-tuning of the Gemma 2B-IT model was conducted using a dataset of Sophos logs exported from Splunk. The dataset was preprocessed to ensure compatibility with the model architecture and to optimize training performance.
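
For illustration, the sketch below shows one plausible way such preprocessing could look, turning a Splunk CSV export into prompt/response pairs. The file name, column names ("_raw", "analyst_summary"), and prompt wording are assumptions and do not describe the actual pipeline.

import csv

# Hypothetical preprocessing sketch: convert a Splunk CSV export of Sophos logs
# into prompt/response pairs suitable for supervised fine-tuning.
def build_examples(csv_path="sophos_export.csv"):
    examples = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            raw_log = row["_raw"]             # raw Sophos event as exported by Splunk (assumed column)
            target = row["analyst_summary"]   # reference interpretation used as the target (assumed column)
            examples.append({
                "prompt": f"Interpret the following Sophos log entry:\n{raw_log}",
                "response": target,
            })
    return examples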

Acknowledgements

We would like to acknowledge the creators of the Gemma 2B-IT model for their pioneering work in natural language understanding. Additionally, we extend our gratitude to the contributors of the Hugging Face transformers library for their valuable tools and resources.
