SadokBarbouche committed
Commit 6197931
1 Parent(s): 5af88ab

Update README.md

Files changed (1)
  1. README.md +21 -9
README.md CHANGED
@@ -1,21 +1,33 @@
- ---
- library_name: transformers
- tags:
- - mlx
- ---

- # SadokBarbouche/gophos-quantized
- This model was converted to MLX format from [`SadokBarbouche/gophos`]() using mlx-lm version **0.5.0**.
- Refer to the [original model card](https://huggingface.co/SadokBarbouche/gophos) for more details on the model.
- ## Use with mlx
  ```bash
  pip install mlx-lm
  ```

  ```python
  from mlx_lm import load, generate

  model, tokenizer = load("SadokBarbouche/gophos-quantized")
  response = generate(model, tokenizer, prompt="hello", verbose=True)
  ```

+ # GoPhos Quantized Model
+
+ ## Overview
+ This repository hosts the quantized version of the GoPhos model, optimized for interpreting Sophos logs exported from Splunk. The model is packaged for use with the `mlx-lm` library, making log-interpretation tasks easy to integrate.
+
+ ## Model Description
+ The GoPhos model has been quantized to improve efficiency and reduce its memory footprint while retaining its ability to interpret Sophos logs. Quantization yields faster inference and lower resource consumption, making the model well suited to resource-constrained environments.
+
+ ## Usage
+ To use the quantized GoPhos model, follow these steps:
+
+ 1. Install the `mlx-lm` library:
  ```bash
  pip install mlx-lm
  ```

+ 2. Load the model and tokenizer:
  ```python
  from mlx_lm import load, generate

  model, tokenizer = load("SadokBarbouche/gophos-quantized")
+ ```
+
+ 3. Generate log interpretations:
+ ```python
  response = generate(model, tokenizer, prompt="hello", verbose=True)
  ```
+
+ ## Evaluation
+ The quantized GoPhos model has been evaluated for interpretational accuracy and efficiency, showing performance comparable to the original model with faster inference and lower memory usage.
+
+ ## Acknowledgements
+ We thank the creators of the original GoPhos model for their pioneering work in log interpretation, and the developers of the `mlx-lm` library for providing a convenient interface for model loading and generation.
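The usage steps in the updated README pass a placeholder prompt (`"hello"`); for actual Sophos log interpretation one would presumably feed a log line as the prompt. A minimal sketch of such a step, where the `build_prompt` helper, its template wording, and the sample log line are all illustrative assumptions and not part of the model card:

```python
# Hypothetical helper: wrap a raw Sophos/Splunk log line in an
# instruction-style prompt before passing it to mlx_lm's generate().
# The template wording is an assumption, not from the model card.
def build_prompt(log_line: str) -> str:
    return (
        "Interpret the following Sophos log entry and explain what happened:\n"
        f"{log_line.strip()}\n"
        "Interpretation:"
    )

# Illustrative log line only; real exports from Splunk will differ.
sample = 'device="SFW" log_type="Firewall" status="Deny" src_ip="10.0.0.5"'
prompt = build_prompt(sample)
print(prompt)

# With the model loaded as in the README, one would then call:
# response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The only change from the README's example is the prompt string; `load` and `generate` are used exactly as shown there.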