---
license: apache-2.0
language:
- en
tags:
- code
- cybersecurity
- penetration testing
- hacking
- uncensored
---
# Prox-Llama-3-8B

By [OpenVoid](https://openvoid.ai)

<img src="https://cdn.openvoid.ai/images/prox-llama3.png" width="500" />

## Model Description

Prox-Llama-3-8B is an uncensored fine-tune of Meta-Llama-3-8B-Instruct, tailored for specialized applications in code generation and cybersecurity.

## Intended Uses & Limitations

Designed for tasks related to hacking and coding:

- Code generation
- Code explanation and documentation
- Answering questions on hacking techniques and cybersecurity
- Providing coding project insights
32 |
+
Review and verify outputs carefully, especially for critical applications. Expert validation is recommended to avoid biased or inconsistent content. Use responsibly and ethically, complying with applicable laws and regulations to prevent misuse for malicious purposes.
|

## Training Data

The model was fine-tuned on a proprietary OpenVoid dataset of high-quality text related to coding, cybersecurity, and hacking. Extensive filtering and preprocessing ensured data quality and relevance.

## Evaluation

- **HumanEval v1.0**: pass@1: 0.561
- **EvalPlus v1.1**: pass@1: 0.500
- **AGIEval**: 40.74
- **GPT4All+**: 70.17
- **TruthfulQA+**: 51.15
- **Bigbench+**: 44.12
- **Average**: 51.55

## How to Use the Model

### Using Transformers

Example of using Prox-Llama-3-8B with the Transformers library:

```python
import transformers
import torch

model_id = "openvoid/Prox-Llama-3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Prox."},
    {"role": "user", "content": "Who are you?"},
]

# Stop on either the tokenizer's EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# The pipeline returns the full chat history; the last entry is the model's reply.
print(outputs[0]["generated_text"][-1])
```
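
### Using AutoModelForCausalLM

If you want direct control over tokenization and decoding, the same chat format can be driven through `AutoModelForCausalLM`. The following is a minimal sketch, assuming the repository ships the standard Llama 3 chat template; the example prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openvoid/Prox-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; any chat-style conversation works here.
messages = [
    {"role": "system", "content": "You are Prox."},
    {"role": "user", "content": "Explain what a buffer overflow is."},
]

# Build the Llama 3 chat prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping special tokens.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```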