---
datasets:
- theeseus-ai/RiskClassifier
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- gguf
- quantized
- risk-analysis
- fine-tuned
library_name: llama_cpp
---

# GGUF Version - Risk Assessment LLaMA Model

## Model Overview

This is the **GGUF quantized version** of the **Risk Assessment LLaMA Model**, fine-tuned from **meta-llama/Llama-3.1-8B-Instruct** on the **theeseus-ai/RiskClassifier** dataset. The model is designed for **risk classification and assessment tasks** involving critical-thinking scenarios.

This version is optimized for **low-latency inference** and deployment in resource-constrained environments using **llama.cpp**.

## Model Details

- **Base Model:** meta-llama/Llama-3.1-8B-Instruct
- **Quantization Format:** GGUF
- **Fine-tuning Dataset:** [theeseus-ai/RiskClassifier](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Architecture:** Transformer-based language model (LLaMA 3.1)
- **Use Case:** Risk analysis, classification, and reasoning tasks

## Supported Platforms

This GGUF model is compatible with:

- **llama.cpp**
- **text-generation-webui**
- **ollama**
- **GPT4All**
- **KoboldAI**

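For example, the GGUF file can be registered with **ollama** through a minimal `Modelfile`; the filename below matches the one used in the inference examples in this card:

```
FROM ./risk-assessment-gguf-model.gguf
```

With that file in place, `ollama create risk-assessment -f Modelfile` registers the model under the (arbitrary) local name `risk-assessment`, and `ollama run risk-assessment` starts an interactive session.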
## Quantization Details

The GGUF format allows the model to run efficiently on:

- CPUs (Intel/AMD processors)
- GPUs via the CUDA, ROCm, or Metal backends
- Apple Silicon (M1/M2)
- Embedded devices such as the Raspberry Pi

**Available quantizations:**

- **Q4_0, Q4_K_M, Q5_0, Q5_K, Q8_0** (choose based on the size/quality trade-off you need)

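If you start from a higher-precision GGUF file, the smaller variants can be produced with llama.cpp's `llama-quantize` tool; the input and output filenames here are placeholders:

```bash
# Placeholder filenames; pick the target type from the list above.
./llama-quantize risk-assessment-f16.gguf risk-assessment-q4_k_m.gguf Q4_K_M
```

Q4_K_M is a common middle ground between file size and output quality, while Q8_0 stays closest to the original weights.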
## Model Capabilities

The model performs the following tasks:

- **Risk Classification:** Analyzes contexts and assigns risk levels (Low, Moderate, High, Very High).
- **Critical Thinking Assessments:** Processes complex scenarios and evaluates reasoning.
- **Explanations:** Provides justifications for assigned risk levels.

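Since the model answers in free text, downstream code typically needs to normalize the answer. The helper below is a minimal sketch (not part of this release) that maps an output string onto the four risk levels listed above:

```python
import re
from typing import Optional

# Check the two-word level first so "very high" is not reported as "High".
RISK_LEVELS = ["Very High", "High", "Moderate", "Low"]

def extract_risk_level(text: str) -> Optional[str]:
    """Return the first risk level mentioned in a model response, or None."""
    for level in RISK_LEVELS:
        if re.search(rf"\b{re.escape(level)}\b", text, flags=re.IGNORECASE):
            return level
    return None

print(extract_risk_level("Risk level: HIGH, due to the offshore destination."))  # → High
```

For production use, constraining generation with a grammar or JSON-mode sampling is more robust than string matching on free text.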
## Example Use

### Inference with llama.cpp

```bash
# Single quotes keep bash from expanding "$1" inside the prompt.
# Recent llama.cpp builds name this binary llama-cli instead of main.
./main -m risk-assessment-gguf-model.gguf -n 256 \
  -p 'Analyze this transaction: $10,000 wire transfer to offshore account detected from a new device. What is the risk level?'
```

### Inference with Python (llama-cpp-python)

```python
from llama_cpp import Llama

model = Llama(model_path="risk-assessment-gguf-model.gguf", n_ctx=2048)
prompt = (
    "Analyze this transaction: $10,000 wire transfer to offshore account "
    "detected from a new device. What is the risk level?"
)
output = model(prompt, max_tokens=256)
print(output["choices"][0]["text"])  # the generated completion
```

## Applications

- Fraud detection and transaction monitoring.
- Automated risk evaluation for compliance and auditing.
- Decision support systems for cybersecurity.
- Risk-level assessments in critical scenarios.

## Limitations

- Model outputs should be reviewed by domain experts before any decisions are acted upon.
- Performance depends on context length and prompt design.
- Domain-specific applications may require further fine-tuning.

## Evaluation

### Metrics

- **Accuracy on Risk Levels:** Evaluated against test cases with labeled risk scores.
- **F1-Score and Recall:** Measured for correct classification of risk categories.

### Results

- **Accuracy:** 91.2%
- **F1-Score:** 0.89

## Ethical Considerations

- **Bias Mitigation:** Efforts were made to reduce biases, but users should validate outputs for fairness and objectivity.
- **Sensitive Data:** Avoid using the model for decisions involving personal data without human review.

## Model Sources

- **Dataset:** [RiskClassifier Dataset](https://huggingface.co/datasets/theeseus-ai/RiskClassifier)
- **Base Model:** [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)

## Citation

```bibtex
@misc{riskclassifier2024,
  title={Risk Assessment LLaMA Model (GGUF)},
  author={Theeseus AI},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/theeseus-ai/RiskClassifier}
}
```

## Contact

- **Author:** Theeseus AI
- **LinkedIn:** [Theeseus](https://www.linkedin.com/in/theeseus/)
- **Email:** theeseus@protonmail.com