atharvapawar committed efe1985 (parent: 485abb3): v1

README.md
---
title: Fine-tuned Model Card
authors:
- Your Name
date: August 2023
tags:
- Code Generation
- Text2Text Generation
- Python
- Vulnerability Rule
---

## Model Overview

- **Library:** PEFT
- **Language:** English (en)
- **Pipeline Tag:** Text2Text Generation
- **Tags:** Code Generation (code)

## Model Details

This model was fine-tuned from the Llama-2 base model on a dataset of Python code vulnerability rules.

## Training Procedure

The model was trained with a quantization configuration using the `bitsandbytes` quantization method; a configuration sketch follows the list. Key settings include:

- **Quantization Method:** bitsandbytes
- **Load in 8-bit:** False
- **Load in 4-bit:** True
- **LLM Int8 Threshold:** 6.0
- **LLM Int8 Skip Modules:** None
- **LLM Int8 Enable FP32 CPU Offload:** False
- **LLM Int8 Has FP16 Weight:** False
- **BNB 4-bit Quant Type:** nf4
- **BNB 4-bit Use Double Quant:** False
- **BNB 4-bit Compute Dtype:** float16
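
These settings map one-to-one onto `transformers.BitsAndBytesConfig`. Below is a minimal loading sketch; the `meta-llama/Llama-2-7b-hf` checkpoint id is an assumption, since the card does not say which Llama-2 checkpoint was used.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base checkpoint: the card only says "Llama-2".
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```

This is the standard QLoRA-style setup: weights are stored in 4-bit NormalFloat (nf4) and dequantized to float16 for computation.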

## Framework Versions

- **PEFT:** 0.6.0.dev0
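
Since `0.6.0.dev0` is a development build rather than a PyPI release, it would have been installed from source (e.g. `pip install git+https://github.com/huggingface/peft`). A minimal check of the installed version:

```python
import peft

# The card pins a development build of PEFT; verify what is installed.
print(peft.__version__)  # expected: 0.6.0.dev0
```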

---

<!-- This model card has been generated automatically. Please review and complete it as needed. -->

# Model Details

This model card describes a model fine-tuned with the PEFT library for text-to-text generation tasks, particularly code generation and vulnerability rule detection.

## Intended Use

The model is intended for generating text outputs from text inputs, and has been fine-tuned specifically for code generation and vulnerability rule detection. Users can provide natural-language descriptions, code snippets, or other relevant context to generate corresponding code outputs; a usage sketch follows.
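
A minimal inference sketch, assuming the fine-tuned weights are published as a PEFT adapter; both repository ids are placeholders, since the card names neither the base checkpoint nor the adapter repo:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-hf"        # placeholder base checkpoint
ADAPTER = "atharvapawar/<adapter-repo>"  # placeholder adapter repo

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the fine-tuned adapter

prompt = "Write a rule that flags use of eval() on untrusted input in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```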

## Limitations and Considerations

Although the model has been fine-tuned for code generation, its outputs may still require human review and validation, and it may not cover all possible code variations or edge cases. Review generated code thoroughly before deployment.

## Training Data

The model was trained on a dataset of Python code vulnerability rules. The dataset includes examples of code patterns that could potentially indicate vulnerabilities or security risks.
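
The card does not document the dataset schema. Purely as an illustration of what "code vulnerability rules" as training data could look like, here is a hypothetical instruction/output record; every field name and value below is invented for this sketch:

```python
# Hypothetical record; the actual dataset schema is not documented in the card.
example_record = {
    "instruction": "Write a detection rule for unsafe deserialization in Python.",
    "output": (
        "rule: avoid pickle.loads on untrusted input\n"
        "pattern: pickle.loads($DATA)\n"
        "severity: high"
    ),
}
```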

## Training Procedure

The model was trained using the PEFT library, with the `bitsandbytes` quantization configuration described earlier, over multiple training epochs to optimize its performance on code generation tasks. A training sketch follows.
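
A sketch of how such a run is typically set up with PEFT; the card does not name the adapter method, so LoRA (the usual choice for 4-bit Llama-2 fine-tuning, i.e. QLoRA) and all hyperparameters below are assumptions:

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed base checkpoint; the card only says "Llama-2".
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# All hyperparameters are illustrative, not taken from the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
# Training then proceeds for multiple epochs, e.g. with transformers.Trainer
# or trl's SFTTrainer, on the vulnerability-rule dataset.
```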

## Model Evaluation

The model's performance has not been quantitatively evaluated in this model card. Users are encouraged to evaluate the generated outputs for their specific use case and domain.