--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi-2-prompt-injection-QLoRA results: [] datasets: - HuggingFaceH4/no_robots - Dahoas/synthetic-hh-rlhf-prompts - HuggingFaceH4/ultrachat_200k - Lakera/gandalf_ignore_instructions - imoxto/prompt_injection_cleaned_dataset-v2 - hackaprompt/hackaprompt-dataset - rubend18/ChatGPT-Jailbreak-Prompts - HuggingFaceH4/instruction-dataset --- # phi-2-prompt-injection-QLoRA Weights updated at 03/07/2024. (Training epochs increased, accuracy improved than before) View training code: https://github.com/AIM-Intelligence/phi-2-prompt-injection-QLoRA Try out the model in: https://aim-intelligence.com This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0000 - eval_precision: 1.0 - eval_recall: 1.0 - eval_f1-score: 1.0 - eval_accuracy: 1.0 - eval_runtime: 16.0258 - eval_samples_per_second: 8.424 - eval_steps_per_second: 1.061 - step: 0 ## Model description More information needed ## Intended uses & limitations ``` tokenizer = AutoTokenizer.from_pretrained("ysy970923/phi-2-prompt-injection-QLoRA") model = AutoModelForSequenceClassification.from_pretrained("ysy970923/phi-2-prompt-injection-QLoRA", load_in_4bit=True, torch_dtype=torch.bfloat16, id2label={0: "SAFE", 1: "INJECTION"}) # LABEL_0 is safe, LABEL_1 is prompt_injection ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2