---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-FCRL-v0.1
  results: []
---

# phi-2-FCRL-v0.1

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4), [nRuaif/OpenOrca-GPT3.5](https://huggingface.co/datasets/nRuaif/OpenOrca-GPT3.5), and [sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) datasets.

### Quick start

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the same repository.
model = AutoModelForCausalLM.from_pretrained(
    "npvinHnivqn/phi-2-FCRL-v0.1", trust_remote_code=True, torch_dtype=torch.float32
)
tokenizer = AutoTokenizer.from_pretrained("npvinHnivqn/phi-2-FCRL-v0.1", trust_remote_code=True)

# Prompt format used by this model: a system preamble, then <|USER|> and <|BOT|> turns.
prompt = ''': You are a very good and helpful chatbot, you can answer almost every question.
<|USER|>: Write a short story about a curious cat
<|BOT|>:'''

inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=512)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
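
### Training setup (sketch)

Since this card only lists the core hyperparameters, the following is a minimal sketch of how they map onto a PEFT + Transformers training setup. The LoRA settings (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) and the commented-out dataset wiring are assumptions for illustration, not values taken from the actual training run.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Load the base model the same way as in the quick-start example.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", trust_remote_code=True, torch_dtype=torch.float32
)

# Assumed LoRA configuration: this card does not state r/alpha/target_modules,
# so these are common choices for phi-2 adapters, not the actual values.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)
model = get_peft_model(base, lora)

# These arguments mirror the hyperparameters listed above; the Adam betas and
# epsilon are the Transformers defaults, which match what the card reports.
args = TrainingArguments(
    output_dir="phi-2-FCRL-v0.1",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# Dataset preparation is not described in this card, so it is left as a placeholder:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,  # tokenized mix of the three datasets above
#                   eval_dataset=...)
# trainer.train()
```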