Quazim0t0 committed
Commit f624dbb · verified · Parent(s): bcf9121

Update README.md

Files changed (1): README.md (+32 -6)

README.md CHANGED
@@ -13,12 +13,38 @@ datasets:
 - bespokelabs/Bespoke-Stratos-17k
 ---
 
-# Uploaded model
-
-- **Developed by:** Quazim0t0
-- **License:** apache-2.0
-- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
-
-This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+# Phi4 Turn R1Distill LoRA Adapters
+
+## Overview
+Hey! These LoRA adapters were trained on several reasoning datasets that structure responses into **Thought** and **Solution** sections.
+I hope these help jumpstart your project! The adapters were trained on an **A800 GPU** and should provide a solid base for further fine-tuning or merging.
+
+Everything on my page is left **public** for open-source use.
+
+## Available LoRA Adapters
+Here are the links to the available adapters as of **January 30, 2025**:
+
+- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
+- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
+- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
+- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
+- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
+- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
+- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
+- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)
+
+## Usage
+These adapters can be loaded with `peft` and `transformers`. Here's a quick example:
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from peft import PeftModel
+
+base_model = "microsoft/Phi-4"
+lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"
+
+tokenizer = AutoTokenizer.from_pretrained(base_model)
+model = AutoModelForCausalLM.from_pretrained(base_model)
+model = PeftModel.from_pretrained(model, lora_adapter)
+
+model.eval()
+```
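
Since the README mentions merging the adapters into the base model, here is a generic sketch of what a LoRA merge actually does numerically. This is not from this repo's docs: a LoRA adapter stores two low-rank factors `B` and `A`, the effective update is `(alpha / r) * B @ A`, and merging simply adds that update into the frozen weight. All shapes and hyperparameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                    # base weight is d x k; LoRA rank r
alpha = 16                           # LoRA scaling hyperparameter (illustrative value)
W = rng.standard_normal((d, k))      # frozen base weight
A = rng.standard_normal((r, k))      # trained low-rank factor
B = rng.standard_normal((d, r))      # trained low-rank factor (initialized to zero during training)

delta = (alpha / r) * (B @ A)        # the adapter's weight update, rank <= r
W_merged = W + delta                 # "merging" bakes the update into the base weight

x = rng.standard_normal(k)
y_separate = W @ x + (alpha / r) * (B @ (A @ x))  # base + adapter applied at runtime
y_merged = W_merged @ x                           # merged weight, no adapter overhead
assert np.allclose(y_separate, y_merged)
```

In `peft`, this is what merging an adapter into the base checkpoint accomplishes: after the merge, inference runs at the speed of the plain base model.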
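
The Overview says responses are structured into **Thought** and **Solution** parts. A small helper for splitting a generated response can be useful downstream; the delimiter tokens below are an assumption in the style of Bespoke-Stratos-type reasoning traces, so check the actual adapter output before relying on them.

```python
import re

# Assumed delimiters (Bespoke-Stratos style) -- verify against real model output.
THOUGHT = re.compile(r"<\|begin_of_thought\|>(.*?)<\|end_of_thought\|>", re.S)
SOLUTION = re.compile(r"<\|begin_of_solution\|>(.*?)<\|end_of_solution\|>", re.S)

def split_response(text: str) -> dict:
    """Split a model response into its Thought and Solution parts."""
    thought = THOUGHT.search(text)
    solution = SOLUTION.search(text)
    return {
        "thought": thought.group(1).strip() if thought else "",
        "solution": solution.group(1).strip() if solution else text.strip(),
    }

demo = ("<|begin_of_thought|>2+2 is basic arithmetic.<|end_of_thought|>"
        "<|begin_of_solution|>4<|end_of_solution|>")
parts = split_response(demo)
```

If a response contains no delimiters at all, the helper falls back to treating the whole text as the solution.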