---
license: apache-2.0
tags:
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- abideen/Heimer-dpo-TinyLlama-1.1B
- abideen/Heimer-kto-TinyLlama-1.1B
- Intel/orca_dpo_pairs
language:
- en
datasets:
- Intel/orca_dpo_pairs
library_name: transformers
---

# Heimer-ipo-TinyLlama-1.1B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/a7joKICVpqGElN3mh2MpS.jpeg)

# WandB Experiment Tracking

Check out the experiment details in this [report](https://api.wandb.ai/links/zaiinn440/dqlt70dc).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/HLhrSUFD1-e6f31F3WGK2.png)

# 🧩 DPO adaptation hyperparameters

## LoRA:

- `r=8`
- `lora_alpha=16`
- `lora_dropout=0.05`
- `bias="none"`
- `task_type="CAUSAL_LM"`
- `target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']`

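The LoRA settings above map onto a `peft` `LoraConfig`; a minimal sketch, assuming a recent `peft` release:

```python
from peft import LoraConfig

# LoRA adapter configuration matching the hyperparameters listed above
peft_config = LoraConfig(
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,          # scaling factor applied to the LoRA update
    lora_dropout=0.05,      # dropout on the adapter layers during training
    bias="none",            # bias terms are left frozen
    task_type="CAUSAL_LM",  # decoder-only language modeling
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj',
                    'q_proj', 'o_proj', 'down_proj'],
)
```

Targeting all attention and MLP projection layers (rather than only `q_proj`/`v_proj`) is a common choice for small models, since the added parameter count stays modest.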
## Training arguments:

- `per_device_train_batch_size=2`
- `gradient_accumulation_steps=4`
- `gradient_checkpointing=True`
- `learning_rate=5e-5`
- `lr_scheduler_type="cosine"`
- `max_steps=50`
- `optim="paged_adamw_32bit"`
- `warmup_steps=10`

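These values correspond to a `transformers` `TrainingArguments` object; a sketch under that assumption (`output_dir` is a placeholder, not taken from the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective batch size of 8 per device
    gradient_checkpointing=True,    # recompute activations to save memory
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=50,
    optim="paged_adamw_32bit",      # paged AdamW optimizer from bitsandbytes
    warmup_steps=10,
    output_dir="./results",         # placeholder, not from the original run
)
```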
## DPOTrainer:

- `beta=0.1`
- `max_prompt_length=1024`
- `max_length=1536`
- `loss="ipo"`

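Wired together, the settings above would be passed to `trl`'s `DPOTrainer` roughly as follows. This is a sketch, not the original training script: `model`, `training_args`, `tokenizer`, and `train_dataset` are assumed to be defined elsewhere, and `loss_type="ipo"` is the `trl` keyword for the `loss="ipo"` setting listed above.

```python
from trl import DPOTrainer

# Sketch only: model, training_args, tokenizer, and train_dataset
# are assumed to be defined elsewhere.
dpo_trainer = DPOTrainer(
    model,
    ref_model=None,       # with LoRA adapters, trl can derive the reference
                          # policy by disabling the adapter
    args=training_args,
    beta=0.1,             # strength of the penalty toward the reference policy
    max_prompt_length=1024,
    max_length=1536,      # prompt + completion token budget
    loss_type="ipo",      # IPO loss variant instead of the default DPO sigmoid loss
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
dpo_trainer.train()
```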
## 💻 Usage

Here's a [Colab notebook](https://colab.research.google.com/drive/11KEX1LG3nRBoeGR0Iyy-459XllGlLOA9?usp=sharing) to run Heimer-ipo-TinyLlama-1.1B in 4-bit precision on a free T4 GPU.

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "abideen/Heimer-ipo-TinyLlama-1.1B"
messages = [{"role": "user", "content": "Explain what is Data science."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Example output:

> "What is Data Science?
> A data scientist is an individual who has a passion for data and knowledge of the technology that can be used to help make sense of data. Data scientists are often involved in the development of new software and software platforms, as well as analyzing and interpreting data.
> What are the Important components of Data Science?
> 1. Data: The data is the most important component of a data science project. Data science is the application of data science to make sense of data. Data scientists usually work with data, but data scientists are not necessarily data scientists.
> 2. Analysis: This is the process of taking data and turning it into something useful.
> 3. Modeling: The use of machine learning and statistical techniques.
> 4. Prediction: The prediction of a future event, such as the future market share of a product or the future population of an area.
> 5. Visualization: Displaying the data in a graphical or interactive format.
> 6. Statistics: The use of statistical analysis techniques.
> What are the Advantages of Data Science?
> Data science is the application of data science to make sense of data."