gvij committed on
Commit e657615
1 Parent(s): 054b7c3

Update README.md

Files changed (1)
  1. README.md +38 -1
README.md CHANGED
---
datasets:
- nampdn-ai/tiny-codes
library_name: peft
license: apache-2.0
tags:
- llama2
- llama2-7b
- code-generation
- code generation
- tiny-code
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
- gpt2
---

We finetuned Meta's Llama 2 7B model on the tiny-codes dataset (nampdn-ai/tiny-codes) for ~10,000 steps using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

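Since the card lists `library_name: peft`, what ships here is a LoRA-style adapter on top of the base checkpoint rather than full model weights. Below is a minimal inference sketch; the adapter repo id is a placeholder for this repository, and access to the gated `meta-llama/Llama-2-7b-hf` base model is assumed.

```python
# Minimal sketch: attach the PEFT adapter to the Llama 2 7B base model.
# "your-org/llama2-7b-tiny-codes" is a placeholder for this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"          # gated base model
adapter_id = "your-org/llama2-7b-tiny-codes"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # apply the adapter

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
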
The dataset has **1.63 million rows** of short, clear code snippets intended to help LLMs learn to reason across both natural and programming languages. It covers a wide range of programming languages, such as Python, TypeScript, JavaScript, Ruby, Julia, Rust, C++, Bash, Java, C#, and Go, and also includes two database languages, Cypher (for graph databases) and SQL (for relational databases), for reasoning about relationships between entities.

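To get a feel for what the model saw during finetuning, the dataset can be inspected directly from the Hub. A small sketch, assuming the default `train` split; column names are printed rather than assumed:

```python
# Sketch: peek at the nampdn-ai/tiny-codes dataset used for finetuning.
from datasets import load_dataset

ds = load_dataset("nampdn-ai/tiny-codes", split="train")
print(ds)     # row count and column names
print(ds[0])  # one example record
```
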
The finetuning run completed in 53 hours and cost us ~`$125` in total.

#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: nampdn-ai/tiny-codes
- Learning rate: 0.0002
- Number of epochs: 1 (10k steps)
- Data split: Training 90% / Validation 10%
- Gradient accumulation steps: 1

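For readers who want to reproduce something similar outside the no-code finetuner, the run details above map onto a fairly standard PEFT/LoRA configuration. The sketch below is an approximation, not MonsterAPI's exact recipe: the LoRA rank, alpha, dropout, and batch size are assumptions; only the learning rate, epoch count, data split, and gradient accumulation come from the list above.

```python
# Approximate stand-in for the listed run settings; values marked "assumed"
# are not from the card.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments

# 90/10 train/validation split, as listed on the card
data = load_dataset("nampdn-ai/tiny-codes", split="train").train_test_split(test_size=0.1)

lora_config = LoraConfig(
    r=16,                # assumed
    lora_alpha=32,       # assumed
    lora_dropout=0.05,   # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-7b-tiny-codes",
    learning_rate=2e-4,             # from the card
    num_train_epochs=1,             # from the card (~10k steps)
    gradient_accumulation_steps=1,  # from the card
    per_device_train_batch_size=4,  # assumed
    fp16=True,
    logging_steps=50,
)
```

A full run would pass `lora_config`, `training_args`, the tokenizer, and the tokenized splits to a trainer such as `trl`'s `SFTTrainer`.
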
Loss metrics:

![training loss](train-loss.png "Training loss")