jaigouk committed
Commit 8fffc4f
1 Parent(s): a6a15e6

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +67 -3

README.md CHANGED
@@ -1,3 +1,67 @@
- ---
- license: apache-2.0
- ---

# Ruby dataset

**Custom ruby dataset**

- rspec_dataset

The rspec dataset is described in the section below.

**Bigcode dataset**

- ruby-dataset
- shell-dataset
- python-dataset
- sql-dataset

## rspec dataset

I gather specs for `app/services` from the following repos, because most of the business logic in these apps lives in `app/services`:

```py
REPO_URLS = [
    'https://github.com/diaspora/diaspora.git',
    'https://github.com/mastodon/mastodon.git',
    'https://github.com/gitlabhq/gitlabhq.git',
    'https://github.com/discourse/discourse.git',
    'https://github.com/chatwoot/chatwoot.git',
    'https://github.com/opf/openproject.git',
]
```
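
The collection script itself is not included in this README. As a rough illustration only, a minimal sketch of cloning each repo and pairing `app/services` files with their specs could look like the following (the function and field names here are hypothetical, not the actual pipeline):

```py
# Hypothetical sketch only -- not the actual collection pipeline.
# Clones each repo shallowly and pairs app/services sources with their specs.
import subprocess
from pathlib import Path


def collect_pairs(repo_url, workdir):
    name = repo_url.rstrip('/').rsplit('/', 1)[-1].removesuffix('.git')
    repo_dir = Path(workdir) / name
    if not repo_dir.exists():
        subprocess.run(['git', 'clone', '--depth', '1', repo_url, str(repo_dir)], check=True)

    pairs = []
    for source in (repo_dir / 'app' / 'services').rglob('*.rb'):
        # Rails convention: app/services/foo/bar.rb -> spec/services/foo/bar_spec.rb
        relative = source.relative_to(repo_dir / 'app')
        spec = repo_dir / 'spec' / relative.with_name(relative.stem + '_spec.rb')
        if spec.exists():
            pairs.append({
                'repo': name,
                'source_path': str(source.relative_to(repo_dir)),
                'source': source.read_text(encoding='utf-8', errors='ignore'),
                'spec': spec.read_text(encoding='utf-8', errors='ignore'),
            })
    return pairs


# pairs = [p for url in REPO_URLS for p in collect_pairs(url, '/tmp/repos')]
```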

Summary of the collected data:

```sh
Repository     Avg Source Lines   Avg Test Lines   Test Cases
diaspora                     62              156           12
mastodon                     97              131           59
gitlabhq                     66              154          952
discourse                   188              303           49
chatwoot                     63              107           50
openproject                  86              178           98
------------------------------------------------------------
Total                        74              159         1220
------------------------------------------------------------

# avg_source_lines = [62, 97, 66, 188, 63, 86]
# avg_test_lines = [156, 131, 154, 303, 107, 178]
# test_cases = [12, 59, 952, 49, 50, 98]

# Assuming an average of 10 tokens per line of code, which is a rough average for programming languages
# tokens_per_line = 10

# Calculating the total tokens for source and test lines
# total_source_tokens = sum([lines * tokens_per_line for lines in avg_source_lines])
# total_test_tokens = sum([lines * tokens_per_line for lines in avg_test_lines])

# Total tokens
# total_tokens = total_source_tokens + total_test_tokens

# Average tokens per test case
# avg_tokens_per_test_case = total_tokens / sum(test_cases)

# total_tokens, avg_tokens_per_test_case
# -> (15910, 13.040983606557377)
```
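
The commented-out estimate above can be run directly; here is the same calculation as a standalone snippet, keeping the same rough 10-tokens-per-line assumption:

```py
# Reproduces the rough token estimate from the summary above.
avg_source_lines = [62, 97, 66, 188, 63, 86]
avg_test_lines = [156, 131, 154, 303, 107, 178]
test_cases = [12, 59, 952, 49, 50, 98]

tokens_per_line = 10  # rough average tokens per line of code

total_source_tokens = sum(lines * tokens_per_line for lines in avg_source_lines)
total_test_tokens = sum(lines * tokens_per_line for lines in avg_test_lines)
total_tokens = total_source_tokens + total_test_tokens

avg_tokens_per_test_case = total_tokens / sum(test_cases)

print(total_tokens, avg_tokens_per_test_case)  # 15910 13.040983606557377
```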

When you prepare data for training or inference with an LLM, each example (in this case, a test case together with its source snippet) needs to fit within the model's context window. The average tokens per test case estimated above (approximately 13.04) is well within the limits of current LLMs.
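
To check this for a specific model rather than relying on the per-line rule of thumb, token counts can be measured with that model's own tokenizer. A minimal sketch, assuming the `transformers` library, the hypothetical `source`/`spec` fields from the pairing sketch above, and `gpt2` as a stand-in tokenizer:

```py
# Sketch: measure real token counts instead of the 10-tokens-per-line estimate.
# The tokenizer and the max_tokens value are placeholder assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')


def fits_context(example, max_tokens=8192):
    """Return True if source + spec together fit in the context window."""
    text = example['source'] + '\n' + example['spec']
    return len(tokenizer.encode(text)) <= max_tokens
```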