---
license: openrail
dataset_info:
  features:
    - name: hexsha
      dtype: string
    - name: size
      dtype: int64
    - name: content
      dtype: string
    - name: avg_line_length
      dtype: float64
    - name: max_line_length
      dtype: int64
    - name: alphanum_fraction
      dtype: float64
  splits:
    - name: train
      num_bytes: 3582248477.9086223
      num_examples: 806789
    - name: test
      num_bytes: 394048264.9973618
      num_examples: 88747
    - name: valid
      num_bytes: 3982797.09401595
      num_examples: 897
  download_size: 1323156008
  dataset_size: 3980279540
task_categories:
  - text-generation
language:
  - code
tags:
  - code
pretty_name: TheStack-Java
size_categories:
  - 1M<n<10M
---

# Dataset 1: TheStack - Java - Cleaned

**Description:** This dataset is drawn from TheStack corpus, an open-source code dataset with over 3 TB of GitHub data covering 48 programming languages. We selected a small portion of it to adapt smaller language models to Java, a popular statically typed language.
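
The splits can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the repo id `ammarnasr/the-stack-java-clean` is a placeholder, substitute the actual dataset id on the Hub.

```python
from datasets import load_dataset

# Repo id is a placeholder; substitute the actual dataset id on the Hub.
ds = load_dataset("ammarnasr/the-stack-java-clean")

print(ds)                         # splits: train / valid / test
sample = ds["train"][0]
print(sample["hexsha"])           # file hash carried over from TheStack
print(sample["avg_line_length"])  # filtering feature kept in the dataset
print(sample["content"][:200])    # first 200 characters of the Java source
```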

**Target Language:** Java

**Dataset Size:**

- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files

**Preprocessing:**

1. Selected Java as the target language due to its popularity on GitHub.
2. Filtered out files with an average line length above 100 characters, a maximum line length above 1,000 characters, or an alphanumeric character fraction below 25% (see the sketch after this list).
3. Split the remaining files into 90% training, 5% validation, and 5% test sets.
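
A minimal sketch of these filters and the split, assuming they are applied per file over the `content` column with `datasets.Dataset.filter` and `train_test_split`; the thresholds follow the list above, while the helper names and the seed are illustrative:

```python
from datasets import load_dataset

def keep_file(example):
    """Heuristic quality filters matching the preprocessing steps above."""
    content = example["content"]
    lines = content.splitlines() or [""]
    lengths = [len(line) for line in lines]
    avg_len = sum(lengths) / len(lengths)
    max_len = max(lengths)
    alnum_frac = sum(ch.isalnum() for ch in content) / max(len(content), 1)
    return avg_len <= 100 and max_len <= 1000 and alnum_frac >= 0.25

# Load the Java subset of TheStack and apply the filters.
raw = load_dataset("bigcode/the-stack", data_dir="data/java", split="train")
clean = raw.filter(keep_file)

# 90% train / 5% validation / 5% test.
split1 = clean.train_test_split(test_size=0.10, seed=42)
split2 = split1["test"].train_test_split(test_size=0.50, seed=42)
train, valid, test = split1["train"], split2["train"], split2["test"]
```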

**Tokenizer:** A Byte Pair Encoding (BPE) tokenizer built on the GPT-2 vocabulary, extended with special tokens for tabs and whitespace.
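
A minimal sketch of extending the GPT-2 vocabulary, using the `transformers` tokenizer API. The exact set of added tokens is not specified here, so the tab and whitespace-run tokens below are illustrative assumptions:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Illustrative special tokens for code: a tab and runs of 2-8 spaces.
extra_tokens = ["\t"] + [" " * n for n in range(2, 9)]
tokenizer.add_tokens(extra_tokens)
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})

print(len(tokenizer))  # GPT-2 vocabulary (50,257) plus the added tokens
```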

**Training Sequences:** Sequences are constructed by concatenating tokenized training files until the context length of 2048 tokens is reached (1024 tokens for full fine-tuning).
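
A minimal sketch of this packing step, assuming files are concatenated with an end-of-text separator and cut into fixed-length blocks; the function and variable names are illustrative:

```python
def pack_sequences(token_lists, context_length=2048, eos_id=50256):
    """Concatenate tokenized files and cut them into fixed-length blocks.

    eos_id defaults to GPT-2's end-of-text token, used as a file separator.
    """
    buffer = []
    for tokens in token_lists:
        buffer.extend(tokens + [eos_id])
        while len(buffer) >= context_length:
            yield buffer[:context_length]
            buffer = buffer[context_length:]

# Usage: sequences = list(pack_sequences(tokenized_files, context_length=2048))
```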