Dataset descriptions:

- lichess_6gb: 6GB of games from lichess's database (16,492,151 games, 6,486,463,314 characters). No Elo filtering performed. Contains games from the lichess 2016-06 and 2017-05 dumps.
- lichess_9gb: 9GB of games from lichess's database. No Elo filtering performed. Contains games from the lichess 2017-07 and 2017-08 dumps.
- lichess_100mb: 100MB (~300k games) from lichess's database, taken from the lichess 2016-01 dump. Used to train linear probes on a dataset separate from the LLM training dataset.
- Lichess_gt_18k: ~4GB of games from lichess. Following OpenAI's weak-to-strong generalization paper, filtered to only include games where White is rated above 1800 Elo.
- Stockfish: 4.5GB of games where White plays as Stockfish at Elo 3200 against Stockfish opponents ranging from Elo 1300 to 3200 as Black.
- Lichess-stockfish mix: a 50/50 mix of >1800 Elo lichess games and Stockfish-generated games.
- Lichess results: the lichess games, but with the result included before every game. The hope is that we can then prompt the model with ";1-0#1.", indicating that it is supposed to win the game.
- lichess_200k_elo_bins: includes a maximum of 200k games from each 100-Elo bucket, so the model trains on a more even distribution of Elos (see the sketch after this list).
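A minimal sketch of the two ideas above (result prefixing and Elo-bin capping), assuming each game is a dict with hypothetical `transcript`, `white_elo`, and `result` fields; the `;result#moves` layout is inferred from the prompt example above:

```python
import random
from collections import defaultdict

MAX_GAMES_PER_BIN = 200_000  # cap per 100-Elo bucket (lichess_200k_elo_bins)


def prefix_result(moves: str, result: str) -> str:
    # Prepend the result so a prompt like ";1-0#1." conditions the model on winning.
    return f";{result}#{moves}"


def cap_by_elo_bin(games: list[dict], max_per_bin: int = MAX_GAMES_PER_BIN) -> list[dict]:
    # Group games into 100-Elo buckets and keep at most max_per_bin from each,
    # so the training distribution over Elo is flatter.
    bins = defaultdict(list)
    for game in games:
        bins[game["white_elo"] // 100].append(game)
    capped = []
    for bucket in bins.values():
        random.shuffle(bucket)
        capped.extend(bucket[:max_per_bin])
    return capped
```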

Datasets with "blocks" in the name include only one column and are used for training. Every cell is a batch I created that is 1024 characters long. Datasets without "blocks" in the name contain metadata like player skill, result, etc.

This notebook is used to create the 1024-character batches from a file containing a large set of PGNs: https://github.com/adamkarvonen/chess_gpt_eval/blob/dataset_generation/logs/batching.ipynb
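The notebook above is the authoritative implementation; a rough sketch of the idea under my own assumptions (games delimited by ";", the final partial block padded with spaces; the real padding and splitting rules live in the notebook) might look like this:

```python
BLOCK_SIZE = 1024  # each cell in a "blocks" dataset is 1024 characters


def pack_games_into_blocks(transcripts: list[str]) -> list[str]:
    # Greedily concatenate ';'-delimited game transcripts into fixed-size blocks,
    # spilling a game into the next block when it doesn't fit (sketch only).
    blocks, current = [], ""
    for t in transcripts:
        game = ";" + t
        while game:
            space = BLOCK_SIZE - len(current)
            current += game[:space]
            game = game[space:]
            if len(current) == BLOCK_SIZE:
                blocks.append(current)
                current = ""
    if current:
        blocks.append(current.ljust(BLOCK_SIZE, " "))  # pad the last partial block
    return blocks
```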