---
license: cc0-1.0
tags:
- chess
- stockfish
pretty_name: Lichess Games With Stockfish Analysis
---
# Condensed Lichess Database
This dataset is a condensed version of the Lichess database.
It only includes games for which Stockfish evaluations were available.
Currently, the dataset contains the entire year 2023, which consists of >100M games and >1B positions.
Games are stored in a format that is much faster to process than the original PGN data.
<br>
<br>
Requirements:
```
pip install zstandard python-chess datasets
```
<br>
# Quick Guide
Below, I explain the data format and how to use the dataset. At the end, you'll find a complete example script.
### 1. Loading The Dataset
You can stream the data without storing it locally (~100 GB currently). The dataset requires `trust_remote_code=True` to execute the [custom data loading script](https://huggingface.co/datasets/mauricett/lichess_sf/blob/main/lichess_sf.py), which is necessary to decompress the files.
See [HuggingFace's documentation](https://huggingface.co/docs/datasets/main/en/load_hub#remote-code) if you're unsure.
```py
# Load dataset.
from datasets import load_dataset

dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)
```
<br>
### 2. Data Format
After loading the dataset, you can inspect what a sample looks like:
```py
example = next(iter(dataset))
print(example)
```
A single sample from the dataset contains one complete chess game as a dictionary. The dictionary keys are as follows:
1. `example['fens']` --- A list of FENs in a slightly stripped format, missing the halfmove clock and fullmove number (see [definitions on wiki](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation#Definition)). The starting position has been excluded (no move has been made yet).
2. `example['moves']` --- A list of moves in [UCI format](https://en.wikipedia.org/wiki/Universal_Chess_Interface). `example['moves'][42]` is the move that led to position `example['fens'][42]`, etc.
3. `example['scores']` --- A list of Stockfish evaluations (in centipawns) from the perspective of the player who is next to move. If `example['fens'][42]` is black's turn, `example['scores'][42]` will be from black's perspective. If the game ended with a terminal condition, the last element of the list is a string 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient material). Games with other outcome conditions have been excluded.
4. `example['WhiteElo']`, `example['BlackElo']` --- The players' Elo ratings.
<br>
Everything except the Elo ratings is stored as strings.
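Since scores arrive as strings and the last list element may be a terminal marker ('C', 'S' or 'I'), any score transform has to branch on those cases. Here is a minimal sketch of such a parser; the clipping bound and the mapping of terminal outcomes to numbers are my own illustrative choices, not part of the dataset:

```python
def parse_score(score, clip=1000):
    """Convert a raw score string to a bounded integer (centipawns).

    Terminal markers are mapped to fixed values -- these mappings are
    arbitrary conventions chosen for illustration, not dataset semantics.
    """
    if score == 'C':           # checkmate: the side to move is mated
        return -clip           # (convention choice: worst score for side to move)
    if score in ('S', 'I'):    # stalemate / insufficient material: drawn
        return 0
    cp = int(score)            # centipawns, side-to-move perspective
    return max(-clip, min(clip, cp))
```

A function like this can be passed as the `score_fn` argument shown in the preprocessing example below.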
<br>
### 3. Shuffle And Preprocess
Use `datasets.shuffle()` to properly shuffle the dataset. Use `datasets.map()` to transform the data into the format you require. This will process individual samples in parallel if you're using multiprocessing (e.g., with a PyTorch `DataLoader`).
```py
# Shuffle and apply your own preprocessing.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'tokenizer': tokenizer,
'score_fn': score_fn})
```
In this example, we're passing two additional arguments to the `preprocess` function via `dataset.map()`. You can use the following mock examples for inspiration:
```py
# A mock tokenizer and functions for demonstration.
import random

class Tokenizer:
    def __init__(self):
        pass
    def __call__(self, example):
        return example

# Transform Stockfish score and terminal outcomes.
def score_fn(score):
    return score

def preprocess(example, tokenizer, score_fn):
    # Get the number of moves made in the game.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)
    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]
    # Transform the data into the format of your choice.
    example['fens'] = tokenizer(fen)
    example['moves'] = tokenizer(move)
    example['scores'] = score_fn(score)
    return example

tokenizer = Tokenizer()
```
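Because the mocks above are identity functions, you can sanity-check the preprocessing on a hand-written sample before touching the real stream. The FEN, move and score below are made up for illustration and mimic the dataset's schema:

```python
import random

class Tokenizer:
    def __call__(self, example):
        return example

def score_fn(score):
    return score

def preprocess(example, tokenizer, score_fn):
    max_ply = len(example['moves'])
    pick = random.randint(0, max_ply - 1)
    example['fens'] = tokenizer(example['fens'][pick])
    example['moves'] = tokenizer(example['moves'][pick])
    example['scores'] = score_fn(example['scores'][pick])
    return example

# A tiny hand-written sample mimicking the dataset's schema.
sample = {
    'fens': ['rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq -'],
    'moves': ['e2e4'],
    'scores': ['35'],
    'WhiteElo': 1500,
    'BlackElo': 1500,
}

out = preprocess(sample, Tokenizer(), score_fn)
print(out['moves'])   # only one ply, so the random pick is fixed: 'e2e4'
```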
<br>
<br>
<br>
# Complete Example
You can try pasting this into Colab and it should work fine. Have fun!
```py
import random
from datasets import load_dataset
from torch.utils.data import DataLoader

# A mock tokenizer and functions for demonstration.
class Tokenizer:
    def __init__(self):
        pass
    def __call__(self, example):
        return example

def score_fn(score):
    # Transform Stockfish score and terminal outcomes.
    return score

def preprocess(example, tokenizer, score_fn):
    # Get the number of moves made in the game.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)
    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]
    # Transform the data into the format of your choice.
    example['fens'] = tokenizer(fen)
    example['moves'] = tokenizer(move)
    example['scores'] = score_fn(score)
    return example

tokenizer = Tokenizer()

# Load dataset.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)

# Shuffle and apply your own preprocessing.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'tokenizer': tokenizer,
                                             'score_fn': score_fn})

# PyTorch dataloader.
dataloader = DataLoader(dataset, batch_size=256, num_workers=4)

n = 0
for batch in dataloader:
    # do stuff
    n += 1
    print(n)
    if n == 50:
        break
``` |