|
--- |
|
license: cc0-1.0 |
|
tags: |
|
- chess |
|
- stockfish |
|
pretty_name: Lichess Games With Stockfish Analysis |
|
--- |
|
# Condensed Lichess Database |
|
This dataset is a condensed version of the Lichess database. |
|
It only includes games for which Stockfish evaluations were available. |
|
Currently, the dataset contains the entire year 2023, which consists of >100M games and >1B positions. |
|
Games are stored in a format that is much faster to process than the original PGN data. |
|
<br> |
|
<br> |
|
Requirements: |
|
``` |
|
pip install zstandard python-chess datasets |
|
``` |
|
<br> |
|
|
|
# Quick Guide |
|
In the following, I explain the data format and how to use the dataset. At the end, you'll find a complete example script.
|
|
|
### 1. Loading The Dataset |
|
You can stream the data without storing it locally (~100 GB currently). The dataset requires `trust_remote_code=True` to execute the [custom data loading script](https://huggingface.co/datasets/mauricett/lichess_sf/blob/main/lichess_sf.py), which is necessary to decompress the files. |
|
See [HuggingFace's documentation](https://huggingface.co/docs/datasets/main/en/load_hub#remote-code) if you're unsure. |
|
```py |
|
from datasets import load_dataset

# Load dataset.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)
|
``` |
|
<br> |
|
|
|
### 2. Data Format |
|
After loading the dataset, you can check what the samples look like:
|
```py |
|
example = next(iter(dataset))
print(example)
|
``` |
|
|
|
A single sample from the dataset contains one complete chess game as a dictionary. The dictionary keys are as follows: |
|
|
|
1. `example['fens']` --- A list of FENs in a slightly stripped format, missing the halfmove clock and fullmove number (see [definitions on wiki](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation#Definition)). The starting position has been excluded (no player had made a move yet).
|
2. `example['moves']` --- A list of moves in [UCI format](https://en.wikipedia.org/wiki/Universal_Chess_Interface). `example['moves'][42]` is the move that led to position `example['fens'][42]`, etc. |
|
3. `example['scores']` --- A list of Stockfish evaluations (in centipawns) from the perspective of the player who is next to move. If it is black's turn in `example['fens'][42]`, then `example['scores'][42]` is from black's perspective. If the game ended in a terminal condition, the last element of the list is the string 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient material). Games with other outcome conditions have been excluded.
|
4. `example['WhiteElo']`, `example['BlackElo']` --- The players' Elo ratings.
|
<br> |
|
|
|
Everything except the Elos is stored as strings.
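Because the FENs are stripped and the scores are strings that may end in a terminal marker, you will typically want to normalize both before training. Here is a minimal sketch of two helpers; the placeholder counters `"0 1"`, the mate value of 10000 centipawns, and the choice to map draws to 0 are my own conventions, not part of the dataset.

```python
def complete_fen(stripped_fen):
    # The dataset's FENs omit the halfmove clock and fullmove number.
    # Appending placeholder values yields a standard six-field FEN that
    # libraries such as python-chess accept.
    return stripped_fen + " 0 1"


def parse_score(score, mate_value=10000):
    # 'C' (checkmate), 'S' (stalemate) and 'I' (insufficient material)
    # are terminal markers, not evaluations. We map checkmate to a large
    # fixed value and the draw-like outcomes to 0, an arbitrary choice
    # you may want to change.
    if score == 'C':
        return mate_value
    if score in ('S', 'I'):
        return 0
    return int(score)
```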
|
<br> |
|
|
|
### 3. Shuffle And Preprocess |
|
Use `datasets.shuffle()` to properly shuffle the dataset. Use `datasets.map()` to transform the data into the format you require. This will process individual samples in parallel if you use multiprocessing (e.g. with a PyTorch DataLoader).
|
|
|
|
|
```py |
|
# Shuffle and apply your own preprocessing.
# The fn_kwargs key must match the argument name of preprocess.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'useful_fn': tokenizer})
|
``` |
|
|
|
For a quick working example, you can use the following:
|
```py |
|
import random


# A mock tokenizer for demonstration, replace with your own.
class Tokenizer:
    def __call__(self, example):
        return example


def preprocess(example, useful_fn):
    # Number of moves made in the game. randint's upper bound is
    # inclusive, so subtract 1 to get a valid list index.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)

    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]

    # Transform data into the format of your choice.
    example['fens'] = useful_fn(fen)
    example['moves'] = useful_fn(move)
    example['scores'] = useful_fn(score)
    return example
|
``` |
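The `Tokenizer` above is only a stub. As an illustration of what a real `useful_fn` might look like, here is a toy character-level tokenizer for FEN strings; the class name and vocabulary are my own invention, not part of the dataset.

```python
class CharTokenizer:
    """Toy character-level tokenizer for FEN strings (illustration only)."""

    def __init__(self):
        # Characters that can appear in a stripped FEN: piece letters,
        # digits, rank separators, side to move, castling and en-passant
        # fields. dict.fromkeys removes duplicates while keeping order.
        chars = "pnbrqkPNBRQK12345678/ w-abcdefgh"
        self.vocab = {c: i for i, c in enumerate(dict.fromkeys(chars))}

    def __call__(self, fen):
        return [self.vocab[c] for c in fen]
```

An instance of such a class can then be passed to `preprocess` via `fn_kwargs={'useful_fn': CharTokenizer()}`.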
|
<br> |
|
<br> |
|
<br> |
|
# Complete Example |
|
|
|
```py |
|
import random

from datasets import load_dataset
|
|
# A mock tokenizer and preprocess function for demonstration.
class Tokenizer:
    def __call__(self, example):
        return example


def preprocess(example, useful_fn):
    # Number of moves made in the game. randint's upper bound is
    # inclusive, so subtract 1 to get a valid list index.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)

    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]

    # Transform data into the format of your choice.
    example['fens'] = useful_fn(fen)
    example['moves'] = useful_fn(move)
    example['scores'] = useful_fn(score)
    return example


tokenizer = Tokenizer()

# Load dataset.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)

# Shuffle and apply your own preprocessing.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'useful_fn': tokenizer})

for batch in dataset:
    # do stuff
    break
|
``` |