Update README.md

Hello, everyone!
<br>
I will give you a quick overview of the data format and a guide on how to use the dataset.
I always appreciate feedback and discussions. You can speak out [here](https://huggingface.co/datasets/mauricett/lichess_sf/discussions).
And now, enjoy...
<br>
<br>

# Condensed Lichess Database
This dataset is a condensed version of the Lichess database.
It only includes games for which Stockfish evaluations were available.
Currently, the dataset contains the entire year 2023, which consists of >100M games and >1B positions.
Games are stored in a format that is much faster to process than the original PGN data.
<br>
<br>

```
pip install zstandard python-chess
```
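The two packages play different roles: `zstandard` is what the loading script uses to decompress the archives, while `python-chess` is handy for working with the games themselves. As a quick, dataset-independent sanity check that the install worked, here is a minimal `python-chess` sketch:

```py
import chess

# Replay a short opening on a fresh board.
board = chess.Board()
for san in ["e4", "e5", "Nf3"]:
    board.push_san(san)

# The board can report the resulting position as a FEN string.
print(board.fen())
```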

# Quick Guide
Using this dataset should be straightforward, but let me give you a quick tour. At the end, you'll find a complete example script.
### 1. Loading the dataset
I recommend streaming the data, because the dataset is rather large (~100 GB) and I will expand it in the future.
Note that `trust_remote_code=True` is needed to execute my [custom data loading script](https://huggingface.co/datasets/mauricett/lichess_sf/blob/main/lichess_sf.py), which is necessary to decompress the files.
See [HuggingFace's documentation](https://huggingface.co/docs/datasets/main/en/l

```py
import datasets

# The opening lines of this snippet were cut off in the diff; the repo id
# is taken from the links above, and split="train" is an assumption.
dataset = datasets.load_dataset("mauricett/lichess_sf",
                                split="train",
                                streaming=True,
                                trust_remote_code=True)
```
<br>

# Data Format
After loading the dataset, you can already check out what the samples look like:
```py
example = next(iter(dataset))
```
A single sample from the dataset contains an entire chess game as a dictionary. The dictionary has the k
### Usage
To use the dataset, apply `datasets.shuffle()` and your own transformations (e.g. a tokenizer) using `datasets.map()`. The latter will process individual samples in parallel if you're using multiprocessing (e.g. with a PyTorch dataloader).