mauricett committed on
Commit 4c265d2
1 Parent(s): d28e7c4

Update README.md

Files changed (1): README.md +54 -2
README.md CHANGED
@@ -44,7 +44,7 @@ A single sample from the dataset contains one complete chess game as a dictionary:
 
 1. `example['fens']` --- A list of FENs in a slightly stripped format, missing the halfmove clock and fullmove number (see [definitions on wiki](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation#Definition)). The starting positions have been excluded (no player made a move yet).
 2. `example['moves']` --- A list of moves in [UCI format](https://en.wikipedia.org/wiki/Universal_Chess_Interface). `example['moves'][42]` is the move that led to position `example['fens'][42]`, etc.
-3. `example['scores']` --- A list of Stockfish evaluations (in centipawns) from the perspective of the player who is next to move. If `example['fens'][42]` is black's move, `example['scores'][42]` will be from black's perspective. If the game ended with a terminal condition, the last element of the list is a string 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient material). Games with other outcome conditions have been excluded.
+3. `example['scores']` --- A list of Stockfish evaluations (in centipawns) from the perspective of the player who is next to move. If `example['fens'][42]` is black's turn, `example['scores'][42]` will be from black's perspective. If the game ended with a terminal condition, the last element of the list is a string 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient material). Games with other outcome conditions have been excluded.
 4. `example['WhiteElo'], example['BlackElo']` --- Players' Elos.
 <br>
 
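The indexing scheme described above can be sketched with a mock sample. The field values below are invented for illustration, not taken from the dataset:

```python
# Mock sample illustrating the documented alignment: moves[i] is the move
# that produced fens[i], and scores[i] evaluates fens[i] from the
# side-to-move's perspective. All values here are fabricated examples.
example = {
    'fens': ['rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq -',
             'rnbqkbnr/pppp1ppp/8/4p3/4P3/8/PPPP1PPP/RNBQKBNR w KQkq -'],
    'moves': ['e2e4', 'e7e5'],
    'scores': ['30', '25'],
    'WhiteElo': 1850,
    'BlackElo': 1900,
}

# Walk the game ply by ply; the three lists stay index-aligned.
for fen, move, score in zip(example['fens'], example['moves'], example['scores']):
    side_to_move = fen.split()[1]  # second FEN field: 'w' or 'b'
    print(f"{move} -> {side_to_move} to move, eval {score}")
```

Note that because the starting position is excluded, `fens[0]` is the position *after* the first move, so the side to move in `fens[0]` is black.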
@@ -83,4 +83,56 @@ def preprocess(example, useful_fn):
     example['scores'] = useful_fn(score)
     return example
 ```
-<br>
+<br>
+<br>
+<br>
+# Complete Example
+
+```py
+import random
+import datasets
+
+# Shuffle and apply your own preprocessing.
+dataset = dataset.shuffle(seed=42)
+dataset = dataset.map(preprocess, fn_kwargs={'useful_fn': tokenizer})
+```
+
+For a quick working example, you can try to use the following:
+```py
+import random
+import datasets
+
+# A mock tokenizer and preprocess function for demonstration.
+class Tokenizer:
+    def __call__(self, example):
+        return example
+
+def preprocess(example, useful_fn):
+    # Get number of moves made in the game.
+    max_ply = len(example['moves'])
+    # randint is inclusive on both ends, so subtract 1 to stay in range.
+    pick_random_move = random.randint(0, max_ply - 1)
+
+    # Get the FEN, move and score for our random choice.
+    fen = example['fens'][pick_random_move]
+    move = example['moves'][pick_random_move]
+    score = example['scores'][pick_random_move]
+
+    # Transform data into the format of your choice.
+    example['fens'] = useful_fn(fen)
+    example['moves'] = useful_fn(move)
+    example['scores'] = useful_fn(score)
+    return example
+
+tokenizer = Tokenizer()
+
+# Load dataset.
+dataset = datasets.load_dataset(path="mauricett/lichess_sf",
+                                split="train",
+                                streaming=True,
+                                trust_remote_code=True)
+
+# Shuffle and apply your own preprocessing.
+dataset = dataset.shuffle(seed=42)
+dataset = dataset.map(preprocess, fn_kwargs={'useful_fn': tokenizer})
+
+for batch in dataset:
+    # do stuff
+    break
+```
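A real `useful_fn` for the scores field would also need to handle the terminal markers described earlier. A minimal sketch, assuming numeric entries parse as integers (`convert_score` and `convert_scores` are illustrative names, not part of the dataset API):

```python
# Illustrative handler for the 'scores' field: centipawn evaluations plus a
# possible final 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient
# material) marker. Whether numeric entries arrive as str or int is an
# assumption here; int() accepts both.
def convert_score(score):
    if score in ('C', 'S', 'I'):  # terminal condition marker, pass through
        return score
    return int(score)             # centipawn value

def convert_scores(scores):
    return [convert_score(s) for s in scores]
```

For example, `convert_scores(['30', '-15', 'C'])` yields `[30, -15, 'C']`.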