HaileyStorm commited on
Commit
3bfb3cc
1 Parent(s): 44e42b5

Update README.md

I trained an 11M parameter Mamba LM to play chess, starting from code and data by @a_karvonen. After seeing 18.8M games, it has a 37.7% win rate vs. Stockfish level 0 - not an apples-to-apples comparison, but a 25M parameter model was below 20% after 20M games.

https://twitter.com/HaileyStormC/status/1764148850384892394

If you try to use it: note that it plays well only as white. The tokenizer is in the meta.pkl file, but it's easier to play using the chess eval script here: https://github.com/adamkarvonen/chess_gpt_eval. You'll need the Mamba model scripts mamba.py and mamba_lm.py from https://github.com/alxndrTL/mamba.py, plus my mamba_model.py. You can also replace main.py with my chess_eval_main.py, which includes a ready-made human player (in the terminal, entering your moves as PGN strings).