---
license: mit
---
I trained an 11M parameter Mamba LM to play chess, starting from code and data by @a_karvonen. After seeing 18.8M games, it has a 37.7% win rate vs. Stockfish level 0. That's not an apples-to-apples comparison, but a 25M parameter transformer was below 20% after 20M games.
https://twitter.com/HaileyStormC/status/1764148850384892394
If you try to use it: note that it plays well only as white. The tokenizer is in the meta.pkl file, but it's easier to play using the chess eval script here: https://github.com/adamkarvonen/chess_gpt_eval. You'll also need the Mamba model scripts mamba.py and mamba_lm.py from https://github.com/alxndrTL/mamba.py, plus my mamba_model.py. Optionally, replace main.py with my chess_eval_main.py, which includes a human player so you can play against the model in the terminal, entering your moves as PGN strings.
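To illustrate how the meta.pkl tokenizer works, here is a minimal sketch of a nanoGPT-style character-level tokenizer over PGN text. I'm assuming the file holds `stoi`/`itos` dicts as in nanoGPT (an assumption, not verified against this repo); the example builds a stand-in vocab so it runs without the actual file.

```python
import pickle

# Stand-in vocab covering typical PGN characters. In practice you would
# instead load the real mapping, e.g.:
#   with open("meta.pkl", "rb") as f:
#       meta = pickle.load(f)
#   stoi, itos = meta["stoi"], meta["itos"]   # assumed layout
vocab = sorted(set(";0123456789. abcdefghNBRQKOx+#=-/*"))
stoi = {c: i for i, c in enumerate(vocab)}
itos = {i: c for i, c in enumerate(vocab)}

def encode(s: str) -> list[int]:
    """Map a PGN string to a list of token ids, one per character."""
    return [stoi[c] for c in s]

def decode(ids: list[int]) -> str:
    """Map token ids back to the PGN string."""
    return "".join(itos[i] for i in ids)

pgn = ";1.e4 e5 2.Nf3"
assert decode(encode(pgn)) == pgn
```

The model consumes games as plain move-text strings like the one above, so playing against it amounts to appending your PGN move to the running string and letting the model continue it.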