---
library_name: transformers
tags: []
---
# Model Card for ChessGPT_d12
## Model Details
### Model Description
This model uses the GPT-2 architecture with 12 layers, 12 attention heads, and a hidden size of 768. It was trained from random initialization with Andrej Karpathy's `llm.c` library to predict chess moves in UCI notation. The training data consists of all games played on Lichess.org in January 2024, and the model was validated on games from January 2013. It is intended for chess move prediction and analysis.
- **Developed by:** Austin Davis
- **Model type:** GPT-2
- **Language(s):** UCI Chess Notation
- **License:** Apache 2.0
- **Training:** Pre-trained from random initialization
### Model Sources
- **Repository:** [Lichess GPT2 Model](https://huggingface.co/austindavis/ChessGPT_d12)
## Uses
### Direct Use
The model can be used directly to predict chess moves based on UCI notation.
### Downstream Use
The model can be fine-tuned or adapted for chess analysis, game annotations, or training new models for chess-based tasks.
## Bias, Risks, and Limitations
While the model performs well on chess move prediction, its limitations stem from the scope of the training data. The model was trained on historical Lichess games, and its predictions may reflect common play patterns from these datasets. Users should be cautious about generalizing the model’s performance to other chess platforms or styles of play.
## How to Get Started with the Model
To load and use the model, you can follow the instructions below:
```python
from transformers import GPT2LMHeadModel

# UciTileTokenizer is defined in uci_tokenizers.py in the model repository;
# download that file alongside the weights so it can be imported.
from uci_tokenizers import UciTileTokenizer

model = GPT2LMHeadModel.from_pretrained("austindavis/ChessGPT_d12")
tokenizer = UciTileTokenizer()

# Example: predict the next chess move after 1. e4
inputs = tokenizer("e2e4", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=2)  # one move = two square tokens
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
The model was trained on all Lichess games played in January 2024. Validation was conducted on games played in January 2013.
### Training Procedure
The model was trained for 541,548 steps, reaching a final loss of 0.8139. Training used a vocabulary padded to 8,192 tokens; the embedding was later trimmed to the 72 tokens actually used by the chess-specific UCI scheme. The tokenizer is based on UCI chess moves and is implemented in `uci_tokenizers.py`.
#### Preprocessing
The tokenizer splits each UCI move into sub-tokens: the source square, the destination square, and, for promotions, an uppercase piece letter (Q, B, R, N). The vocabulary therefore comprises 64 square tokens (a1 through h8), 4 promotion tokens, and 4 special tokens (BOS, PAD, EOS, UNK), for 72 tokens in total.
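As an illustration of that scheme, here is a minimal sketch of the 72-token vocabulary and move splitting. This is not the actual `UciTileTokenizer` implementation from `uci_tokenizers.py`; the special-token names and the ordering of entries are assumptions.

```python
# Illustrative 72-token UCI vocabulary:
# 4 special tokens + 64 square tokens (a1..h8) + 4 promotion tokens.
SPECIALS = ["<pad>", "<s>", "</s>", "<unk>"]  # PAD, BOS, EOS, UNK (names assumed)
SQUARES = [f + r for r in "12345678" for f in "abcdefgh"]  # a1, b1, ..., h8
PROMOTIONS = ["Q", "B", "R", "N"]

VOCAB = {tok: i for i, tok in enumerate(SPECIALS + SQUARES + PROMOTIONS)}
assert len(VOCAB) == 72

def encode_move(uci: str) -> list[int]:
    """Split a UCI move like 'e2e4' or 'e7e8q' into square/promotion token ids."""
    tokens = [uci[:2], uci[2:4]]
    if len(uci) == 5:  # promotion moves carry a fifth character, e.g. e7e8q
        tokens.append(uci[4].upper())
    return [VOCAB[t] for t in tokens]

print(encode_move("e2e4"))   # two square-token ids
print(encode_move("e7e8q"))  # two square ids plus a promotion id
```

A plain move thus costs two tokens and a promotion three, which is why generating one move requires only a couple of decoding steps.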
#### Training Hyperparameters
- **Training regime:** Mixed precision (fp16)
- **Learning rate:** 5e-5
- **Batch size:** 64
- **Steps:** 541,548
- **Final eval loss:** 0.8139
## Evaluation
### Testing Data, Factors & Metrics
The model was validated on a dataset of Lichess games played in January 2013. The key evaluation metric was validation loss; the model reached a final validation loss of 0.8139.
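Assuming the reported loss is mean per-token cross-entropy in nats (the usual convention in `llm.c` and PyTorch), it translates directly into a per-token perplexity:

```python
import math

# Convert the reported validation loss to perplexity
# (assumes mean token-level cross-entropy in nats).
val_loss = 0.8139
perplexity = math.exp(val_loss)
print(f"perplexity ≈ {perplexity:.3f}")  # ≈ 2.26
```

Over a 72-token vocabulary, a perplexity near 2.26 means the model is, on average, choosing among roughly two plausible next tokens.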
## Environmental Impact
Training for the model was conducted on a GPU infrastructure, but specific details on the environmental impact, such as the total carbon emissions, were not recorded.
## Technical Specifications
### Model Architecture and Objective
- **Model type:** GPT-2
- **Layers:** 12
- **Attention heads:** 12
- **Hidden size:** 768
- **Vocabulary size:** 72
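For orientation, the parameter count implied by these dimensions can be worked out with standard GPT-2 bookkeeping. The context length is not stated on this card, so GPT-2's default of 1024 is assumed here:

```python
# Back-of-the-envelope GPT-2 parameter count from the architecture above
# (context length of 1024 is an assumption; the card does not state it).
n_layer, d_model, vocab, n_ctx = 12, 768, 72, 1024
d_ff = 4 * d_model  # standard GPT-2 MLP expansion factor

per_block = (
    2 * d_model                            # ln_1 (gain + bias)
    + d_model * 3 * d_model + 3 * d_model  # fused QKV projection
    + d_model * d_model + d_model          # attention output projection
    + 2 * d_model                          # ln_2
    + d_model * d_ff + d_ff                # MLP up-projection
    + d_ff * d_model + d_model             # MLP down-projection
)
total = (
    vocab * d_model      # token embedding (weight-tied with the LM head)
    + n_ctx * d_model    # positional embedding
    + n_layer * per_block
    + 2 * d_model        # final layer norm
)
print(f"{total:,} parameters")  # roughly 86M
```

Note that the tiny 72-token embedding (about 55k parameters) makes this model slightly smaller than the ~124M-parameter GPT-2 small, whose 50,257-token embedding dominates the difference.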
### Compute Infrastructure
- **Hardware:** NVIDIA RTX 3060 Mobile GPU
- **Software:** Trained using Andrej Karpathy’s [llm.c](https://github.com/karpathy/llm.c) library
## Citation
**BibTeX:**
```bibtex
@misc{chessgpt_d12,
  author = {Austin Davis},
  title = {ChessGPT_d12 Model for UCI Move Prediction},
  year = {2024},
  url = {https://huggingface.co/austindavis/ChessGPT_d12},
}
```