This is a GPT-2 model trained in llm.c for 100K steps (at a total batch size of ~1M tokens) on FineWeb-EDU. Much more detailed information is available here: https://github.com/karpathy/llm.c/discussions/677 . This model was trained exactly as described in the post above, except with `-x 100000` to run for 100K steps. The model achieves a HellaSwag accuracy of 57.7.
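For concreteness, a minimal sketch of what the change looks like in the launch command, assuming the multi-GPU `train_gpt2cu` setup from the linked discussion. Only `-x` differs from that post; the other flags and data paths shown here are illustrative placeholders, and the authoritative configuration is in the discussion above:

```bash
# Sketch only: all values other than -x 100000 should be taken from the
# linked llm.c discussion. -d 1048576 reflects the ~1M-token batch size.
mpirun -np 8 ./train_gpt2cu \
    -i "dev/data/edu_fineweb100B/edu_fineweb_train_*.bin" \
    -j "dev/data/edu_fineweb100B/edu_fineweb_val_*.bin" \
    -o log_gpt2 \
    -d 1048576 \
    -x 100000
```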