---
library_name: transformers
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- nlrl
---

# Model Card for Llama-3.1-8B-Instruct-NLRL-TicTacToe-Policy

## Model Details

### Model Description

- **Developed by:** NLRL Team
- **Model type:** Language Policy Model for TicTacToe
- **Language(s):** English
- **License:** Llama 3.1 Community License
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct

This model serves as a language policy in the Natural Language Reinforcement Learning (NLRL) framework, trained specifically for the game of TicTacToe. It reasons about the board state via chain-of-thought and then outputs a move decision.

## Uses

### Direct Use
This model can be used as a TicTacToe player that explains its strategic thinking in natural language before committing to a move. It generates both a reasoning chain and a final move decision.
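
A minimal inference sketch using the standard `transformers` chat API is below. The repo id, board encoding, and prompt wording are illustrative assumptions, not the exact format used during training; adapt them to the actual prompt template.

```python
# Minimal inference sketch. The board encoding and prompt wording are
# illustrative assumptions; adapt them to the model's training format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Llama-3.1-8B-Instruct-NLRL-TicTacToe-Policy"  # replace with the actual Hub repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

board = (
    "Current board (X = you, O = opponent, . = empty):\n"
    "X . O\n"
    ". X .\n"
    ". . O\n"
    "You are X. Think step by step, then state your move as a cell index (1-9)."
)
messages = [{"role": "user", "content": board}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (reasoning chain + move).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```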

### Out-of-Scope Use
This model is specifically trained for TicTacToe and should not be used for other games or tasks.

## Training Details

### Training Data
The training data consists of state-action pairs collected through the NLRL actor-critic learning process, with language-based Monte Carlo value estimates used for policy improvement.
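
The actual record schema is not documented in this card; a hypothetical state-action pair of the kind the description above implies might look like:

```python
# Hypothetical illustration only: the NLRL pipeline's actual record
# schema is not published with this card.
example_record = {
    # Text description of the board state presented to the policy:
    "state": "Board:\nX . O\n. X .\n. . O\nYou are X.",
    # Target action (reasoning + move) for supervised policy improvement:
    "action": "O threatens the right column at cell 6, so I block. Final move: 6",
    # Language-based Monte Carlo value estimate produced by the critic:
    "value_estimate": "Blocking at 6 neutralizes O's only immediate threat.",
}
```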

### Training Procedure
- Trained using FSDP (Fully Sharded Data Parallel) across 4 H100 GPUs
- Learning rate: 1e-5
- Training epochs per iteration: 2
- Batch size: 8
- Max sequence length: 1024
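
A minimal sketch of how these hyperparameters could map onto the `transformers` Trainer follows; the dataset contents, FSDP flags, and per-device batch split are assumptions, not the team's actual training script.

```python
# Sketch only: maps the hyperparameters listed above onto the Trainer
# API. Dataset contents and FSDP configuration are assumptions.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder state-action text; the real data comes from the NLRL
# actor-critic loop described above.
texts = ["Board:\nX . O\n. X .\n. . O\nYou are X.\nFinal move: 6"]
train_dataset = [tokenizer(t, truncation=True, max_length=1024) for t in texts]

args = TrainingArguments(
    output_dir="nlrl-tictactoe-policy",
    learning_rate=1e-5,
    num_train_epochs=2,             # epochs per NLRL iteration
    per_device_train_batch_size=2,  # assumed split: 4 GPUs x 2 = global batch of 8
    fsdp="full_shard auto_wrap",    # Fully Sharded Data Parallel
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because FSDP shards the model across processes, a script like this would be launched with e.g. `torchrun --nproc_per_node=4 train.py`.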

## Evaluation
- Tested against deterministic (first-move) and random opponent strategies
- Achieves >90% win rate against both opponent types after convergence
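
A self-contained sketch of such an evaluation loop is below. The `policy_move()` stub is a placeholder for the model call (see the inference example above); the published win rates come from the NLRL team's own harness, not this code.

```python
# Hypothetical evaluation harness against a random opponent.
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def policy_move(board):
    # Stub: replace with a call to the model and parse its "Final move".
    return next(i for i, cell in enumerate(board) if cell == ".")

def random_opponent(board):
    return random.choice([i for i, cell in enumerate(board) if cell == "."])

def play(opponent):
    """Play one game with the policy as X; return 'X', 'O', or 'draw'."""
    board = ["."] * 9
    for turn in range(9):
        player = "X" if turn % 2 == 0 else "O"
        move = policy_move(board) if player == "X" else opponent(board)
        board[move] = player
        if winner(board):
            return player
    return "draw"

results = [play(random_opponent) for _ in range(1000)]
print("win rate vs random:", results.count("X") / len(results))
```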

## Model Architecture
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Input: Text description of TicTacToe board state
- Output: Chain-of-thought reasoning followed by move decision
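
An illustrative input/output pair (assumed wording; the exact prompt template used in training is not published here):

```text
Input:
Current board (X = you, O = opponent, . = empty):
X . O
. X .
. . O
You are X. Think step by step, then state your move as a cell index (1-9).

Output:
O occupies cells 3 and 9 and can win by completing the right column at
cell 6, so I must block there. Final move: 6
```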

## Citation
```bibtex
@misc{nlrl,
      title={Natural Language Reinforcement Learning}, 
      author={Xidong Feng and Ziyu Wan and Haotian Fu and Bo Liu and Mengyue Yang and Girish A. Koushik and Zhiyuan Hu and Ying Wen and Jun Wang},
      year={2024},
      eprint={2411.14251},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2411.14251}, 
}
```

## Model Card Contact
benjaminliu.eecs@gmail.com