---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---

# Chessgpt-Chat-v1 

Chessgpt-Chat-v1 is the supervised fine-tuned (SFT) chat model built on Chessgpt-Base-v1.

  - Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
  - Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)

We are also actively developing the next-generation model, ChessGPT-V2. We welcome any contribution, especially chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk.

## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter language model pretrained on chess-related data and fine-tuned for chat.
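
For reference, the architecture and size can be checked from the model configuration alone, without downloading the weights. The snippet below is a minimal sketch and assumes access to the Hugging Face Hub:

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the architecture.
config = AutoConfig.from_pretrained("Waterhorse/chessgpt-chat-v1")
print(config.model_type)  # GPT-NeoX-style decoder-only transformer
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
```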

## GPU Inference

This requires a GPU with at least 8 GB of memory.
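As a rough sanity check, 2.8B parameters stored in float16 take about 2.8B × 2 bytes ≈ 5.6 GB for the weights alone, which leaves headroom for activations and the KV cache within 8 GB.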

```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version (parse versions instead of comparing strings lexicographically)
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
# Conversation between two
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"
# Conversation between more than two
#prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:"

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
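
The prompt format above joins turns with the tokenizer's `<|endoftext|>` token and ends with the speaker tag the model should complete. A small helper along the following lines can assemble such prompts from a list of turns; the helper is an illustrative sketch, not part of the released code:

```python
# Illustrative helper (not part of the released code): build a multi-human
# conversation prompt in the format shown above. Each turn is a
# (speaker_id, text) pair, and turns are separated by <|endoftext|>.
def build_prompt(turns, next_speaker_id):
    header = "A friendly, helpful chat between some humans."
    body = "".join(f"<|endoftext|>Human {speaker}: {text}" for speaker, text in turns)
    return f"{header}{body}<|endoftext|>Human {next_speaker_id}:"

# Reproduces the two-person prompt from the snippet above.
prompt = build_prompt([(0, "1.e4 c5, what is the name of this opening?")], 1)
```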

# Uses

Direct and out-of-scope uses are described below.

## Direct Use

`chessgpt-chat-v1` is intended mainly for research on large language models, especially research on policy learning and language modeling.

## Out-of-Scope Use

`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well on use cases outside the chess domain.

## Bias, Risks, and Limitations

Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.

# Evaluation
Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.

# Citation Information
```bibtex
@article{feng2023chessgpt,
  title={ChessGPT: Bridging Policy Learning and Language Modeling},
  author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
  journal={arXiv preprint arXiv:2306.09200},
  year={2023}
}
```