This folder contains the implementation of the CraigslistBargain task from the following paper:

[Decoupling Strategy and Generation in Negotiation Dialogues](https://arxiv.org/abs/1808.09637).
He He, Derek Chen, Anusha Balakrishnan and Percy Liang.
Empirical Methods in Natural Language Processing (EMNLP), 2018.

## Dependencies
Python 2.7, PyTorch 0.4.

Install `cocoa`:
```
cd ..;
python setup.py develop;
```

Install the remaining Python dependencies:
```
pip install -r requirements.txt
```

## Dataset
All data is available on the CodaLab [worksheet](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/).

### Visualize JSON transcripts
All dialogues (either generated by self-play or collected from AMT)
are in the same [JSON format](../README.md#examples-and-datasets).
To visualize the JSON files in HTML, see the documentation [here](../README.md#visualize).
For CraigslistBargain dialogues, pass in the additional argument:
- `--img-path`: path to Craigslist posting images; download them from [CodaLab](https://worksheets.codalab.org/bundles/0xb93730d80e1c4d4cb4c6bf7c9ebef12f/).

### Collect your own data
To collect your own data, follow the steps below.

### Scenario generation
1. Schema: `data/craigslist-schema.json`.
2. Scrape Craigslist posts from different categories:
```
cd scraper;
for cat in car phone bike electronics furniture housing; do \
    scrapy crawl craigslist -o data/negotiation/craigslist_$cat.json -a cache_dir=/tmp/craigslist_cache -a from_cache=False -a num_result_pages=100 -a category=$cat -a image=1; \
done
```
3. Generate scenarios: 
```
PYTHONPATH=. python scripts/generate_scenarios.py --num-scenarios <number> --schema-path data/craigslist-schema.json --scenarios-path data/scenarios.json --scraped-data scraper/data/negotiation --categories furniture housing car phone bike electronics --fractions 1 1 1 1 1 1 --discounts 0.9 0.7 0.5
```
- `--fractions`: fractions to sample from each category.
- `--discounts`: possible targets for the buyer, computed as `discount * listing_price` (see the sketch below).
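
For illustration, a minimal sketch (plain Python, names are ours) of how the buyer targets follow from `--discounts`:
```python
# Hypothetical illustration: each buyer target is a discounted listing price.
listing_price = 200.0
discounts = [0.9, 0.7, 0.5]          # the --discounts values above
buyer_targets = [d * listing_price for d in discounts]
print(buyer_targets)                 # [180.0, 140.0, 100.0]
```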

### Set up the website and AMT HITs
See [data collection](../README.md#data-collection) in `cocoa` README.

## Building the bot

### Use the modular approach
The modular framework consists of three parts: the parser, the manager, and the generator.

#### <a name=price-tracker>1. Build the price tracker.</a>
The price tracker recognizes price mentions in an utterance.
```
PYTHONPATH=. python core/price_tracker.py --train-examples-path data/train.json --output <path-to-save-price-tracker>
```
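
To give a feel for what the tracker does, here is a toy, regex-only sketch (our own code, not the implementation in `core/price_tracker.py`, which is built from the training dialogues):
```python
import re

# Toy sketch: pull candidate price mentions out of an utterance.
# The real tracker also uses the training data to decide which numbers
# are actually prices and to relate them to the listing price.
PRICE_RE = re.compile(r'\$?\d+(?:,\d{3})*(?:\.\d+)?')

def find_prices(utterance):
    """Return numeric values of candidate price mentions."""
    return [float(m.group(0).lstrip('$').replace(',', ''))
            for m in PRICE_RE.finditer(utterance)]

print(find_prices('I can do $1,200, or 1150 if you pick it up.'))
# [1200.0, 1150.0]
```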

#### 2. Parse the training dialogues.
Parse both training and validation data.
```
PYTHONPATH=. python parse_dialogue.py --transcripts data/train.json --price-tracker <path-to-save-price-tracker> --max-examples -1 --templates-output templates.pkl --model-output model.pkl --transcripts-output data/train-parsed.json
PYTHONPATH=. python parse_dialogue.py --transcripts data/dev.json --price-tracker <path-to-save-price-tracker> --max-examples -1 --templates-output templates.pkl --model-output model.pkl --transcripts-output data/dev-parsed.json
```
- Parse utterances into coarse dialogue acts using the rule-based parser (`--transcripts-output`); a toy illustration follows this list.
- Learn an n-gram model over the dialogue acts (`--model-output`), which will be used by the **hybrid policy**.
- Extract utterance templates (`--templates-output`) for the retrieval-based generator.
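
As a toy illustration of the parsing step (hypothetical act names and rules; the actual rule-based parser lives in `parse_dialogue.py` and uses the price tracker's output):
```python
# Hypothetical sketch: map an utterance plus detected prices to a coarse act.
def parse_utterance(utterance, prices):
    text = utterance.lower()
    if prices:                        # a price mention reads as an offer
        return {'intent': 'propose', 'price': prices[0]}
    if any(w in text for w in ('deal', 'sure', 'sounds good')):
        return {'intent': 'agree'}
    if '?' in text:
        return {'intent': 'inquire'}
    return {'intent': 'unknown'}

print(parse_utterance('Would you take $90?', [90.0]))
# {'intent': 'propose', 'price': 90.0}
```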

#### 3. Train the manager.
We train a seq2seq model over the coarse dialogue acts using the parsed data.
```
mkdir -p mappings/lf2lf;
mkdir -p cache/lf2lf;
mkdir -p checkpoint/lf2lf;
PYTHONPATH=. python main.py --schema-path data/craigslist-schema.json --train-examples-paths data/train-parsed.json --test-examples-paths data/dev-parsed.json \
--price-tracker price_tracker.pkl \
--model lf2lf \
--model-path checkpoint/lf2lf --mappings mappings/lf2lf \
--word-vec-size 300 --pretrained-wordvec '' '' \
--rnn-size 300 --rnn-type LSTM --global-attention multibank_general \
--num-context 2 --stateful \
--batch-size 128 --gpuid 0 --optim adagrad --learning-rate 0.01 \
--epochs 15 --report-every 500 \
--cache cache/lf2lf --ignore-cache \
--verbose
```
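
Conceptually, the `lf2lf` manager maps a history of coarse dialogue acts to the next acts. A toy PyTorch sketch of that shape (our own class and sizes; the actual model adds context encoding, price handling, and multibank attention):
```python
import torch
import torch.nn as nn

# Toy seq2seq over dialogue-act tokens; hypothetical names and sizes.
class ActSeq2Seq(nn.Module):
    def __init__(self, num_acts, hidden=300):
        super(ActSeq2Seq, self).__init__()
        self.embed = nn.Embedding(num_acts, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_acts)

    def forward(self, src_acts, tgt_acts):
        _, state = self.encoder(self.embed(src_acts))    # encode act history
        dec_out, _ = self.decoder(self.embed(tgt_acts), state)
        return self.out(dec_out)                         # logits over acts

model = ActSeq2Seq(num_acts=20)
src = torch.randint(0, 20, (8, 5), dtype=torch.long)  # 8 histories of 5 acts
tgt = torch.randint(0, 20, (8, 3), dtype=torch.long)
logits = model(src, tgt)                               # shape (8, 3, 20)
```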

#### <a name=rl>4. Finetune the manager with reinforcement learning.</a>
Generate self-play dialogues using the policy learned above and
run REINFORCE with a given reward function.

First, generate the training and validation scenarios;
we take them directly from the training and validation data.
```
PYTHONPATH=. python ../scripts/chat_to_scenarios.py --chats data/train.json --scenarios data/train-scenarios.json
PYTHONPATH=. python ../scripts/chat_to_scenarios.py --chats data/dev.json --scenarios data/dev-scenarios.json
```
Now, we can run self-play and REINFORCE with a reward function, e.g. `margin`.
```
mkdir -p checkpoint/lf2lf-margin;
PYTHONPATH=. python reinforce.py --schema-path data/craigslist-schema.json \
--scenarios-path data/train-scenarios.json \
--valid-scenarios-path data/dev-scenarios.json \
--price-tracker price_tracker.pkl \
--agent-checkpoints checkpoint/lf2lf/model_best.pt checkpoint/lf2lf/model_best.pt \
--model-path checkpoint/lf2lf-margin \
--optim adagrad --learning-rate 0.001 \
--agents pt-neural pt-neural \
--report-every 500 --max-turns 20 --num-dialogues 5000 \
--sample --temperature 0.5 --max-length 20 --reward margin
```
- `--reward`: reward function to optimize, one of `margin` (utility), `fair` (fairness), or `length` (dialogue length); see the sketch below.
- `--agents`: types of the two agents (here `pt-neural` for both, i.e. the neural policies trained above).
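
For intuition, here are toy versions of the three reward shapes and the REINFORCE update they feed (our own formulas; the exact definitions and scaling live in `reinforce.py`):
```python
import torch

# Hypothetical reward shapes; the actual scaling may differ.
def margin_reward(final_price, bottomline, target):
    """Utility: 1 at the agent's target price, 0 at its bottomline."""
    return (final_price - bottomline) / (target - bottomline)

def fairness_reward(margin_a, margin_b):
    """Fairness: penalize lopsided outcomes between the two agents."""
    return -abs(margin_a - margin_b)

def length_reward(num_turns):
    """Length: reward longer dialogues."""
    return float(num_turns)

# Toy REINFORCE step: weight log-probs of sampled acts by (reward - baseline).
log_probs = torch.randn(10, requires_grad=True)  # stand-in for log pi(a_t|h_t)
reward, baseline = margin_reward(115.0, 100.0, 120.0), 0.5
loss = -(reward - baseline) * log_probs.sum()
loss.backward()  # a gradient step then increases expected reward
```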

### Use the end-to-end approach

#### 1. Build pretrained word embeddings.
First, build the vocabulary. Note that we need the [price tracker](#price-tracker) to bin prices.
```
mkdir -p mappings/seq2seq;
PYTHONPATH=. python main.py --schema-path data/craigslist-schema.json --train-examples-paths data/train.json --mappings mappings/seq2seq --model seq2seq --price-tracker price_tracker.pkl --ignore-cache --vocab-only
```

Download the GloVe embeddings.
```
wget http://nlp.stanford.edu/data/glove.840B.300d.zip;
unzip glove.840B.300d.zip;
```

Filter the pretrained embeddings down to the model vocabulary.
We use separate embeddings for the utterances and for the product description, selected by `--vocab-type`; a sketch of this step follows the commands below.
```
PYTHONPATH=. python ../cocoa/neural/embeddings_to_torch.py --emb-file glove.840B.300d.txt --vocab-file mappings/seq2seq/vocab.pkl --output-file mappings/seq2seq/ --vocab-type kb
PYTHONPATH=. python ../cocoa/neural/embeddings_to_torch.py --emb-file glove.840B.300d.txt --vocab-file mappings/seq2seq/vocab.pkl --output-file mappings/seq2seq/ --vocab-type utterance
```
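
A rough sketch of what the filtering step does (a hypothetical minimal version; the real `embeddings_to_torch.py` reads the pickled vocab and writes one file per `--vocab-type`):
```python
import torch

# Hypothetical sketch: keep only GloVe rows for in-vocabulary words and
# save them as a tensor usable with --pretrained-wordvec.
def filter_glove(glove_path, vocab, dim=300):
    emb = torch.zeros(len(vocab), dim)      # OOV rows stay zero
    with open(glove_path) as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if len(parts) != dim + 1:       # skip malformed lines
                continue
            if parts[0] in vocab:
                emb[vocab[parts[0]]] = torch.tensor([float(x) for x in parts[1:]])
    return emb

vocab = {'sofa': 0, 'price': 1, 'deal': 2}  # toy word -> index map
torch.save(filter_glove('glove.840B.300d.txt', vocab), 'utterance_glove.pt')
```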

#### 2. Train the seq2seq model.
```
mkdir -p cache/seq2seq;
mkdir -p checkpoint/seq2seq;
PYTHONPATH=. python main.py --schema-path data/craigslist-schema.json --train-examples-paths data/train.json --test-examples-paths data/dev.json \
--price-tracker price_tracker.pkl \
--model seq2seq \
--model-path checkpoint/seq2seq --mappings mappings/seq2seq \
--pretrained-wordvec mappings/seq2seq/utterance_glove.pt mappings/seq2seq/kb_glove.pt --word-vec-size 300 \
--rnn-size 300 --rnn-type LSTM --global-attention multibank_general \
--enc-layers 2 --dec-layers 2 --num-context 2 \
--batch-size 128 --gpuid 0 --optim adagrad --learning-rate 0.01  \
--report-every 500 \
--epochs 15 \
--cache cache/seq2seq --ignore-cache \
--verbose
```

#### 3. Finetune with RL.
See [finetuning](#rl) in the modular framework.
We just need to change the model path to `--model-path checkpoint/seq2seq`.

## Chat with the bot
Chat with the bot in the command line interface:
```
PYTHONPATH=. python ../scripts/generate_dataset.py --schema-path data/craigslist-schema.json --scenarios-path data/dev-scenarios.json --results-paths bot-chat-transcripts.json --max-examples 20 --agents <agent-name> cmd --price-tracker price_tracker.pkl --agent-checkpoints <ckpt-file> "" --max-turns 20 --random-seed <seed> --sample --temperature 0.2
```

Chat with the bot in the web interface:
add the bot model to the config file (example: `web/app_params_allsys.json`)
and [launch the website](../README.md#web).