tongzhouw committed
Commit 4062efb
1 Parent(s): b3dd919

Compatible with PyTorch master

Files changed (2):
  1. README.md +7 -7
  2. torchmoji/model_def.py +5 -5
README.md CHANGED
@@ -5,7 +5,7 @@
 TorchMoji is a [pyTorch](http://pytorch.org/) implementation of the [DeepMoji](https://github.com/bfelbo/DeepMoji) model developped by Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan and Sune Lehmann.
 
 This model trained on 1.2 billion tweets with emojis to understand how language is used to express emotions. Through transfer learning the model can obtain state-of-the-art performance on many emotion-related text modeling tasks.
-
+
 Try the online demo of DeepMoji [http://deepmoji.mit.edu](http://deepmoji.mit.edu/)! See the [paper](https://arxiv.org/abs/1708.00524), [blog post](https://medium.com/@bjarkefelbo/what-can-we-learn-from-emojis-6beb165a5ea0) or [FAQ](https://www.media.mit.edu/projects/deepmoji/overview/) for more details.
 
 ## Overview
@@ -15,7 +15,7 @@ Try the online demo of DeepMoji [http://deepmoji.mit.edu](http://deepmoji.mit.ed
 * [model/](model) contains the pretrained model and vocabulary.
 * [data/](data) contains raw and processed datasets that we include in this repository for testing.
 * [tests/](tests) contains unit tests for the codebase.
-
+
 To start out with, have a look inside the [examples/](examples) directory. See [score_texts_emojis.py](examples/score_texts_emojis.py) for how to use DeepMoji to extract emoji predictions, [encode_texts.py](examples/encode_texts.py) for how to convert text into 2304-dimensional emotional feature vectors or [finetune_youtube_last.py](examples/finetune_youtube_last.py) for how to use the model for transfer learning on a new dataset.
 
 Please consider citing the [paper](https://arxiv.org/abs/1708.00524) of DeepMoji if you use the model or code (see below for citation).
@@ -26,12 +26,12 @@ We assume that you're using [Python 2.7-3.5](https://www.python.org/downloads/)
 
 First you need to install [pyTorch (version 0.2+)](http://pytorch.org/), currently by:
 ```bash
-conda install pytorch -c soumith
+conda install pytorch -c pytorch
 ```
 At the present stage the model can't make efficient use of CUDA. See details in the [Hugging Face blog post](https://medium.com/huggingface/understanding-emotions-from-keras-to-pytorch-3ccb61d5a983).
 
 When pyTorch is installed, run the following in the root directory to install the remaining dependencies:
-
+
 ```bash
 pip install -e .
 ```
@@ -54,21 +54,21 @@ cd tests
 nosetests -v
 ```
 
-By default, this will also run finetuning tests. These tests train the model for one epoch and then check the resulting accuracy, which may take several minutes to finish. If you'd prefer to exclude those, run the following instead:
+By default, this will also run finetuning tests. These tests train the model for one epoch and then check the resulting accuracy, which may take several minutes to finish. If you'd prefer to exclude those, run the following instead:
 
 ```bash
 cd tests
 nosetests -v -a '!slow'
 ```
 
-## Disclaimer
+## Disclaimer
 This code has been tested to work with Python 2.7 and 3.5 on Ubuntu 16.04 and macOS Sierra machines. It has not been optimized for efficiency, but should be fast enough for most purposes. We do not give any guarantees that there are no bugs - use the code on your own responsibility!
 
 ## Contributions
 We welcome pull requests if you feel like something could be improved. You can also greatly help us by telling us how you felt when writing your most recent tweets. Just click [here](http://deepmoji.mit.edu/contribute/) to contribute.
 
 ## License
-This code and the pretrained model is licensed under the MIT license.
+This code and the pretrained model is licensed under the MIT license.
 
 ## Benchmark datasets
 The benchmark datasets are uploaded to this repository for convenience purposes only. They were not released by us and we do not claim any rights on them. Use the datasets at your responsibility and make sure you fulfill the licenses that they were released with. If you use any of the benchmark datasets please consider citing the original authors.
torchmoji/model_def.py CHANGED
@@ -215,7 +215,7 @@ class TorchMoji(nn.Module):
         x = self.embed_dropout(x)
 
         # Update packed sequence data for RNN
-        packed_input = PackedSequence(data=x, batch_sizes=packed_input.batch_sizes)
+        packed_input = PackedSequence(x, packed_input.batch_sizes)
 
         # skip-connection from embedding to output eases gradient-flow and allows access to lower-level features
         # ordering of the way the merge is done is important for consistency with the pretrained model
@@ -223,10 +223,10 @@
         lstm_1_output, _ = self.lstm_1(lstm_0_output, hidden)
 
         # Update packed sequence data for attention layer
-        packed_input = PackedSequence(data=torch.cat((lstm_1_output.data,
-                                                      lstm_0_output.data,
-                                                      packed_input.data), dim=1),
-                                      batch_sizes=packed_input.batch_sizes)
+        packed_input = PackedSequence(torch.cat((lstm_1_output.data,
+                                                 lstm_0_output.data,
+                                                 packed_input.data), dim=1),
+                                      packed_input.batch_sizes)
 
         input_seqs, _ = pad_packed_sequence(packed_input, batch_first=True)
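The model_def.py change works because `PackedSequence` in PyTorch is essentially a `(data, batch_sizes)` namedtuple, and constructing it with positional arguments is compatible with PyTorch master, where the keyword form used here broke. A minimal sketch of the pattern, using a plain `namedtuple` stand-in for the real `torch.nn.utils.rnn.PackedSequence` (so only the construction style is illustrated, not the packing semantics):

```python
from collections import namedtuple

# Stand-in for torch.nn.utils.rnn.PackedSequence: the flattened data
# plus the batch size at each time step.
PackedSequence = namedtuple('PackedSequence', ['data', 'batch_sizes'])

# A packed batch: 5 flattened rows, batch sizes [3, 2] over 2 time steps.
packed_input = PackedSequence([10, 20, 30, 40, 50], [3, 2])

# Rebuild the sequence around transformed data (what the commit does after
# embed_dropout and after concatenating the LSTM outputs), passing the
# fields positionally instead of by keyword.
x = [v * 2 for v in packed_input.data]
packed_input = PackedSequence(x, packed_input.batch_sizes)

print(packed_input.data)         # → [20, 40, 60, 80, 100]
print(packed_input.batch_sizes)  # → [3, 2] (unchanged)
```

Since `PackedSequence` is namedtuple-based, `packed_input._replace(data=x)` would express the same update; the commit's positional form has the advantage of working on both old and new PyTorch constructors.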