Commit History

Download training data from S3
679a7b2

PeteBleackley committed

Removed unnecessary import
1aab673

PeteBleackley committed

Removed train_base_models task. Not needed with RoBERTa models
8454a19

PeteBleackley committed

Added gradio to requirements.txt
5c2caf3

PeteBleackley committed

Configured gradio app for training
6eb15d0

PeteBleackley committed
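
A minimal sketch of how a Gradio app can expose a training entry point. The train function and its signature here are assumptions for illustration, not the repository's actual code.

```python
import gradio as gr

# Hypothetical entry point -- the commit message only says the app
# was configured for training, not how.
def train(dataset_name: str) -> str:
    return f"Training on {dataset_name} started"

demo = gr.Interface(fn=train, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```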

Added HuggingFace Space configuration YAML to README.md
eef6e62

PeteBleackley committed

Added HuggingFace Space configuration YAML to README.md
1f5b930

PeteBleackley committed

Created HuggingFace Space
cce945c

PeteBleackley committed

Removed diagnostic
3340c72

PeteBleackley committed

Ensure consistency value is 32-bit
452e9a6

PeteBleackley committed

Removed diagnostics
355ad7d

PeteBleackley committed

Correct dimension of consistency cosine
14d83dc

PeteBleackley committed

Wrap CrossEntropyLoss in a callable that makes it applicable to sequences
4d8fdac

PeteBleackley committed
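
One plausible form of such a wrapper, assuming logits of shape (batch, seq_len, vocab): torch.nn.CrossEntropyLoss expects classes in the second dimension, so sequence logits are flattened before the loss is applied. The class name and shapes are illustrative, not taken from the repository.

```python
import torch

class SequenceCrossEntropyLoss:
    """Illustrative wrapper; the repository's actual callable may differ."""

    def __init__(self):
        self.loss = torch.nn.CrossEntropyLoss()

    def __call__(self, logits, targets):
        # logits: (batch, seq_len, vocab); targets: (batch, seq_len)
        return self.loss(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1))
```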

Using torch.nn.CosineSimilarity to simplify code
798488e

PeteBleackley committed
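
torch.nn.CosineSimilarity bundles the dot product, normalisation, and epsilon guard into one module, which is presumably the simplification meant here. Shapes are assumed:

```python
import torch

cos = torch.nn.CosineSimilarity(dim=-1, eps=1e-8)
a = torch.randn(4, 16)
b = torch.randn(4, 16)
similarity = cos(a, b)  # shape (4,), one cosine per row
```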

Use builtin sum rather than torch.sum for a generator
1d1a876

PeteBleackley committed
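
The point being: torch.sum reduces a single tensor and cannot consume a generator, whereas the builtin sum folds an iterable of tensors with ordinary addition. A small illustration:

```python
import torch

tensors = [torch.randn(3) for _ in range(5)]

# torch.sum(t * 2 for t in tensors) would raise a TypeError;
# the builtin sum adds the generated tensors elementwise instead.
total = sum(t * 2 for t in tensors)
```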

Removed unnecessary parameters
684c1d8

PeteBleackley committed

Attention mask in decoder
69cf4c5

PeteBleackley committed

Set use_cache argument
9052370

PeteBleackley committed
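
The commit does not say which call receives the flag; as an assumption, a RoBERTa decoder is shown, where use_cache=False stops past key/values being returned during training:

```python
import torch
from transformers import RobertaConfig, RobertaModel

# Assumed setup -- the real model and inputs are not in the commit.
model = RobertaModel(RobertaConfig(is_decoder=True))
input_ids = torch.tensor([[0, 31414, 2]])
outputs = model(input_ids, use_cache=False)
```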

Fix Einstein summation notation
7fe1144

PeteBleackley committed
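
The exact expression is not shown; as a generic illustration, every index in an einsum output must appear in the inputs, and repeated input indices are summed over:

```python
import torch

queries = torch.randn(2, 5, 8)  # (batch, query_len, dim)
keys = torch.randn(2, 7, 8)     # (batch, key_len, dim)

# 'd' is repeated across inputs and absent from the output,
# so it is summed over, giving attention-style scores.
scores = torch.einsum('bqd,bkd->bqk', queries, keys)  # (2, 5, 7)
```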

Use keepdim option when normalising vectors
738b546

PeteBleackley committed
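
Why keepdim matters when normalising, sketched with assumed shapes:

```python
import torch

x = torch.randn(4, 16)

# x.norm(dim=-1) has shape (4,), which does not broadcast against
# (4, 16); keepdim=True keeps shape (4, 1) so each row is divided
# by its own norm.
normalised = x / x.norm(dim=-1, keepdim=True)
```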

Make EPSILON a tensor
cf5f935

PeteBleackley committed
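
This pairs with the next commit: torch.maximum requires tensor operands, so a float epsilon must be wrapped first. The surrounding usage is an assumption:

```python
import torch

EPSILON = torch.tensor(1e-8)

# torch.maximum(norms, 1e-8) would raise a TypeError for the float.
norms = torch.randn(4).abs()
safe_norms = torch.maximum(norms, EPSILON)
```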

torch.maximum, not torch.max
98ad67d

PeteBleackley committed
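
The distinction, in an illustrative snippet: torch.max with an integer second argument is a reduction over that dimension, while torch.maximum is the elementwise binary maximum:

```python
import torch

x = torch.randn(3, 4)

values, indices = torch.max(x, 1)              # reduction over dim 1
clamped = torch.maximum(x, torch.tensor(0.0))  # elementwise maximum
```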

Unsqueeze attention mask
bc77ce5

PeteBleackley committed
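
A typical reason for the unsqueeze, with assumed shapes: a (batch, seq_len) padding mask needs an extra axis before it can broadcast against the hidden states:

```python
import torch

mask = torch.ones(2, 10)         # (batch, seq_len)
hidden = torch.randn(2, 10, 16)  # (batch, seq_len, dim)

# (2, 10, 1) broadcasts across the feature dimension.
masked = hidden * mask.unsqueeze(-1)
```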

Unpack BatchEncoding
b5ce6f8

PeteBleackley committed
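
BatchEncoding is dict-like, so unpacking it with ** passes input_ids, attention_mask, and so on as keyword arguments. The tokenizer name is an assumption:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoding = tokenizer("example text", return_tensors="pt")

# encoding holds input_ids and attention_mask; a model call is then
# simply: outputs = model(**encoding)
```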

Create BatchEncoding in pad
a0c9643

PeteBleackley committed

Use BatchEncoding for training, not batch_encoding
08197ec

PeteBleackley committed

Use BatchEncoding for training
80162eb

PeteBleackley committed

Removed unnecessary on_epoch_end
02e37b9

PeteBleackley committed

Learning rate scheduler
ae82fc7

PeteBleackley committed
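
Which scheduler was added is not stated; a common linear-decay setup is sketched here as an assumption:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.1, total_iters=1000)

for step in range(1000):
    optimizer.step()   # after the usual loss.backward()
    scheduler.step()
```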

input_embeddings not needed
a5b7b8e

PeteBleackley committed

Removed unnecessary parameter
8172944

PeteBleackley committed

get_input_embeddings() directly from base model
e095479

PeteBleackley committed

Missing 'from_pretrained'
215b416

PeteBleackley committed

config didn't need to be a property
0abed2a

PeteBleackley committed

There's a simpler way of doing this, I hope
858f75e

PeteBleackley committed

Might be simpler to inherit from RobertaModel rather than PreTrainedModel
f0ad7f1

PeteBleackley committed

Removed a base model that was causing a loop in model initialisation
87535ff

PeteBleackley committed

Problems with config
2f6dc26

PeteBleackley committed

Fixed typo
1d631bc

PeteBleackley committed

Removed line that would have failed
dbfe7ff

PeteBleackley committed

Fixed import
acda749

PeteBleackley committed

Typo
ed62a1c

PeteBleackley committed

Further changes for compatibility with the HuggingFace PyTorch implementation
5b7a8ed

PeteBleackley committed

Minor error in setting up tokenizer
fb4c0b0

PeteBleackley committed

PyTorch implementation of the HuggingFace PreTrainedModel class does not allow direct setting of base_model. Rejig constructors accordingly
519dfd1

PeteBleackley committed

Removed superfluous ()
4cda7b6

PeteBleackley committed

Removed superfluous ()
518e821

PeteBleackley committed

Corrected inheritance
8823ce8

PeteBleackley committed

Updated requirements.txt
bacc9ad

PeteBleackley committed