How to use

Requirements: clone the repository and install its dependencies:

git clone https://github.com/ddevaul/desformers
cd desformers
pip install -r requirements.txt
cd ..

Then, in your Python script, add the following:

import sys
import torch

# Make the local desformers source (the modified transformers2 package) importable
sys.path.append('./desformers/src')

from transformers2 import BertConfig, BertTokenizer
from transformers2.models.bert import BertForMaskedLM

# Run on GPU if available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Character-level and wordpiece tokenizers published on the Hugging Face Hub
preload_path = 'cabrooks/character-level-logion'
char_tokenizer = BertTokenizer.from_pretrained(preload_path)
wordpiece_tokenizer = BertTokenizer.from_pretrained("cabrooks/LOGION-50k_wordpiece")

# Build a config that carries both tokenizers and the target device
config = BertConfig()
config.word_piece_vocab_size = 50000
config.vocab_size = char_tokenizer.vocab_size
config.char_tokenizer = char_tokenizer
config.wordpiece_tokenizer = wordpiece_tokenizer
config.max_position_embeddings = 1024
config.device2 = device

model = BertForMaskedLM(config).to(device)

Download the weights file my_custom_model.pth and load it into the model:

model.load_state_dict(torch.load('my_custom_model.pth', map_location=torch.device('cpu')))

You are now ready to use the model.
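
As a quick check, you can run a single masked-character prediction. The snippet below is a minimal sketch that assumes the model keeps the standard BertForMaskedLM interface (token IDs in, logits over the character vocabulary out) and that the character-level tokenizer handles the [MASK] special token; the Greek example text is purely illustrative.

# Minimal masked-token prediction sketch (assumes the standard
# BertForMaskedLM interface from the transformers2 fork).
model.eval()

text = "ο [MASK]ογος"  # illustrative input with one masked character
inputs = char_tokenizer(text, return_tensors='pt').to(device)

with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and take the most likely replacement character
mask_index = (inputs['input_ids'][0] == char_tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(char_tokenizer.decode(predicted_id))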

Author:

This model was developed by Desmond DeVaul for his senior thesis at Princeton University.

It was built on the work of the Logion team at Princeton: https://www.logionproject.princeton.edu.
