David Pollack committed on
Commit c4334ac
1 Parent(s): 59bb7d4

update readme

Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,12 +1,24 @@
- This is a dummy model that can be used for testing.
+ This is a dummy model that can be used for testing. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).

- It was created as follows in a python shell
+ It was created as follows:
+
+ 1. Create a vocab.txt file (in /tmp/vocab.txt in this example).
+
+ ```
+ [UNK]
+ [SEP]
+ [PAD]
+ [CLS]
+ [MASK]
+ ```
+
+ 2. Open a python shell and run the following

  ```python
  import transformers
  config = transformers.DistilBertConfig(vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1, num_labels=2, id2label={0: "negative", 1: "positive"}, label2id={"negative": 0, "positive": 1})
  model = transformers.DistilBertForSequenceClassification(config)
- tokenizer = transformers.DistilBertTokenizer("/tmp/empty_vocab.txt", model_max_length=512)
+ tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
  config.save_pretrained(".")
  model.save_pretrained(".")
  tokenizer.save_pretrained(".")
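
For step 1 in the updated README, the vocab file can also be written from the same Python shell instead of by hand; a minimal sketch, assuming the /tmp/vocab.txt path used above:

```python
# Write the five special tokens that make up the entire vocabulary,
# one per line, matching vocab_size=5 in the config above.
with open("/tmp/vocab.txt", "w") as f:
    f.write("\n".join(["[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"]) + "\n")
```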
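
To sanity-check the claim that the model always gives roughly random results, the saved files can be loaded back into a text-classification pipeline; a minimal sketch, assuming the files were saved to the current directory as in step 2 (the label may come out as either class, with the score hovering around 0.5 because the weights are randomly initialized):

```python
import transformers

# Reload the dummy model and tokenizer saved in step 2.
model = transformers.DistilBertForSequenceClassification.from_pretrained(".")
tokenizer = transformers.DistilBertTokenizer.from_pretrained(".")

classifier = transformers.pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("hello world"))
# Expected shape of the output: [{"label": "negative", "score": 0.5}] (approximately),
# since the randomly initialized weights carry no signal.
```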