EzraAragon committed
Commit 9aa393d · 1 Parent(s): 797261c

Update README.md

Files changed (1): README.md (+4 −2)
````diff
@@ -25,11 +25,13 @@ In particular, for training the model we used a batch size of 256, Adam optimize
 
 # Usage
 
-```
+
 ## Use a pipeline as a high-level helper from the transformers import pipeline
+```
 pipe = pipeline("fill-mask", model="citiusLTL/DisorBERT")
-
+```
 ## Load model directly
+```
 from transformers import AutoTokenizer, AutoModelForMaskedLM
 
 tokenizer = AutoTokenizer.from_pretrained("citiusLTL/DisorBERT")
````
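Note that even in the committed README the line `from transformers import pipeline` is fused into the "Use a pipeline as a high-level helper" heading, so the first snippet does not run as written. A self-contained sketch of both usage paths, assuming the `transformers` library is installed and the `citiusLTL/DisorBERT` checkpoint can be downloaded from the Hub:

```python
# Sketch of the README's two usage paths with the import moved out of the
# heading line. Assumes transformers is installed, the citiusLTL/DisorBERT
# checkpoint is reachable, and a BERT-style [MASK] token.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Use a pipeline as a high-level helper
pipe = pipeline("fill-mask", model="citiusLTL/DisorBERT")
preds = pipe("I could not [MASK] at all last night.")

# Load model directly
tokenizer = AutoTokenizer.from_pretrained("citiusLTL/DisorBERT")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/DisorBERT")
```

Each entry in `preds` is a dict with the candidate token (`token_str`), its probability (`score`), and the filled-in sentence (`sequence`).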