fbaigt committed
Commit
a52d94d
1 Parent(s): cd91289

Update model card

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
````diff
@@ -3,10 +3,12 @@ language:
 - en
 datasets:
 - pubmed
+- chemical patent
+- cooking recipe
 ---
 
 ## ProcBERT
-ProcBERT is a pre-trained language model specifically for procedural text. It was pre-trained on a large-scale procedural corpus (PubMed articles/chemical patents/recipes) containing over 12B tokens and shows great performance on downstream tasks. More details can be found in the following [paper](https://arxiv.org/abs/2109.04711):
+ProcBERT is a pre-trained language model specifically for procedural text. It was pre-trained on a large-scale procedural corpus (PubMed articles/chemical patents/cooking recipes) containing over 12B tokens and shows great performance on downstream tasks. More details can be found in the following [paper](https://arxiv.org/abs/2109.04711):
 
 ```
 @article{Bai2021PretrainOA,
@@ -21,8 +23,8 @@ ProcBERT is a pre-trained language model specifically for procedural text. It wa
 ## Usage
 ```
 from transformers import *
-tokenizer = BertTokenizer.from_pretrained("fbaigt/procbert")
-model = BertForTokenClassification.from_pretrained("fbaigt/procbert")
+tokenizer = AutoTokenizer.from_pretrained("fbaigt/procbert")
+model = AutoModelForTokenClassification.from_pretrained("fbaigt/procbert")
 ```
 
 More usage details can be found [here](https://github.com/bflashcp3f/ProcBERT).
````
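The post-update usage snippet can be expanded into a runnable sketch along these lines. This is a minimal illustration, not part of the commit: the example sentence is made up, the weights download from the Hugging Face Hub on first use, and the label names printed depend on the checkpoint's `id2label` config.

```python
# Sketch of token classification with ProcBERT using the Auto* classes
# from the updated model card. Assumes `transformers` and `torch` are
# installed; weights are fetched from the Hub on first call.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("fbaigt/procbert")
model = AutoModelForTokenClassification.from_pretrained("fbaigt/procbert")

# Illustrative procedural-text input (not from the model card).
sentence = "Centrifuge the sample at 4000 rpm for 10 minutes."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Map each sub-token to its highest-scoring label id, then to a name.
pred_ids = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for tok, label_id in zip(tokens, pred_ids):
    print(tok, model.config.id2label[label_id])
```

Using `AutoTokenizer`/`AutoModelForTokenClassification` (as the diff does) rather than the hard-coded `Bert*` classes lets the Hub checkpoint's config decide the concrete architecture, which is why the commit swaps them in.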