lambdavi committed
Commit aec5a29 • 1 Parent(s): da30276

Update README.md

Files changed (1)
  1. README.md +8 -4
README.md CHANGED
@@ -134,11 +134,13 @@ This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that ca
 ### Direct Use for Inference
 
 ```python
-from span_marker import SpanMarkerModel, SpanMarkerTokenizer
+from span_marker import SpanMarkerModel
+from span_marker.tokenizer import SpanMarkerTokenizer
+
 
 # Download from the 🤗 Hub
 model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")
-tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config)
+tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
 model.set_tokenizer(tokenizer)
 
 # Run inference
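Applied to the README, the updated inference snippet reads as below. This is a minimal sketch assuming the current SpanMarker API: the `model.predict(...)` call and the example sentence are illustrative additions, not part of the diff.

```python
from span_marker import SpanMarkerModel
from span_marker.tokenizer import SpanMarkerTokenizer

# Download the pretrained SpanMarker model from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")

# Build a RoBERTa tokenizer from the model's own config (the fix in this commit)
# and attach it to the model
tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
model.set_tokenizer(tokenizer)

# Run inference: predict() accepts a sentence (or a list of sentences) and
# returns the detected entity spans with labels and scores.
# The sentence below is only an illustrative example.
entities = model.predict("The appeal was heard by the Supreme Court of India in March 2019.")
print(entities)
```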
 
@@ -151,11 +153,13 @@ You can finetune this model on your own dataset.
 <details><summary>Click to expand</summary>
 
 ```python
-from span_marker import SpanMarkerModel, Trainer, SpanMarkerTokenizer
+from span_marker import SpanMarkerModel, Trainer
+from span_marker.tokenizer import SpanMarkerTokenizer
+
 
 # Download from the 🤗 Hub
 model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")
-tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.tokenizer.config)
+tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
 model.set_tokenizer(tokenizer)
 
 # Specify a Dataset with "tokens" and "ner_tag" columns
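For the fine-tuning path the diff only shows the imports and the tokenizer fix; the sketch below fills in the rest of the loop under the assumption that `Trainer` follows the usual SpanMarker/`transformers` pattern. The dataset name, hyperparameters, and output paths are placeholders, not values from the README.

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from span_marker.tokenizer import SpanMarkerTokenizer
from transformers import TrainingArguments

# Download from the 🤗 Hub and attach the RoBERTa tokenizer as in the diff
model = SpanMarkerModel.from_pretrained("lambdavi/span-marker-luke-legal")
tokenizer = SpanMarkerTokenizer.from_pretrained("roberta-base", config=model.config)
model.set_tokenizer(tokenizer)

# Specify a Dataset with "tokens" and "ner_tag" columns
# ("your-ner-dataset" is a placeholder; substitute a real NER dataset)
dataset = load_dataset("your-ner-dataset")

# Training hyperparameters below are illustrative defaults, not tuned values
args = TrainingArguments(
    output_dir="models/span-marker-luke-legal-finetuned",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("models/span-marker-luke-legal-finetuned/checkpoint-final")
```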