sociocom/MedNERN-CR-JA
Task: Token Classification (NER)
Libraries: Transformers, PyTorch, Safetensors, Inference Endpoints
Model type: bert
Language: Japanese
Domain: medical documents
Training data: MedTxt-CR-JA-training-v2.xml
DOI: 10.57967/hf/0620
License: cc-by-4.0
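Given the tags above, this is a BERT-based token-classification model for Japanese clinical text that can be loaded with the `transformers` library. A minimal sketch, assuming the standard `AutoModelForTokenClassification` path works for this checkpoint; the aggregation strategy and the sample sentence are illustrative assumptions, and the repository's own `predict.py` remains the supported entry point:

```python
# Hedged sketch: load sociocom/MedNERN-CR-JA as a token-classification
# (NER) pipeline via the Hugging Face transformers library.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    pipeline,
)

MODEL_ID = "sociocom/MedNERN-CR-JA"


def build_ner_pipeline(model_id: str = MODEL_ID):
    """Download the tokenizer and model, and wrap them in an NER pipeline.

    aggregation_strategy="simple" merges subword pieces into whole
    entity spans; the model's tag set determines the entity labels.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForTokenClassification.from_pretrained(model_id)
    return pipeline(
        "token-classification",
        model=model,
        tokenizer=tokenizer,
        aggregation_strategy="simple",
    )


if __name__ == "__main__":
    ner = build_ner_pipeline()
    # Hypothetical Japanese clinical sentence; real inputs would be
    # case-report text like the MedTxt-CR-JA training data.
    print(ner("右肺に陰影を認め、抗生剤を投与した。"))
```

Batch processing of files or directories is handled by `predict.py` (see its `-i` parameter, fixed in the latest commit), so the pipeline above is only a quick interactive check.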
MedNERN-CR-JA · 4 contributors · History: 7 commits
Latest commit 8faec96 (9 months ago) by gabrielandrade2: "Fix -i parameter when input is a directory"
| File | Size | Last commit | Last updated |
| --- | --- | --- | --- |
| dictionaries/ | | Add normalization methods | over 1 year ago |
| .gitattributes | 1.57 kB | Add normalization methods | over 1 year ago |
| EntityNormalizer.py | 2.04 kB | Update model with additional negative examples, improve support scripts | 11 months ago |
| NER_medNLP.py | 9.37 kB | Update model with additional negative examples, improve support scripts | 11 months ago |
| README.md | 5.53 kB | Update model with additional negative examples, improve support scripts | 11 months ago |
| config.json | 4.3 kB | Update model with additional negative examples, improve support scripts | 11 months ago |
| id_to_tags.pkl (pickle, LFS) | 671 Bytes | Update model with additional negative examples, improve support scripts | 11 months ago |
| key_attr.pkl (pickle, LFS) | 191 Bytes | Fork of MedNER-CR-JA | about 2 years ago |
| model.safetensors (LFS) | 440 MB | Adding `safetensors` variant of this model (#1) | 11 months ago |
| predict.py | 8.07 kB | Fix -i parameter when input is a directory | 9 months ago |
| pytorch_model.bin (pickle, LFS) | 440 MB | Update model with additional negative examples, improve support scripts | 11 months ago |
| requirements.txt | 723 Bytes | Update model with additional negative examples, improve support scripts | 11 months ago |
| special_tokens_map.json | 125 Bytes | Add tokenizer configuration files | over 1 year ago |
| text.txt | 1.94 kB | Fork of MedNER-CR-JA | about 2 years ago |
| tokenizer_config.json | 585 Bytes | Update model with additional negative examples, improve support scripts | 11 months ago |
| utils.py | 585 Bytes | Update model with additional negative examples, improve support scripts | 11 months ago |
| vocab.txt | 258 kB | Add tokenizer configuration files | over 1 year ago |

Detected pickle imports in pytorch_model.bin: "collections.OrderedDict", "torch.LongStorage", "torch.FloatStorage", "torch._utils._rebuild_tensor_v2".