--- language: - "ja" tags: - "japanese" - "wikipedia" - "cc100" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" --- # roberta-base-japanese-juman-ud-goeswith ## Model Description This is a RoBERTa model pretrained on Japanese Wikipedia and CC-100 texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese). ## How to Use ``` from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-japanese-juman-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple") print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている")) ``` [fugashi](https://pypi.org/project/fugashi) is required.