---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
---

# deberta-base-japanese-wikipedia-ud-goeswith

## Model Description

This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing (using `goeswith` for subword relations), derived from [deberta-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).

## How to Use

```py
class UDgoeswith(object):
  def __init__(self,bert):
    from transformers import AutoTokenizer,AutoModelForTokenClassification
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForTokenClassification.from_pretrained(bert)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=self.tokenizer(text,return_offsets_mapping=True)
    v=w["input_ids"]
    n=len(v)-1
    with torch.no_grad():
      # for each token, mask it in place and append a copy at the end,
      # then score all (dependent,head) pairs in a single batch
      d=self.model(input_ids=torch.tensor([v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[v[i]] for i in range(1,n)]))
    e=d.logits.numpy()[:,1:n,:]
    e[:,:,0]=numpy.nan
    m=numpy.full((n,n),numpy.nan)
    m[1:,1:]=numpy.nanmax(e,axis=2).transpose()  # best arc score for each head candidate
    p=numpy.zeros((n,n))
    p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()  # best label for each head candidate
    for i in range(1,n):
      # move each self-head (diagonal) score to column 0, the root
      m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]  # maximum-spanning-tree decoding
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"
```
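
A minimal usage sketch, assuming the class above (the model ID below is this repository, and the example sentence is the widget text from the header):

```py
nlp=UDgoeswith("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-goeswith")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```

The call returns the parse as a CoNLL-U formatted string. `ufal.chu_liu_edmonds`, used for the maximum-spanning-tree decoding, is available on PyPI as the `ufal.chu-liu-edmonds` package.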