--- language: - "lzh" tags: - "classical chinese" - "literary chinese" - "ancient chinese" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "apache-2.0" pipeline_tag: "token-classification" widget: - text: "孟子見梁惠王" --- # roberta-classical-chinese-base-ud-goeswith ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto). ## How to Use ```py class UDgoeswith(object): def __init__(self,bert): from transformers import AutoTokenizer,AutoModelForTokenClassification self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForTokenClassification.from_pretrained(bert) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=self.tokenizer(text,return_offsets_mapping=True) v=w["input_ids"] n=len(v)-1 with torch.no_grad(): d=self.model(input_ids=torch.tensor([v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[v[i]] for i in range(1,n)])) e=d.logits.numpy()[:,1:n,:] e[:,:,0]=numpy.nan m=numpy.full((n,n),numpy.nan) m[1:,1:]=numpy.nanmax(e,axis=2).transpose() p=numpy.zeros((n,n)) p[1:,1:]=numpy.nanargmax(e,axis=2).transpose() for i in range(1,n): m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u="# text = "+text+"\n" v=[(s,e) for s,e in w["offset_mapping"] if s
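Each label the model predicts packs a UPOS tag, morphological features, and a dependency relation into one string, which the decoding loop above splits on `|`. The following minimal sketch lists that label inventory; it assumes only the `UPOS|FEATS|DEPREL` layout used above.

```py
from transformers import AutoConfig

config=AutoConfig.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith")
for i,label in sorted(config.id2label.items()):
  parts=label.split("|")
  # First field is the UPOS tag, last field is the dependency relation
  print(i,label,"-> UPOS:",parts[0],"DEPREL:",parts[-1])
```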
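Since `UDgoeswith` returns plain CoNLL-U text, any CoNLL-U reader can consume its output. Below is a sketch that assumes the third-party [conllu](https://pypi.org/project/conllu/) package (`pip install conllu`), which is not required by the model itself, and reuses the `UDgoeswith` class defined above.

```py
import conllu

nlp=UDgoeswith("KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith")
# Parse the CoNLL-U string back into structured tokens
for sentence in conllu.parse(nlp("孟子見梁惠王")):
  for token in sentence:
    print(token["id"],token["form"],token["upos"],token["head"],token["deprel"])
```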