roberta_chinese_large

Overview

Language model: roberta-large
Model size: 1.2G
Language: Chinese
Training data: CLUECorpusSmall
Eval data: CLUE dataset

Results

For results on downstream tasks such as text classification, please refer to the CLUEbenchmark repository (linked below).

Usage

NOTE: You must load this model with BertTokenizer, not RobertaTokenizer!

import torch
from transformers import BertTokenizer, BertModel

# This checkpoint uses a BERT-style vocabulary, so it must be loaded with BertTokenizer.
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_large")
roberta = BertModel.from_pretrained("clue/roberta_chinese_large")
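
A minimal sketch of a forward pass with the loaded tokenizer and model follows; the sample sentence and variable names are illustrative, not an official example.

# Illustrative: encode a short Chinese sentence and run it through the model.
inputs = tokenizer("你好，世界", return_tensors="pt")
with torch.no_grad():
    outputs = roberta(**inputs)
# outputs.last_hidden_state has shape (batch_size, sequence_length, hidden_size).
embeddings = outputs.last_hidden_state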

About CLUE benchmark

CLUE is a Chinese Language Understanding Evaluation benchmark. It provides tasks and datasets, baselines, pre-trained Chinese models, corpora, and a leaderboard.

GitHub: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/
