## swtx SIMCSE RoBERTa WWM Ext Chinese
This model produces sentence embeddings for simplified Chinese, trained with [SimCSE (Simple Contrastive Learning of Sentence Embeddings)](https://arxiv.org/abs/2104.08821). The pretrained Chinese RoBERTa WWM Ext model is used as the encoder.
## How to use
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
model = AutoModel.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
```
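Once the model is loaded, embeddings can be extracted from the encoder output. A minimal sketch, assuming [CLS]-token pooling (the pooling strategy commonly used by SimCSE; not confirmed by this card) and illustrative example sentences:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
model = AutoModel.from_pretrained("swtx/simcse-chinese-roberta-wwm-ext")
model.eval()

# Illustrative sentences (not from the model card)
sentences = ["今天天气真好", "今天天气不错"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# [CLS] pooling: take the first token's hidden state as the sentence embedding
# (assumption; SimCSE typically uses the [CLS] representation)
embeddings = outputs.last_hidden_state[:, 0]

# Cosine similarity between the two sentences
similarity = torch.nn.functional.cosine_similarity(
    embeddings[0], embeddings[1], dim=0
)
print(similarity.item())
```

For retrieval or clustering, the embeddings can be L2-normalized so that dot products equal cosine similarities.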