|
--- |
|
license: cc-by-nc-4.0
|
language: |
|
- en |
|
library_name: transformers |
|
tags: |
|
- social media |
|
- contrastive learning |
|
--- |
|
# Contrastive Learning of Sociopragmatic Meaning in Social Media |
|
|
|
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p> |
|
<p align="center" float="left"> |
|
|
|
<p align="center">Publish at Findings of ACL 2023</p> |
|
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p> |
|
|
|
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)]() |
|
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)]() |
|
|
|
|
|
<p align="center" width="100%"> |
|
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a> |
|
</p> |
|
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses: a corpus-aware contrastive loss (CCL) and a light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable while preserving the semantic relationships between classes.
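
For intuition, below is a minimal sketch of a generic label-aware contrastive objective in the spirit of LCL (a SupCon-style loss), not the paper's exact formulation. The tensors `embeddings` and `labels` are hypothetical: a batch of sequence representations and their surrogate (e.g., emoji) labels.

```python
import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(embeddings: torch.Tensor,
                                 labels: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """Pull together examples sharing a surrogate label (e.g., the same
    emoji) and push apart the rest. Illustrative only."""
    z = F.normalize(embeddings, dim=1)                   # (N, d) unit vectors
    sim = z @ z.T / temperature                          # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # drop self-pairs
    # Positives: pairs that share the same surrogate label
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                               # anchors with >=1 positive
    per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()

# Toy batch: four embeddings, two surrogate classes
loss = label_aware_contrastive_loss(torch.randn(4, 8), torch.tensor([0, 0, 1, 1]))
```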
|
|
|
|
|
|
|
## Checkpoints of Models Pre-Trained with InfoDCL |
|
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji |
|
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag |
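
Both checkpoints load with the standard `transformers` API. A minimal sketch is below; mean pooling of the last hidden states is one common way to obtain a sequence representation, not necessarily the pooling used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/InfoDCL-emoji")
model = AutoModel.from_pretrained("UBC-NLP/InfoDCL-emoji")

inputs = tokenizer("what a great day!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Mean-pool token states into a single sequence embedding
embedding = outputs.last_hidden_state.mean(dim=1)
```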
|
|
|
## Model Performance |
|
|
|
<p align="center" width="100%"> |
|
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a> |
|
</p> |
|
Fine-tuning results on our 24 sociopragmatic meaning (SM) datasets (average macro-F1 over five runs).
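
A hedged sketch of how such fine-tuning might be set up on a downstream SM classification task; the label count and example texts below are illustrative assumptions, not from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/InfoDCL-emoji")
model = AutoModelForSequenceClassification.from_pretrained(
    "UBC-NLP/InfoDCL-emoji", num_labels=3)  # hypothetical 3-way SM task

batch = tokenizer(["so happy today!", "this is awful"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([2, 0])  # toy labels for illustration

outputs = model(**batch, labels=labels)   # loss computed internally
outputs.loss.backward()                   # one gradient step; add an optimizer in practice
```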