Commit f5e8f76
Parent(s): ca1cf1e
Update README.md

README.md CHANGED
@@ -17,18 +17,19 @@ tags:
 []()
 []()
 
+
 <p align="center" width="100%">
-<a><img src="https://
+<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
 </p>
 Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
 
 ## Checkpoints of Models Pre-Trained with InfoDCL
 * InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
-* InfoDCL-RoBERTa trained with
+* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
 
 ## Model Performance
 
 <p align="center" width="100%">
-<a><img src="https://
+<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
 </p>
 Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs).
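The checkpoints added in this commit are hosted on the Hugging Face Hub. A minimal usage sketch, assuming the checkpoints load as RoBERTa-style encoders via the standard `transformers` Auto classes (mean pooling is an illustrative choice for obtaining a sequence representation, not necessarily the paper's exact setup):

```python
# Sketch: load an InfoDCL checkpoint from the Hugging Face Hub and encode a tweet.
# Assumption: the checkpoint works as a plain encoder via AutoModel/AutoTokenizer.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "UBC-NLP/InfoDCL-emoji"  # or "UBC-NLP/InfoDCL-hashtag"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("what a great day!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into a single sequence representation (illustrative choice).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a RoBERTa-base backbone
```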