Pretrained models for our paper (https://arxiv.org/abs/2210.10258).
```bibtex
@inproceedings{wu-etal-2022-continued,
    title = "Continued Pretraining for Better Zero- and Few-Shot Promptability",
    author = "Wu, Zhaofeng and Logan IV, Robert L. and Walsh, Pete and Bhagia, Akshita and Groeneveld, Dirk and Singh, Sameer and Beltagy, Iz",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    publisher = "Association for Computational Linguistics",
}
```
Please see the "Files and versions" tab for the model checkpoints. We release our MTL models (denoted MTL-T🔥P🔥 in our paper) and our meta-learned models, in different sizes and prompt configurations.
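
If you prefer to fetch a checkpoint programmatically rather than browsing the "Files and versions" tab, below is a minimal sketch using the `huggingface_hub` library. The repository ID and checkpoint filename are placeholders, not part of this release's documentation; substitute this repository's ID and the file for the MTL or meta-learned variant you want.

```python
# Minimal sketch for listing and downloading checkpoint files from this repository.
# The repo ID and filename below are placeholders -- fill them in with this
# repository's namespace/name and the checkpoint you want.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "<this-repo-id>"  # placeholder: the namespace/name shown at the top of this page

# List every file in the repository to see the available model variants.
for name in list_repo_files(repo_id):
    print(name)

# Download one checkpoint to the local Hugging Face cache and get its path.
local_path = hf_hub_download(repo_id=repo_id, filename="<checkpoint-file-from-the-list-above>")
print(local_path)
```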