We investigate applying MAML (Model-Agnostic Meta-Learning) to improve performance on binary content moderation tasks in low-resource settings. Using a PyTorch implementation of MAML, we pre-trained a model whose internal representation is amenable to a variety of content moderation tasks with minimal fine-tuning. Our task distribution comprised eight binary content moderation tasks, including sentiment analysis and insincere question detection, each with its own dataset. We compared the adaptation performance of this MAML-pre-trained model on unseen binary content moderation tasks against that of a model pre-trained with conventional transfer learning and a model trained from scratch. Empirically, MAML did not noticeably improve adaptation performance. Nevertheless, we hope that, with further improvements to MAML and fewer computational constraints, MAML can be applied to train robust, adaptive large-scale content moderation systems in low-resource settings.
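The MAML procedure described above — inner-loop adaptation on a task's support set, then a meta-update through the adapted parameters on its query set — can be sketched in PyTorch as follows. This is a minimal illustration, not the repository's actual implementation: the toy linear classifier, the synthetic `sample_task` generator, and the hyperparameters are all assumptions made for the sake of a self-contained example.

```python
import torch

torch.manual_seed(0)

def forward(w, b, x):
    # toy binary classifier: sigmoid of a linear function
    return torch.sigmoid(x @ w + b)

def loss_fn(w, b, x, y):
    # binary cross-entropy, clamped for numerical stability
    p = forward(w, b, x).clamp(1e-6, 1 - 1e-6)
    return -(y * p.log() + (1 - y) * (1 - p).log()).mean()

# meta-parameters (the shared initialization MAML learns)
w = torch.zeros(4, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
inner_lr, outer_lr = 0.1, 0.01
meta_opt = torch.optim.SGD([w, b], lr=outer_lr)

def sample_task():
    # hypothetical stand-in for drawing one binary moderation task:
    # labels come from a random linear decision boundary
    direction = torch.randn(4)
    x = torch.randn(16, 4)
    y = (x @ direction > 0).float()
    return x[:8], y[:8], x[8:], y[8:]  # support / query split

for step in range(100):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # tasks per meta-batch
        xs, ys, xq, yq = sample_task()
        # inner loop: one gradient step on the support set;
        # create_graph=True keeps second-order terms for the meta-update
        inner_loss = loss_fn(w, b, xs, ys)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w_adapted, b_adapted = w - inner_lr * gw, b - inner_lr * gb
        # outer loss: evaluate the adapted parameters on the query set
        meta_loss = meta_loss + loss_fn(w_adapted, b_adapted, xq, yq)
    # meta-update: backpropagate through the inner-loop adaptation
    meta_loss.backward()
    meta_opt.step()
```

Dropping `create_graph=True` yields first-order MAML, a common approximation when second-order gradients are too expensive — relevant given the computational limits the abstract mentions.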
jamqd/Content_Moderation_MAML
About
Robust Model-Agnostic Meta-Learning for Binary Content Moderation Tasks in NLP