We investigate applying MAML to improve performance on binary content moderation tasks in low-resource settings. Using a PyTorch implementation of the MAML algorithm, we pre-trained a model whose internal representation is intended to adapt to a variety of content moderation tasks with minimal fine-tuning. Our task distribution comprised eight binary tasks, including sentiment analysis and insincere question detection, each drawn from a separate dataset. We compared the MAML-pre-trained model's ability to adapt to unseen binary content moderation tasks against two baselines: a model pre-trained with a traditional transfer learning approach and a model trained from scratch. Empirically, we found that MAML did not noticeably improve adaptation performance. Nevertheless, we hope that, with further refinements to MAML and fewer computational constraints, it can be applied to train robust, adaptive large-scale content moderation systems in low-resource settings.
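To make the setup concrete, the following is a minimal sketch of second-order MAML in PyTorch on a synthetic stand-in for binary tasks. The linear model, task generator, learning rates, and batch sizes here are illustrative assumptions, not the paper's actual architecture or datasets; only the inner-step/outer-step structure reflects the MAML algorithm itself.

```python
import torch

torch.manual_seed(0)

# Hypothetical tiny setup: each "task" is binary classification on 8-dim features.
D, INNER_LR, OUTER_LR = 8, 0.1, 0.01
w = torch.zeros(D, requires_grad=True)  # meta-parameters (linear classifier weights)

def loss_fn(weights, x, y):
    # Binary cross-entropy with logits for a linear classifier.
    return torch.nn.functional.binary_cross_entropy_with_logits(x @ weights, y)

def sample_task():
    # Synthetic stand-in for one moderation task: a random linear decision rule,
    # with a support set for adaptation and a query set for meta-evaluation.
    true_w = torch.randn(D)
    xs, xq = torch.randn(16, D), torch.randn(16, D)
    return xs, (xs @ true_w > 0).float(), xq, (xq @ true_w > 0).float()

meta_opt = torch.optim.SGD([w], lr=OUTER_LR)
for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):  # meta-batch of 4 tasks
        xs, ys, xq, yq = sample_task()
        # Inner loop: one gradient step on the support set, keeping the graph
        # so the outer update can differentiate through it (second-order MAML).
        g = torch.autograd.grad(loss_fn(w, xs, ys), w, create_graph=True)[0]
        w_adapted = w - INNER_LR * g
        # Outer loss: the adapted weights are evaluated on the query set, and
        # gradients flow back to the meta-parameters w.
        loss_fn(w_adapted, xq, yq).backward()
    meta_opt.step()
```

The meta-objective rewards initializations that perform well *after* a small number of task-specific gradient steps, which is what "amenable to minimal fine-tuning" means operationally; a first-order variant (FOMAML) would simply drop `create_graph=True` to save compute.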