This repository contains the code for the paper “Parsing as Pretraining” by Vilares et al., which proposes improving natural language understanding models by using syntactic parsing as a pretraining task. It also contains our updates for the experiments in our article “Fine-Tuning BERT-Based Pre-Trained Models for Arabic Dependency Parsing”, where we evaluated the approach on several Arabic treebanks with different Arabic BERT models.
Fine-tune AraBERTv02 to perform sequence-labeling dependency parsing on the ArPoT and PADT treebanks: HERE
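
The sketch below illustrates the general idea of sequence-labeling dependency parsing with an Arabic BERT encoder: each word receives a single label encoding its head and dependency relation, and the model is fine-tuned as a token classifier. It is a minimal, hedged example, not this repository's actual training script; the checkpoint name, toy sentence, and label format are assumptions for illustration only.

```python
# Minimal sketch (assumptions): fine-tune AraBERTv02 as a token classifier whose
# labels encode each word's dependency head offset and relation, in the spirit of
# sequence-labeling parsing. Checkpoint, sentence, and label set are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "aubmindlab/bert-base-arabertv02"   # AraBERTv02 checkpoint (assumed ID)
labels = ["0>root", "-1>nsubj", "-2>obj"]        # toy "head-offset>relation" label set
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))

# Toy example: "قرأ الولد الكتاب" (the boy read the book), one label per word.
words = ["قرأ", "الولد", "الكتاب"]
word_labels = ["0>root", "-1>nsubj", "-2>obj"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# Align word-level labels to subword tokens: label the first subword of each word,
# mark the remaining subwords and special tokens with -100 so the loss ignores them.
aligned, prev = [], None
for wid in enc.word_ids(batch_index=0):
    if wid is None or wid == prev:
        aligned.append(-100)
    else:
        aligned.append(label2id[word_labels[wid]])
    prev = wid

outputs = model(**enc, labels=torch.tensor([aligned]))
outputs.loss.backward()   # one fine-tuning step (optimizer and training loop omitted)
```

In practice the same pattern applies to any of the Arabic BERT models and treebanks mentioned above: only the checkpoint name and the label vocabulary derived from the treebank change.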