
# PERIN: Permutation-invariant Semantic Parsing

**David Samuel & Milan Straka**

Charles University,
Faculty of Mathematics and Physics,
Institute of Formal and Applied Linguistics


- Paper
- Pretrained models
- Interactive demo on Google Colab

*Figure: overall architecture of PERIN.*



PERIN is a universal sentence-to-graph neural network architecture for modeling semantic representations from input sequences.

The main characteristics of our approach are:

- **Permutation-invariant model:** PERIN is, to the best of our knowledge, the first graph-based semantic parser that predicts all nodes at once in parallel and trains them with a permutation-invariant loss function (see the sketch after this list).
- **Relative encoding:** We present a substantial improvement of the relative encoding of node labels, which allows the use of a richer set of encoding rules.
- **Universal architecture:** Our work presents a general sentence-to-graph pipeline adaptable to specific frameworks only by adjusting the pre-processing and post-processing steps.
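
To make the first point concrete, here is a minimal, self-contained sketch of a permutation-invariant loss, assuming a query-based decoder. It is illustrative only and not PERIN's exact formulation: the function name, the query/label dimensions, and the use of SciPy's Hungarian solver are choices made for this example.

```python
# Illustrative sketch (NOT the exact PERIN loss): predicted node queries
# are matched to gold nodes with the Hungarian algorithm before computing
# cross-entropy, so the loss is invariant to the order of the gold nodes.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def permutation_invariant_loss(logits: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """logits: (num_queries, num_labels) scores; gold: (num_nodes,) label ids."""
    log_probs = logits.log_softmax(dim=-1)     # (Q, L)
    # cost[i, j] = negative log-likelihood of gold node j under query i
    cost = -log_probs[:, gold]                 # (Q, N)
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    # train only on the optimally matched (query, gold node) pairs
    return F.cross_entropy(logits[rows], gold[cols])

logits = torch.randn(5, 10, requires_grad=True)  # 5 queries, 10 label classes
gold = torch.tensor([3, 7])                      # 2 gold nodes
permutation_invariant_loss(logits, gold).backward()
```

A full implementation would additionally push the unmatched queries towards a dedicated "no node" class; see the paper for the actual formulation.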

Our model was ranked among the two winning systems in both the cross-framework and the cross-lingual tracks of MRP 2020, and it significantly improved the accuracy of semantic parsing over the previous year's MRP 2019 systems.



This repository provides the official PyTorch implementation of our paper "ÚFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN" together with pretrained base models for all five frameworks from MRP 2020: AMR, DRG, EDS, PTG and UCCA.



## How to run

### 🐾 Clone the repository and install the Python requirements

```sh
git clone https://github.com/ufal/perin.git
cd perin

pip3 install -r requirements.txt
pip3 install git+https://github.com/cfmrp/mtool.git#egg=mtool
```

### 🐾 Download and pre-process the dataset

Download the treebanks into `${data_dir}` and split the cross-lingual datasets into training and validation parts by running:

```sh
./scripts/split_dataset.sh "path_to_a_dataset.mrp"
```

Preprocess and cache the dataset (computing the relative encodings can take up to several hours):

```sh
python3 preprocess.py --config config/base_amr.yaml --data_directory ${data_dir}
```
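
As a rough intuition for these relative encodings (a toy sketch only; PERIN's actual rule inventory is richer and is described in the paper), a node label can be predicted as a rewrite rule applied to the anchored token rather than as a raw string, so a single rule generalizes across many tokens:

```python
# Toy illustration of relative label encoding: encode a label as a rule
# "delete the last k characters of the token, then append a suffix".
def relative_rule(token: str, label: str) -> str:
    i = 0
    while i < min(len(token), len(label)) and token[i] == label[i]:
        i += 1  # length of the longest common prefix
    return f"d{len(token) - i}+{label[i:]}"

def apply_rule(token: str, rule: str) -> str:
    delete, append = rule[1:].split("+", 1)
    return token[: len(token) - int(delete)] + append

rule = relative_rule("walked", "walk")       # -> "d2+"
assert apply_rule("walked", rule) == "walk"
assert apply_rule("talked", rule) == "talk"  # the rule generalizes
```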

You should also download CzEngVallex if you are going to parse PTG:

```sh
curl -O https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-1512/czengvallex.zip
unzip czengvallex.zip
rm frames_pairs.xml czengvallex.zip
```

### 🐾 Train

To train a shared model for English and Chinese AMR, run the following command. Configurations for the other frameworks are located in the `config` folder.

```sh
python3 train.py --config config/base_amr.yaml --data_directory ${data_dir} --save_checkpoints --log_wandb
```

Note that the companion file is needed only to provide the lemmatized forms, so it is also possible to train without it (though that will most likely hurt the accuracy of label prediction); just set the companion paths to `None`.

### 🐾 Inference

You can run inference on the validation and test datasets with:

```sh
python3 inference.py --checkpoint "path_to_pretrained_model.h5" --data_directory ${data_dir}
```
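
The parser exchanges graphs in the MRP interchange format, where each line of an `.mrp` file is one JSON-encoded graph. A quick way to inspect predictions (the file name below is a placeholder):

```python
import json

# Print a one-line summary for each predicted graph.
with open("prediction.mrp", encoding="utf-8") as f:
    for line in f:
        graph = json.loads(line)
        print(graph["id"], graph["framework"],
              len(graph.get("nodes", [])), "nodes,",
              len(graph.get("edges", [])), "edges")
```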

## Citation

```bibtex
@inproceedings{Sam:Str:20,
  author    = {Samuel, David and Straka, Milan},
  title     = {{{\'U}FAL} at {MRP}~2020: {P}ermutation-Invariant Semantic Parsing in {PERIN}},
  booktitle = {Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing},
  address   = {Online},
  pages     = {53--64},
  year      = {2020}
}
```