
How to generate AMR from a file #13

Open
Wangpeiyi9979 opened this issue Dec 24, 2019 · 5 comments

Comments

@Wangpeiyi9979

Hi, thanks for your nice work! I want to accomplish a simple task.

If I just have a file as follows:

sentence1
sentence2
....

How can I use the pre-trained model to generate AMRs for these sentences directly?


SimonWesterlind commented Jan 2, 2020

Hi @Wangpeiyi9979 !

This requires a few edits to the code as it currently stands. This is not an exhaustive list of everything you might need to do, but here are a few tips:

  • Add the pre-trained model to /data/AMR.
  • Edit the pre-trained model's config.json file, as the file paths in it are wrong; they should all begin with /data.
  • Write some code to convert those sentences into the following format:
# AMR release; corpus: lpp; section: dev; number of AMRs: 3 (generated on Fri Nov 1, 2019 at 21:03:52)

# ::id lpp_1943.1 ::date 2019-11-18 14:58:12.957282 ::annotator Annotator ::preferred
# ::snt This is a sentence.
# ::save-date 2019-11-12 12:24:17.523046
(d / dummy)

# ::id separator_id_of_result ::date 2019-11-18 14:58:12.957282 ::annotator Blitzy ::preferred
# ::snt This is the next sentence.
# ::save-date 2019-11-12 12:24:17.523046
(d / dummy)
  • Then you should be able to run it all in the following sequence: your code that converts the sentence file to AMR format, prepare-data, feature-annotation, data-preprocessing, data-postprocessing.

Best of luck! :)


gghati commented Jul 2, 2020

@SimonWesterlind Thanks for answering! If you have a script to convert sentences to the given format, please share it with us :)


bjascob commented Jul 2, 2020

Here's a simple script to convert sentences to an AMR format.

infn  = 'data/sents.txt'
outfn = 'data/sents.txt.amr'

# Load the file
print('Reading ', infn)
sents = []
with open(infn) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        sents.append(line)

# Create a dummy amr file
print('Writing ', outfn)
with open(outfn, 'w') as f:
    for i, sent in enumerate(sents):
        f.write('# ::id sents_id.%d\n' % i)
        f.write('# ::snt %s\n' % sent)
        f.write('(d / dummy)\n')
        f.write('\n')
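As a quick sanity check (the two sentences here are just illustrative), the same line template the script writes can be exercised on an in-memory list:

```python
# Quick check: build the same dummy-AMR text the script above writes,
# using the identical line template, for two example sentences.
sents = ['This is a sentence.', 'This is the next sentence.']

output = ''
for i, sent in enumerate(sents):
    output += '# ::id sents_id.%d\n' % i
    output += '# ::snt %s\n' % sent
    output += '(d / dummy)\n\n'

print(output)
```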

After you do this you still need to...

  1. Run the feature annotator script on the file. This is Readme step 3. You'll need to edit the script and remove {dev_data} and {train_data} and replace test_data with the name of your new file.
  2. Run the preprocessing script on it (Readme step 4) with similar modifications to the script as above. Note that the script only runs the "text_anonymizer" on the test data; the "recategorizer" does not need to run, since it is only applied to the train and dev data.
  3. Run the model to do prediction (Readme step 6).
  4. Run the post-processor (Readme step 7). I would recommend commenting out the "Wikification" section; it's a little more complicated to get working, and the online Spotlight server that the script uses is very unreliable.


gghati commented Jul 2, 2020

@bjascob Thanks for the script! :) 👍 I had to edit it to make it work; you can view my version at https://github.com/gauravghati/stog/blob/master/scripts/sentence-amr.py

The format needed for the input is:

# ::id sents_id.4
# ::snt Zero is a beautiful number.
# ::tokens ['Zero', 'is', 'a', 'beautiful', 'number', '.']
# ::lemmas ['Zero', 'is', 'a', 'beautiful', 'number', '.']
# ::pos_tags ['NNP', 'VBZ', 'DT', 'JJ', 'NN', '.']
# ::ner_tags ['GPE', 'O', 'O', 'O', 'O', 'O']  
(d / dummy)

# ::id sents_id.1
# ::snt But that did not really surprise me much .
# ::tokens ['But', 'that', 'did', 'not', 'really', 'surprise', 'me', 'much', '.']
# ::lemmas ['But', 'that', 'did', 'not', 'really', 'surprise', 'me', 'much', '.']
# ::pos_tags ['CC', 'DT', 'VBD', 'RB', 'RB', 'VB', 'PRP', 'JJ', '.']
# ::ner_tags ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
(d / dummy)

tokens, lemmas, pos_tags, and ner_tags are also needed in the input!
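A hedged sketch of how those fields line up (the helper name and the hand-annotated values are illustrative, not from the linked script); in practice the annotation lists should come from an external tagger such as the repo's Stanford-based annotator:

```python
# Hypothetical helper: renders one pre-annotated sentence into the
# header block shown above. The tokens/lemmas/pos_tags/ner_tags lists
# must be supplied by an external tagger; they are hand-written here.
def make_amr_entry(idx, snt, tokens, lemmas, pos_tags, ner_tags):
    # All annotation lists must be token-aligned.
    assert len(tokens) == len(lemmas) == len(pos_tags) == len(ner_tags)
    lines = [
        '# ::id sents_id.%d' % idx,
        '# ::snt %s' % snt,
        '# ::tokens %s' % tokens,    # %s on a list prints ['Zero', 'is', ...]
        '# ::lemmas %s' % lemmas,
        '# ::pos_tags %s' % pos_tags,
        '# ::ner_tags %s' % ner_tags,
        '(d / dummy)',
    ]
    return '\n'.join(lines)

entry = make_amr_entry(
    4,
    'Zero is a beautiful number.',
    ['Zero', 'is', 'a', 'beautiful', 'number', '.'],
    ['Zero', 'is', 'a', 'beautiful', 'number', '.'],
    ['NNP', 'VBZ', 'DT', 'JJ', 'NN', '.'],
    ['GPE', 'O', 'O', 'O', 'O', 'O'],
)
print(entry)
```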


bjascob commented Jul 4, 2020

The annotator script (see Readme step 3) creates the tokens, lemmas, etc. This uses the Stanford NLP system. Using NLTK to annotate will likely give you suboptimal results, since the internals are all set up to work with the Stanford named-entity tags, not NLTK's (and NLTK is a fairly poor parser).

Also, don't forget to run the other pre-processing step before generating; otherwise things won't generate correctly (even though they will probably run without an error).
