Currently, I am experimenting with BART, an encoder-decoder model that is mostly used in seq2seq form (mainly for summarization and translation). However, I am also able to train the model for question answering (generative QA).
On a very rudimentary run on the askathon data (128 samples), I fine-tuned and deliberately overfitted the vanilla BART model to see if it can work nicely; a minimal sketch of the generative QA setup is shown below.
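For reference, this is roughly how generative QA with BART looks. The checkpoint name, the `question: ... context: ...` input format, and the sample data are illustrative assumptions, not the exact training setup used here:

```python
# Sketch of generative QA framed as seq2seq with BART.
from transformers import BartForConditionalGeneration, BartTokenizerFast

model_name = "facebook/bart-base"  # assumed checkpoint; the actual run may differ
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

question = "What does the askathon dataset contain?"  # hypothetical sample
context = "The askathon dataset has 128 QA samples."  # hypothetical sample

# Encode question + context as the source sequence, decode the answer text.
inputs = tokenizer(
    f"question: {question} context: {context}",
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```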
Then I used evalem to evaluate both models and generate the comparison table below.
| metric | askathon-tuned | bart-vanilla |
| --- | --- | --- |
| MeteorMetric | 0.456912 | 0.130023 |
| BleuMetric | 0.211337 | 0.0652976 |
| F1Metric | 0.530609 | 0.142459 |
| AccuracyMetric | 0.494754 | 0.0874828 |
| RougeMetric | 0.48671 | 0.101733 |
| BertScore | 0.690443 | 0.41951 |
| BartScore | -3.49565 | -5.38511 |
| ExactMatchMetric | 0.421875 | 0 |
### evalem code

The evalem code is a monkey-patch where I have created a new temporary component for generative QA, in three parts (a rough sketch follows the list):
I) Generative QA component (temporary for now)
II) Connecting the components
III) Askathon dataloader
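Since the original snippets aren't reproduced here, below is a hedged, self-contained sketch of what the three parts might look like. The `GenerativeQAWrapper` class, the `askathon.json` path and schema, and the use of HuggingFace `evaluate` for scoring are all assumptions standing in for the actual evalem monkey-patch:

```python
# Rough sketch of the three parts above; evalem's real component interface is
# not reproduced here, so a plain wrapper class stands in for it.
import json
from typing import List

import evaluate  # HuggingFace evaluate, used here in place of evalem's metrics
from transformers import BartForConditionalGeneration, BartTokenizerFast


# I) Generative QA component (hypothetical stand-in for the evalem component)
class GenerativeQAWrapper:
    def __init__(self, model_name: str) -> None:
        self.tokenizer = BartTokenizerFast.from_pretrained(model_name)
        self.model = BartForConditionalGeneration.from_pretrained(model_name)

    def predict(self, questions: List[str], contexts: List[str]) -> List[str]:
        texts = [f"question: {q} context: {c}" for q, c in zip(questions, contexts)]
        batch = self.tokenizer(
            texts, return_tensors="pt", padding=True, truncation=True, max_length=512
        )
        output_ids = self.model.generate(**batch, num_beams=4, max_new_tokens=64)
        return self.tokenizer.batch_decode(output_ids, skip_special_tokens=True)


# III) Askathon dataloader (assumed JSON schema: question/context/answer keys)
def load_askathon(path: str = "askathon.json"):
    with open(path) as f:
        data = json.load(f)
    return (
        [s["question"] for s in data],
        [s["context"] for s in data],
        [s["answer"] for s in data],
    )


# II) Connecting the components: predict, then score predictions vs references
if __name__ == "__main__":
    questions, contexts, references = load_askathon()
    wrapper = GenerativeQAWrapper("facebook/bart-base")  # or the askathon-tuned checkpoint
    predictions = wrapper.predict(questions, contexts)

    for name in ("meteor", "rouge"):
        metric = evaluate.load(name)
        print(name, metric.compute(predictions=predictions, references=references))
```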
cc: @muthukumaranR @xhagrg