Hi, thanks for the great work!
It seems that accessing metadata for every element on the page at prediction time is a very time-consuming task (judging by my personal experience with Selenium).
Therefore I have a question: did you try to train models that use only pure HTML features (CSS class names, element attributes, XPath encoding) rather than font information, bounding boxes, etc.?
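For context, here is a rough sketch of the kind of render-free extraction I mean: pulling tag names, CSS classes, and an indexed XPath for each element straight from the markup, using only the standard-library parser. The feature names and dict format are illustrative assumptions, not this repo's API.

```python
from html.parser import HTMLParser

class HtmlFeatureExtractor(HTMLParser):
    """Collect HTML-only features (tag, classes, XPath) per element.

    Simplified sketch: void elements like <br>/<img> are not handled.
    """
    def __init__(self):
        super().__init__()
        self.stack = []     # (tag, sibling-index) pairs along the open path
        self.counts = [{}]  # per-depth tag counters for XPath subscripts
        self.features = []  # one feature dict per element

    def handle_starttag(self, tag, attrs):
        depth = self.counts[-1]
        depth[tag] = depth.get(tag, 0) + 1
        self.stack.append((tag, depth[tag]))
        xpath = "/" + "/".join(f"{t}[{i}]" for t, i in self.stack)
        attr_map = dict(attrs)
        self.features.append({
            "tag": tag,
            "classes": attr_map.get("class", "").split(),
            "xpath": xpath,
        })
        self.counts.append({})

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1][0] == tag:
            self.stack.pop()
            self.counts.pop()

extractor = HtmlFeatureExtractor()
extractor.feed('<div class="price main"><span>42</span></div>')
# extractor.features now holds one dict per element with tag/classes/xpath
```

None of this needs a browser, which is the point: no Selenium round-trips per element.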
Hello and thanks for the kind words!
We have tried that a little, but we haven't invested much time in it since, from what we've seen, adding visual features (especially bounding boxes) substantially boosts the model's predictive performance. If you find features that are available in the HTML alone (without rendering the page) and offer competitive performance, please share your results! I'd love to see alternatives, especially since we didn't spend a lot of time on feature engineering.
The first thing that comes to mind is XPath embedding, as in MarkupLM. I will try that on your dataset eventually.
By the way, I think it would be a great idea to see how MarkupLM performs on your dataset, to learn whether a GNN can outperform large language models.
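To make the XPath-embedding idea concrete, here is a minimal sketch in the spirit of MarkupLM: an XPath is split into (tag, subscript) units, each unit gets a depth-specific tag embedding plus a subscript embedding, and the unit vectors are summed. The vocabulary, depth limit, and dimensions below are illustrative assumptions, not MarkupLM's actual configuration.

```python
import re
import numpy as np

# Assumed toy setup: tiny tag vocabulary, random (untrained) embedding tables.
TAG_VOCAB = {"<unk>": 0, "html": 1, "body": 2, "div": 3, "span": 4}
MAX_DEPTH, MAX_SUBSCRIPT, DIM = 50, 100, 16

rng = np.random.default_rng(0)
tag_emb = rng.standard_normal((MAX_DEPTH, len(TAG_VOCAB), DIM))
sub_emb = rng.standard_normal((MAX_DEPTH, MAX_SUBSCRIPT, DIM))

def embed_xpath(xpath: str) -> np.ndarray:
    """Sum per-depth tag and subscript embeddings for one element's XPath."""
    # "/html/body/div[1]" -> [("html", ""), ("body", ""), ("div", "1")]
    units = re.findall(r"/(\w+)(?:\[(\d+)\])?", xpath)
    vec = np.zeros(DIM)
    for depth, (tag, sub) in enumerate(units[:MAX_DEPTH]):
        tag_id = TAG_VOCAB.get(tag, TAG_VOCAB["<unk>"])
        sub_id = min(int(sub or 1), MAX_SUBSCRIPT - 1)  # missing subscript -> 1
        vec += tag_emb[depth, tag_id] + sub_emb[depth, sub_id]
    return vec

v = embed_xpath("/html/body/div[1]/span[2]")
```

In a real model the tables would be learned jointly with the rest of the network and the vector added to each element's token/node representation, but the encoding step itself is as cheap as shown: no rendering required.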