LibWeb: Make character reference tokenization more spec-compliant and efficient #3011
Open · squeek502 wants to merge 4 commits into LadybirdBrowser:master from squeek502:named-character-references
+5,217 −4,540
Conversation
This already passes, so there's no reason to skip it anymore
Previously, if the NumericCharacterReferenceEnd state was reached when current_input_character was None, the DONT_CONSUME_NEXT_INPUT_CHARACTER macro would restore back before the EOF and allow the next state (after SWITCH_TO_RETURN_STATE) to proceed with the last digit of the numeric character reference. For example, with something like `&#1111`, before this commit the output would incorrectly be `<code point with the value 1111>1` instead of just `<code point with the value 1111>`.

Instead of putting the `if (current_input_character.has_value())` check inside NumericCharacterReferenceEnd directly, it was added to DONT_CONSUME_NEXT_INPUT_CHARACTER, because all usages of the macro benefit from this check, even if the other existing usage sites don't exhibit any bugs without it:

- In MarkupDeclarationOpen, if current_input_character is EOF, then the previous character is always `!`, so restoring and then checking forward for strings like `--`, `DOCTYPE`, etc. won't match, and the BogusComment state will run one extra time (once for `!` and once for EOF) with no practical consequences. With the `has_value()` check, BogusComment will only run once with EOF.
- In AfterDOCTYPEName, ConsumeNextResult::RanOutOfCharacters can only occur when stopping at the insertion point, and because of how the code is structured, it is guaranteed that current_input_character is either `P` or `S`, so the `has_value()` check is irrelevant.
Instead of just A-F/a-f, any character A-Z/a-z was being accepted as a valid hexadecimal digit.
Hello! One or more of the commit messages in this PR do not match the Ladybird code submission policy, please check the |
squeek502 force-pushed the named-character-references branch from 136c05a to d8be1ca on December 22, 2024 at 21:10.
squeek502 changed the title from "LibWeb: Make character reference matching more spec-compliant and efficient" to "LibWeb: Make character reference tokenization more spec-compliant and efficient" on Dec 22, 2024.
squeek502 force-pushed the named-character-references branch from d8be1ca to 50ea5b4 on December 22, 2024 at 21:19.
There are two changes happening here: a correctness fix and an optimization. In theory they are unrelated, but the optimization actually paves the way for the correctness fix.

Before this commit, the HTML tokenizer would attempt to look for named character references by checking from after the `&` until the end of m_decoded_input, which meant that it was unable to recognize things like named character references that are inserted via `document.write` one byte at a time. For example, if `&notin;` was written one byte at a time with `document.write`, then the tokenizer would only check against `n` (since that's all that would exist at the time of the check) and therefore erroneously conclude that it was an invalid named character reference.

This commit modifies the approach taken for named character reference matching by using a trie-like structure (specifically, a deterministic acyclic finite state automaton, or DAFSA), which allows for efficiently matching one character at a time, and therefore the tokenizer is able to pick up matching where it left off after each code point is consumed.

Note: Because it's possible for a partial match to not actually develop into a full match (e.g. `&notindo`, which could lead to `&notindot;`), some backtracking is performed after the fact in order to only consume the code points within the longest match found (e.g. `&notindo` would backtrack to `&not`).

With this new approach, `document.write` being called one byte at a time is handled correctly, which allows for passing more WPT tests, the most directly relevant being `/html/syntax/parsing/html5lib_entities01.html` and `/html/syntax/parsing/html5lib_entities02.html` when run with `?run_type=write_single`. Additionally, the implementation now better conforms to the language of the spec (and resolves a FIXME) because exactly the matched characters are consumed and nothing more, so SWITCH_TO can be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:

- Instead of a linear search using `starts_with`, the usage of a DAFSA means that it is always aware of which characters can lead to a match at any given point, and will bail out whenever a match is no longer possible.
- The DAFSA is able to take advantage of the note in section `13.5 Named character references` that says "This list is static and will not be expanded or changed in the future." and tailor its Node struct accordingly to tightly pack each node's data into 32 bits. Together with the inherent DAFSA property of redundant node deduplication, the amount of data stored for named character reference matching is minimized.

In my testing:

- A benchmark tokenizing an arbitrary set of HTML test files was about 1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named character references mixed in with truncated named character references and arbitrary ASCII characters/ampersands runs about 8x faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.

Some technical details: A DAFSA (deterministic acyclic finite state automaton) is essentially a trie flattened into an array, but it also uses techniques to minimize redundant nodes. This provides fast lookups while minimizing the required data size, but normally does not allow for associating data with each word. However, by adding a count of the number of possible words from each node, it becomes possible to also use it to achieve minimal perfect hashing for the set of words (which allows going from word -> unique index as well as unique index -> word). This allows us to store a second array of data so that the DAFSA can be used as a lookup for e.g. the associated code points.
squeek502 force-pushed the named-character-references branch from 50ea5b4 to 59f9fa6 on December 23, 2024 at 14:47.
This PR fixes a few bugs in numeric character reference tokenization, but the main focus is on named character references (both in terms of correctness and performance).
Tons of details in the commit messages, so see those for the full explanation. This implementation is also based on an implementation written in Zig, and there are more nitty-gritty details available in its README if you're interested.
The tl;dr is that a DAFSA is generated at compile-time and used for efficient codepoint-by-codepoint named character reference matching.
Note: I have not touched the Swift implementation of the tokenizer, and in fact will have introduced a compile error in the Swift version with my changes. Advice on how to handle the Swift side of things is appreciated.
In a benchmark using an arbitrary set of test files found online, the tokenizer is about 1.23x faster (2070ms to 1682ms).

In a benchmark hyper-focused on named character references (see `named-char-test.html` for the test file used), the tokenizer is ~8x faster (758ms to 93ms).

The amount of data stored for named character reference matching has also been reduced by around 95KiB.
With all the changes in this PR together, many WPT subtests now pass, and AFAICT all character reference-related subtests are now passing; see the WPT results for `/html/syntax/parsing` (comparing master to this branch).

There are probably other newly passing subtests elsewhere (one I'm aware of is `html/webappapis/dynamic-markup-insertion/document-write/041.html`).