
Comparison diagram isn't very informative and makes pest look bad. #5

Open
agausmann opened this issue May 4, 2019 · 7 comments
Labels
good first issue Good for newcomers help wanted Extra attention is needed

Comments

@agausmann
Copy link

  • It only lists two competitors, and only one of those is in the parser family. Benchmarks against more of the common parsers like combine and lalrpop would also be useful.

  • Pest is listed as the slowest solution, with barely any explanation of how it could be better. Making excuses for why the others run faster is a weak argument; highlighting Pest's strengths would be a lot better.

@dragostis
Copy link
Contributor

Thanks for raising this issue. Here's my stance on the matter: both parsers listed in the comparison offer great performance, and pest's goal is not to compete with them, but merely to show that pest can offer performance in the same order of magnitude.

Now, things will be different in 3.0. I have rewritten the generator and I'm working on the optimizer with great results.

@LeoDog896
Copy link
Contributor

I can't find any new benchmarks for pest - is there any updated information on the performance benchmarks mentioned above?

@lwandrebeck
Copy link

lwandrebeck commented Mar 4, 2023

There's a nom_benchmark repository in geal/gcouprie's (nom's author) GitHub that could be used (it runs nom, pest, and one or two others as far as I can remember), but it's quite outdated and needs some work.
Otherwise, maybe we should have our own benchmark suite to keep good track of pest's performance over time? I'd be glad to give a hand on that front.

@LeoDog896
Copy link
Contributor

Good idea! It'd be nice if we could get an official repo created for that.
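A dedicated suite would likely use criterion, but the basic shape of such a benchmark can be sketched with the standard library alone. The sketch below is a minimal, hypothetical timing harness: the `bench` helper and the CSV-summing workload are stand-ins of my own, not part of pest's actual benchmarks; a real suite would call into a pest-generated parser (or a nom combinator) inside the closure instead.

```rust
use std::time::Instant;

// Minimal std-only timing harness (a real suite would use criterion for
// statistical rigor, warm-up control, and outlier detection).
fn bench<F: FnMut()>(name: &str, iters: u32, mut f: F) -> f64 {
    // Warm up so the first measured iterations aren't cold.
    for _ in 0..iters / 10 {
        f();
    }
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    let per_iter = start.elapsed().as_secs_f64() / iters as f64;
    println!("{name}: {:.3} µs/iter", per_iter * 1e6);
    per_iter
}

fn main() {
    // Stand-in workload; a real comparison would parse the same input with
    // pest, nom, etc. inside this closure.
    let input = "12,34,56,78,90";
    let per_iter = bench("parse_csv_ints", 10_000, || {
        let sum: u64 = input.split(',').map(|s| s.parse::<u64>().unwrap()).sum();
        assert_eq!(sum, 270);
    });
    assert!(per_iter >= 0.0);
}
```

Running the same harness over identical inputs for each parser would at least give a like-for-like trend over time, even before a criterion-based suite exists.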

@lwandrebeck
Copy link

Edit: https://github.com/rust-bakery/parser_benchmarks for reference

@tomtau
Copy link
Contributor

tomtau commented Nov 29, 2023

One more repo that's more up to date: https://github.com/rosetta-rs/parse-rosetta-rs#results

@corneliusroemer
Copy link

I was about to open an issue, only to realize that I had misread 'in somewhere below' as 'somewhere in between'.

"In somewhere below" isn't really normal English, I'd say, so my brain misparsed it and then thought the graph or text was wrong.


No branches or pull requests

6 participants