feat: compute buffer size ahead of time #23

Draft
achingbrain wants to merge 1 commit into main from feat/compute-buffer-size-before-encode

Conversation

achingbrain (Member) commented Mar 31, 2022

The current implementation loops over message fields; each field encoder creates a `Uint8Array`, which is added to a `Uint8ArrayList` and stitched together at the end of the process.
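
A minimal sketch of this append-then-concatenate pattern (the `encodeField` helper and the single-byte tag/length encoding are simplified stand-ins for illustration, not the actual protons field encoders):

```ts
// Simplified field encoder: writes the tag and length as single bytes,
// which only holds for field numbers < 16 and payloads < 128 bytes.
// Real encoders use varints. Hypothetical helper, not protons code.
function encodeField (fieldNumber: number, value: Uint8Array): Uint8Array {
  const tag = (fieldNumber << 3) | 2 // wire type 2: length-delimited
  const out = new Uint8Array(2 + value.byteLength)
  out[0] = tag
  out[1] = value.byteLength
  out.set(value, 2)
  return out
}

function encodeMessage (fields: Array<[number, Uint8Array]>): Uint8Array {
  // each field encoder allocates its own Uint8Array...
  const chunks = fields.map(([n, v]) => encodeField(n, v))

  // ...and everything is stitched together into one buffer at the end
  const total = chunks.reduce((acc, c) => acc + c.byteLength, 0)
  const result = new Uint8Array(total)
  let offset = 0

  for (const chunk of chunks) {
    result.set(chunk, offset)
    offset += chunk.byteLength
  }

  return result
}
```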

Other implementations compute the expected length of the serialized message, allocate a buffer of that size, then have the encoders write their data into it at the correct offsets.
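
For comparison, a minimal sketch of the precompute-then-write pattern, using hypothetical `sizeOfField` and `writeField` helpers that mirror the simplified `encodeField` above (again, not the actual protons code):

```ts
// Size of one field under the simplified single-byte tag/length scheme.
function sizeOfField (value: Uint8Array): number {
  return 2 + value.byteLength // tag byte + length byte + payload
}

// Writes one field into a pre-allocated buffer, returning the new offset.
function writeField (buf: Uint8Array, offset: number, fieldNumber: number, value: Uint8Array): number {
  buf[offset] = (fieldNumber << 3) | 2 // wire type 2: length-delimited
  buf[offset + 1] = value.byteLength
  buf.set(value, offset + 2)
  return offset + 2 + value.byteLength
}

function encodeMessagePreSized (fields: Array<[number, Uint8Array]>): Uint8Array {
  // 1. compute the expected length of the serialized message
  const size = fields.reduce((acc, [, v]) => acc + sizeOfField(v), 0)

  // 2. allocate a buffer of that size up front
  const result = new Uint8Array(size)

  // 3. have each encoder write its data at the correct offset
  let offset = 0
  for (const [n, v] of fields) {
    offset = writeField(result, offset, n, v)
  }

  return result
}
```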

This PR makes protons compute the final message buffer size first in the same way.

The thing is, it doesn't seem to make much difference to performance.

Before:

Running "Encode/Decode" suite...
Progress: 100%

  pbjs:
    12 166 ops/s, ±3.92%   | 5.12% slower

  protons:
    9 755 ops/s, ±2.19%    | slowest, 23.93% slower

  protobufjs:
    12 823 ops/s, ±2.02%   | fastest

Finished 3 cases!
  Fastest: protobufjs
  Slowest: protons
```

After:

Running "Encode/Decode" suite...
Progress: 100%

  pbjs:
    11 866 ops/s, ±3.43%   | 2.05% slower

  protons:
    9 356 ops/s, ±2.45%    | slowest, 22.77% slower

  protobufjs:
    12 114 ops/s, ±2.16%   | fastest

Finished 3 cases!
  Fastest: protobufjs
  Slowest: protons
```

achingbrain (Member, Author) commented

Opened as a draft: it's a useful performance comparison, but we may not want to merge it.

Longer term, we may just want to rip the guts out of another implementation to handle serialization/deserialization, as it's the least interesting part of this module.

achingbrain force-pushed the feat/compute-buffer-size-before-encode branch from 82cfa14 to fecfedd on March 31, 2022 08:39
achingbrain changed the title from "chore: remove long dep" to "feat: compute buffer size ahead of time" on Mar 31, 2022
achingbrain force-pushed the main branch 2 times, most recently from ecfb76d to 942e050 on January 31, 2024 11:23