
Compressor Optimizer #367

Open · wants to merge 17 commits into main
Conversation

@Tabaie (Contributor) commented Nov 30, 2024

This PR implements issue #366.

The compressor now attempts to recompress everything from scratch upon encountering a full blob, before attempting to bypass compression altogether.
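For intuition only, here is a small self-contained Go toy (not the PR's code; compress/flate stands in for the actual compressor, and the block contents are made up) showing why recompressing the whole payload from scratch can come out smaller than a blob built incrementally, where the compressor must flush after every append to know the current blob size:

```go
// Toy demonstration, NOT the real blob_maker.go code: an incremental
// compressor flushes after each block to measure the blob size, which costs
// compression ratio compared to recompressing everything in one pass.
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

func main() {
	// Fabricated blocks standing in for serialized L2 blocks.
	blocks := make([][]byte, 20)
	for i := range blocks {
		blocks[i] = bytes.Repeat([]byte(fmt.Sprintf("tx-%d;", i)), 200)
	}

	// Incremental: append block by block, flushing so the size is known
	// after every append.
	var inc bytes.Buffer
	w, _ := flate.NewWriter(&inc, flate.BestCompression)
	for _, b := range blocks {
		w.Write(b)
		w.Flush()
	}
	w.Close()

	// From scratch: recompress the whole payload in one go.
	var all bytes.Buffer
	w2, _ := flate.NewWriter(&all, flate.BestCompression)
	for _, b := range blocks {
		w2.Write(b)
	}
	w2.Close()

	fmt.Printf("incremental: %d bytes, from scratch: %d bytes\n",
		inc.Len(), all.Len())
}
```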

Checklist

  • I wrote new tests for my new core changes.
  • I have successfully run tests, style checker and build against my new changes locally.
  • I have informed the team of any breaking changes if there are any.

@Tabaie Tabaie added the enhancement, performances, and Data compressor labels Nov 30, 2024
@Tabaie Tabaie self-assigned this Nov 30, 2024
@Tabaie Tabaie linked an issue Nov 30, 2024 that may be closed by this pull request
@codecov-commenter commented Nov 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.40%. Comparing base (972cc85) to head (02a6635).

Additional details and impacted files
@@             Coverage Diff              @@
##               main     #367      +/-   ##
============================================
+ Coverage     68.22%   70.40%   +2.18%     
+ Complexity     1128     1063      -65     
============================================
  Files           319      305      -14     
  Lines         12795    12186     -609     
  Branches       1277     1165     -112     
============================================
- Hits           8729     8580     -149     
+ Misses         3540     3121     -419     
+ Partials        526      485      -41     
Flag | Coverage Δ | *Carryforward flag
hardhat | 98.59% <ø> (-0.11%) ⬇️ |
kotlin | 68.09% <ø> (+2.24%) ⬆️ | Carried forward from 94e9da9

*This pull request uses carry forward flags.

see 54 files with indirect coverage changes

@gbotrel (Contributor) commented Dec 3, 2024

So to sum up: this is a non-breaking change. We could compress better (by ~7%) if we re-compressed the full blob each time we attempt to append a block, but doing so is too slow, so you propose to keep the original method, which would give a "preliminary result", and change the result behind the scenes between calls?

Wouldn't this have some side effects, @jpnovais? (I.e. the compressor could say "we compressed block 1, it takes N bytes, the current blob is now at 100 kB", then recompress it with more context and update its internal state to "the current blob is 98 kB" without notifying the coordinator.)

Re implementation: before introducing an async optimizer, I'd prefer to understand the perf constraints better, i.e. how long it takes now, how long it would take if we recompressed the whole blob on each append, and within what limits we need to operate. If we say the compressor could take as much as XXX ms, then we may just want simpler, cleaner code and kill this async pattern.

@jpnovais (Collaborator) commented Dec 3, 2024

My input on this optimization is:

Context:

  • The coordinator does not actively keep track of the current blob size; that is the responsibility of the compressor, which is aware of the blob limit.
  • The coordinator calls fun CanWrite(data: ByteArray, data_len: Int): Boolean before calling Write; if it returns false, the coordinator knows the blob is full and starts a new one.

My take based on the above:

  • As mentioned, async can be problematic, so I would avoid it. Also, I don't think we need it here.
  • My suggestion: when the coordinator calls CanWrite, before returning false the compressor would try the full compression and see whether the extra block then fits (a rough sketch follows this comment).
    • pros:
      • lazy computation: it only does the "full re-compression" when the blob reaches its limit;
      • 100% transparent to the coordinator and keeps the same API, which leaves room for further internal performance optimizations.

@Tabaie @gbotrel WDYT?
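A rough Go sketch of that control flow, purely illustrative: the blobCompressor interface and both of its methods are invented names, not the real BlobMaker API; only the split (cheap check first, expensive full recompression only when about to report the blob as full) mirrors the suggestion above.

```go
// Sketch only: hypothetical interface, not the actual compressor API.
package sketch

// blobCompressor is an invented abstraction of the compressor state.
type blobCompressor interface {
	// FitsIncrementally is the cheap check against the incremental state.
	FitsIncrementally(data []byte) bool
	// FitsAfterFullRecompression recompresses everything written so far plus
	// data from scratch and reports whether the result fits in the blob.
	FitsAfterFullRecompression(data []byte) bool
}

// canWrite pays for the expensive full recompression only when the cheap
// check is about to report the blob as full, so earlier calls stay fast.
func canWrite(c blobCompressor, data []byte) bool {
	if c.FitsIncrementally(data) {
		return true
	}
	return c.FitsAfterFullRecompression(data)
}
```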

@gbotrel (Contributor) commented Dec 3, 2024

Yep, doing it only when the blob is full would use less CPU. But that last call to CanWrite may be 10x slower than the previous calls.
So what are the bounds we need to operate in, perf-wise? If a call to CanWrite takes 500 ms, is that acceptable? (Not saying it will; I just want a rough order of magnitude for the bound.)

@jpnovais (Collaborator) commented Dec 4, 2024

> So what are the bounds we need to operate in, perf-wise? If a call to CanWrite takes 500 ms, is that acceptable? (Not saying it will; I just want a rough order of magnitude for the bound.)

It's ok to have a call to CanWrite that takes 500 ms at the end of the blob, as long as the preceding calls are not affected time-wise.

@Tabaie (Contributor, Author) commented Dec 6, 2024

I agree, doing it synchronously at the end is a good idea. In fact it's similar to the "no compress" logic we already have.

@gbotrel (Contributor) previously approved these changes on Dec 9, 2024 and left a comment:


LGTM 👍 I would probably just add one or two tests to ensure this is correctly triggered and that the internal state is correctly reset afterwards.

@jpnovais (Collaborator) commented

> The difference, from the user's point of view, is output nondeterminism, as it will differ based on how much time the compressor has had to optimize.

What does this mean, "based on how much time the compressor has had to optimize", from the coordinator's PoV?

@Tabaie (Contributor, Author) commented Dec 16, 2024

> The difference, from the user's point of view, is output nondeterminism, as it will differ based on how much time the compressor has had to optimize.
>
> What does this mean, "based on how much time the compressor has had to optimize", from the coordinator's PoV?

This was for the parallel optimizer, so it no longer applies. Removing it from the description.

@Tabaie Tabaie requested a review from ivokub December 18, 2024 14:38
@Tabaie (Contributor, Author) commented Dec 18, 2024


@jpnovais Does it look good now?

@jpnovais (Collaborator) commented

@Tabaie sure. Sorry for the delay and for not replying to the first comment sooner.

@ivokub (Contributor) left a comment:


Seems good to me. Before merging, can you confirm that by referencing the same buffer when recompressing we don't get internal overwrites of the bytes.Buffer that could cause issues?
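For context on that concern, here is a tiny standalone example (unrelated to the PR's actual buffers) of how a slice returned by bytes.Buffer.Bytes() can be silently overwritten when the same buffer is reset and reused:

```go
// Illustration of the bytes.Buffer aliasing pitfall: Bytes() returns a slice
// that aliases the buffer's internal storage, so writing to the buffer after
// Reset overwrites data still reachable through an earlier slice.
package main

import (
	"bytes"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	buf.WriteString("compressed blob v1")
	view := buf.Bytes() // aliases buf's internal array

	buf.Reset()                      // keeps the same backing array
	buf.WriteString("recompressed!") // overwrites the bytes behind view

	// Prints "recompressed!ob v1": the data seen through view was
	// partially overwritten by the second write.
	fmt.Printf("%s\n", view)
}
```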

Also some comments about simplifying error handling.

That said, imo the current approach may be error-prone when making changes in the future. It heavily relies on reverting the compressor in case the block doesn't fit (or when we only run CanWrite), and the state logic is imo complex enough to be difficult to follow and test.

What I would recommend is to add a method that clones the compressor using a pool and then perform operations on the cloned compressor. If the write succeeds, we update the reference in BlobMaker; otherwise we discard the cloned compressor (by resetting it and putting it back into the pool). Internally this would imo also require using a pool of bytes.Buffer.
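A possible shape for that clone-and-commit pattern, sketched purely for illustration: compressorState, its methods, and the fixed size limit are all invented here and are not the real BlobMaker internals; only the pooled bytes.Buffer usage and the "commit on success, discard on failure" flow follow the suggestion.

```go
// Sketch of "clone, try, commit or discard" with pooled buffers; invented
// types and names, not the actual blob_maker.go implementation.
package sketch

import (
	"bytes"
	"sync"
)

// bufPool recycles bytes.Buffer instances used by speculative clones.
var bufPool = sync.Pool{New: func() any { return new(bytes.Buffer) }}

type compressorState struct {
	buf *bytes.Buffer // blob payload built so far
}

// clone copies the current state into a pooled buffer, so a speculative
// write never mutates the original state.
func (s *compressorState) clone() *compressorState {
	b := bufPool.Get().(*bytes.Buffer)
	b.Reset()
	b.Write(s.buf.Bytes())
	return &compressorState{buf: b}
}

// discard returns a state's buffer to the pool.
func (s *compressorState) discard() {
	s.buf.Reset()
	bufPool.Put(s.buf)
}

// append stands in for the real compression logic: here it is reduced to a
// size check against a fixed limit.
func (s *compressorState) append(block []byte) bool {
	const limit = 128 * 1024 // stand-in for the blob size limit
	if s.buf.Len()+len(block) > limit {
		return false
	}
	s.buf.Write(block)
	return true
}

type blobMaker struct {
	state *compressorState
}

func newBlobMaker() *blobMaker {
	return &blobMaker{state: &compressorState{buf: new(bytes.Buffer)}}
}

// Write performs the append on a clone: on success the clone replaces the
// current state; on failure it is discarded, so no revert logic is needed.
func (bm *blobMaker) Write(block []byte) bool {
	speculative := bm.state.clone()
	if !speculative.append(block) {
		speculative.discard()
		return false
	}
	old := bm.state
	bm.state = speculative
	old.discard()
	return true
}
```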

Imo trying to optimize for memory here doesn't make a lot of sense, as blocks/blobs are generally small relative to the rest of the memory usage (imo a blob is at most ~2 MB, so assuming 5x compression the uncompressed data is at most ~10 MB).

From my side this is not a blocker for merging the PR for now, but it would definitely simplify the implementation and guard against bugs in the future.

7 review threads on prover/lib/compressor/blob/v1/blob_maker.go, all resolved.
Labels
Data compressor · enhancement (New feature or request) · performances (work directed toward performance optimization)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

"Wholesale" blob compression
5 participants