fix: Read all chunks when hashing a readable stream #2911
Conversation
Hi @jpdutoit, thank you for the PR! Can you write a unit test in
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##              main    #2911      +/-  ##
==========================================
- Coverage    94.10%   94.10%   -0.01%
==========================================
  Files          136      136
  Lines        13369    13377       +8
  Branches      2266     2263       -3
==========================================
+ Hits         12581    12588       +7
- Misses         788      789       +1

View full report in Codecov by Sentry.
Hi @jpdutoit. Your point about the etag is correct: the current Hono middleware/etag behavior is incorrect, and I believe this PR will fix it. However, regarding sha1 (import { sha1 } from '../../utils/crypto'): I think it produces an incorrect result for something named sha1. Regarding etag: since an etag does not require a strict sha1 of the data, your method of generating the etag is suitable in terms of saving memory. For that reason, I believe it is better to resolve the problem of generating the etag when the body is split into multiple chunks in middleware/etag itself.
These are all good comments; I haven't had time to revisit this yet...
For fixing the sha1 part, it would be better to have a solution that supports true incremental hashing instead of needing to load the whole thing into memory; I'm not sure what portable options there are. Unfortunately, crypto.subtle does not support that.
Hi @jpdutoit. As you say, incremental hashing would be best. However, since we have received a report from another person about this, it would be good to solve the problem temporarily using the following approach. What do you think?
Sorry for the delay; I'm not using Hono at the moment... But it looks like you fixed it in the meantime, so I'll close this PR.
createHash was only reading the first chunk when hashing a readable stream, which breaks ETags for longer responses.
On Bun the first chunk seems to always be 16384 bytes long, so I added a test there. I could not reproduce this on Node.
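The fix described above can be sketched as follows (a minimal illustration, not Hono's actual createHash code; readAll is a hypothetical helper): drain the stream with a read loop rather than a single read() call, then concatenate every chunk before hashing.

```typescript
// Hypothetical helper: drains a ReadableStream completely instead of
// stopping after the first read(), which is the bug described above
// (only the first ~16384-byte chunk was being hashed on Bun).
const readAll = async (
  stream: ReadableStream<Uint8Array>
): Promise<Uint8Array> => {
  const reader = stream.getReader()
  const chunks: Uint8Array[] = []
  let total = 0
  for (;;) {
    const { done, value } = await reader.read()
    if (done || value === undefined) break
    chunks.push(value)
    total += value.length
  }
  // Concatenate all chunks into one buffer so the hash covers the
  // whole body, not just the first chunk.
  const out = new Uint8Array(total)
  let offset = 0
  for (const chunk of chunks) {
    out.set(chunk, offset)
    offset += chunk.length
  }
  return out
}
```

The buffer returned by readAll would then be passed to the hash function, so the digest reflects the full response body.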
The author should do the following, if applicable:
Run bun run format:fix && bun run lint:fix to format the code.