HADOOP-19336: S3A: Test failures after CSE support added #7164
Conversation
🎊 +1 overall
This message was automatically generated.
🎊 +1 overall
This message was automatically generated.
@steveloughran - Could you please review the changes?
Regarding ITestUploadRecovery: we try to recreate the failure condition "whole block uploaded, then server returns 500"; that is, the test verifies that the SDK will recover from such a failure by resubmitting the block. Although this is a rare occurrence, it does surface in the wild sporadically, and it seems like CSE uploads are going to fail with this message. Ideally, I would like this recovery to be handled. Is this possible? What do other clients do here? Do you have a special "unreliable test endpoint" to generate these failures for more accurate testing? If it can't be recovered from and the error message is to be expected, it should be documented.
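For illustration, here is a minimal sketch of the kind of client-side fault injection such a test can use: an AWS SDK v2 `ExecutionInterceptor` that rewrites the first PUT response to a 500, forcing the SDK's retry path to resubmit the block. This is not the helper the S3A test suite actually uses; the class name and the "first PUT only" policy are assumptions for the example.

```java
import java.util.concurrent.atomic.AtomicBoolean;

import software.amazon.awssdk.core.interceptor.Context;
import software.amazon.awssdk.core.interceptor.ExecutionAttributes;
import software.amazon.awssdk.core.interceptor.ExecutionInterceptor;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.http.SdkHttpResponse;

/**
 * Sketch of a fault injector: rewrite the first PUT response to a 500 so the
 * SDK's retry logic has to resubmit the block. Hypothetical example only.
 */
public class Simulate500OnFirstPut implements ExecutionInterceptor {

  private final AtomicBoolean alreadyInjected = new AtomicBoolean(false);

  @Override
  public SdkHttpResponse modifyHttpResponse(Context.ModifyHttpResponse context,
      ExecutionAttributes executionAttributes) {
    SdkHttpResponse response = context.httpResponse();
    // Fail only the first PUT (the block upload); later attempts pass through.
    if (context.httpRequest().method() == SdkHttpMethod.PUT
        && alreadyInjected.compareAndSet(false, true)) {
      return response.toBuilder().statusCode(500).build();
    }
    return response;
  }
}
```

Such an interceptor would be registered on the S3 client through its override configuration (`addExecutionInterceptor`), which keeps the fault injection entirely client-side and repeatable.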
🎊 +1 overall
This message was automatically generated.
Retrying MPU with CSE is not supported: aws/amazon-s3-encryption-client-java#268 (comment). Sure, I will modify the test to fail when CSE is used and add the error stack in troubleshooting.
💔 -1 overall
This message was automatically generated.
Force-pushed from 9b7f17a to b4662ac
💔 -1 overall
This message was automatically generated.
Not sure what's up with yetus/jenkins right now.
Yeah, looks like some problem with yetus; let me retrigger the pipeline.
Force-pushed from b4662ac to 97c8a78
🎊 +1 overall
This message was automatically generated.
LGTM
+1
Added the recovery-failure stack trace and explanation.
Added the extra stack trace and text needed to explain that upload failure. Someone will encounter it; having a searchable section on the error and what it means will stop people filing new JIRAs. (They will ask the same SO question repeatedly, but that's not our problem.)
Merged with my extra stack trace. I was writing this up as something I wanted you to add, then remembered I can edit a PR; for docs that's safe.
💔 -1 overall
This message was automatically generated.
Description of PR
Test fix and documentation changes. This commit does the following:
- The reason for the change is mentioned here.
- ITestUploadRecovery.testMagicWriteRecovery is expected to fail when CSE is enabled and to pass otherwise; this captures the case when the failure gets fixed in future (see the sketch after this list).
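As a rough sketch of that "expected to fail under CSE, pass otherwise" pattern, assuming JUnit 4 and Hadoop's `LambdaTestUtils`; the helpers `isCseEnabled()` and `writeAndRecover()` are hypothetical stand-ins for the real test logic, and the exception type is deliberately broad:

```java
import java.io.IOException;

import org.junit.Test;

import static org.apache.hadoop.test.LambdaTestUtils.intercept;

/** Sketch of "expected to fail under CSE, pass otherwise"; not the real test. */
public class MagicWriteRecoverySketch {

  @Test
  public void testMagicWriteRecovery() throws Exception {
    if (isCseEnabled()) {
      // CSE cannot resubmit the encrypted block, so the recovery attempt
      // is expected to fail: assert the failure instead of the recovery.
      intercept(IOException.class, () -> {
        writeAndRecover();
        return null;
      });
    } else {
      writeAndRecover();
    }
  }

  /** Hypothetical: in the real suite this would come from the test configuration. */
  private boolean isCseEnabled() {
    return false;
  }

  /** Hypothetical: write via the magic committer path while a 500 is injected. */
  private void writeAndRecover() throws IOException {
    // placeholder for the real upload-and-recover logic
  }
}
```

Asserting the failure rather than skipping the test means the suite will start failing once the SDK limitation is lifted, flagging that the test can be restored to its normal form.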
How was this patch tested?
Tested in us-east-1, with both per-bucket and base S3A configuration.
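For reference, a minimal sketch of the difference between the two layouts; the bucket name and KMS key ARN are placeholders, and the option names assume the standard S3A encryption settings:

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch only: base (fs-wide) vs per-bucket CSE settings; all values are placeholders. */
public final class CseConfigSketch {

  private CseConfigSketch() {
  }

  /** Base configuration: CSE-KMS applies to every S3A bucket. */
  public static Configuration baseConfig() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.encryption.algorithm", "CSE-KMS");
    conf.set("fs.s3a.encryption.key",
        "arn:aws:kms:us-east-1:000000000000:key/EXAMPLE-KEY-ID");
    return conf;
  }

  /** Per-bucket configuration: only "example-bucket" gets CSE. */
  public static Configuration perBucketConfig() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.bucket.example-bucket.encryption.algorithm", "CSE-KMS");
    conf.set("fs.s3a.bucket.example-bucket.encryption.key",
        "arn:aws:kms:us-east-1:000000000000:key/EXAMPLE-KEY-ID");
    return conf;
  }
}
```

Per-bucket overrides (`fs.s3a.bucket.<bucket>.*`) take precedence over the base values, which is why both layouts are exercised in testing.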
For code changes:
- If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?