Proposed E2E test for sanity check #48

Open
sbezverk opened this issue Mar 2, 2018 · 4 comments
Labels: help wanted, lifecycle/frozen

Comments

@sbezverk (Contributor) commented Mar 2, 2018

Below is the list of tests proposed for the E2E sanity check; please review and provide your feedback. A rough Go sketch is included after each group for illustration:
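
To make the later groups concrete, here is a sketch of the shared scaffolding the tests would use, assuming plain `go test` and the Go bindings from github.com/container-storage-interface/spec/lib/go/csi. The socket path and the `connect`/`expectCode` helpers are placeholders invented for this illustration, and the later fragments are assumed to live in the same file, so their imports (context, csi, codes) are not repeated:

```go
package sanity

import (
	"context"
	"testing"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// connect dials the driver's CSI endpoint. The unix socket path is a
// placeholder; a real test would take it from a flag or environment variable.
func connect(t *testing.T) *grpc.ClientConn {
	conn, err := grpc.Dial("unix:///tmp/csi.sock", grpc.WithInsecure())
	if err != nil {
		t.Fatalf("failed to connect to the CSI endpoint: %v", err)
	}
	return conn
}

// expectCode asserts that err carries the expected gRPC status code.
func expectCode(t *testing.T, err error, want codes.Code) {
	t.Helper()
	if got := status.Code(err); got != want {
		t.Errorf("expected gRPC code %v, got %v (err: %v)", want, got, err)
	}
}
```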

CreateVolume:

  1. Create a volume with a specific name, then create a volume with the same name (should not fail)
  2. Create a volume with a specific name, then create a volume with the same name but a different capacity (should fail with 6 ALREADY_EXISTS)
  3. Create a volume with an incompatible size (should fail with 11 OUT_OF_RANGE)
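
A sketch of these CreateVolume checks, continuing the hypothetical package above (field names follow the current csi.proto Go bindings and may differ slightly from the spec revision this issue was written against):

```go
func TestCreateVolumeSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()

	caps := []*csi.VolumeCapability{{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
	}}
	newReq := func(bytes int64) *csi.CreateVolumeRequest {
		return &csi.CreateVolumeRequest{
			Name:               "sanity-create-vol",
			CapacityRange:      &csi.CapacityRange{RequiredBytes: bytes},
			VolumeCapabilities: caps,
		}
	}

	// Case 1: creating the same volume twice must be idempotent.
	if _, err := cc.CreateVolume(ctx, newReq(10*1024*1024)); err != nil {
		t.Fatalf("first CreateVolume failed: %v", err)
	}
	if _, err := cc.CreateVolume(ctx, newReq(10*1024*1024)); err != nil {
		t.Errorf("repeated CreateVolume with an identical request should not fail: %v", err)
	}

	// Case 2: same name, different capacity -> 6 ALREADY_EXISTS.
	_, err := cc.CreateVolume(ctx, newReq(20*1024*1024))
	expectCode(t, err, codes.AlreadyExists)

	// Case 3: a capacity the driver cannot possibly satisfy -> 11 OUT_OF_RANGE.
	_, err = cc.CreateVolume(ctx, newReq(1<<62))
	expectCode(t, err, codes.OutOfRange)
}
```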

DeleteVolume:

  1. Delete volume with no volume id provided (should fail with 3 INVALID_ARGUMENT)
  2. Delete volume with a non-existing volume id (should not fail)
  3. Delete volume with an existing volume id (should not fail)
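
A corresponding DeleteVolume sketch (same hypothetical package; the volume capabilities in CreateVolume are omitted for brevity, though a strict driver may require them):

```go
func TestDeleteVolumeSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()

	// Case 1: missing volume id -> 3 INVALID_ARGUMENT.
	_, err := cc.DeleteVolume(ctx, &csi.DeleteVolumeRequest{})
	expectCode(t, err, codes.InvalidArgument)

	// Case 2: unknown volume id -> DeleteVolume is idempotent, so no error.
	if _, err := cc.DeleteVolume(ctx, &csi.DeleteVolumeRequest{VolumeId: "does-not-exist"}); err != nil {
		t.Errorf("deleting a non-existing volume should not fail: %v", err)
	}

	// Case 3: delete a volume we just created -> no error.
	vol, err := cc.CreateVolume(ctx, &csi.CreateVolumeRequest{Name: "sanity-delete-vol"})
	if err != nil {
		t.Fatalf("CreateVolume failed: %v", err)
	}
	if _, err := cc.DeleteVolume(ctx, &csi.DeleteVolumeRequest{VolumeId: vol.GetVolume().GetVolumeId()}); err != nil {
		t.Errorf("deleting an existing volume should not fail: %v", err)
	}
}
```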

ControllerPublishVolume:

  1. Publish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Publish volume with empty node id (should fail with 3 INVALID_ARGUMENT)
  3. Publish volume with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  4. Publish volume with empty readonly (should fail with 3 INVALID_ARGUMENT)
  5. Publish volume with a non-existing volume id (should fail with 5 NOT_FOUND)
  6. Publish volume with a non-existing node id (should fail with 5 NOT_FOUND)
  7. (If code permits) Publish already published volume with compatible capabilities (should not fail)
  8. (If code permits) Publish already published volume with incompatible capabilities (should fail with 6 ALREADY_EXISTS)
  9. (If code permits) Publish already published volume with different node id (should fail with 9 FAILED_PRECONDITION)
  10. (If code permits) Publish volume to a node id that has reached its "max volumes attached" limit (should fail with 8 RESOURCE_EXHAUSTED)
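
A sketch of the required-field and NOT_FOUND cases for ControllerPublishVolume, in the same hypothetical package; case 4 and the conditional cases 6-10 need a real volume and node, so they are only noted in comments:

```go
func TestControllerPublishVolumeSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()
	volCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
	}

	// Cases 1-3: missing required fields -> 3 INVALID_ARGUMENT.
	_, err := cc.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{NodeId: "node-1", VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty volume id
	_, err = cc.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{VolumeId: "vol-1", VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty node id
	_, err = cc.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{VolumeId: "vol-1", NodeId: "node-1"})
	expectCode(t, err, codes.InvalidArgument) // empty volume capability

	// Case 5: unknown volume id -> 5 NOT_FOUND.
	_, err = cc.ControllerPublishVolume(ctx, &csi.ControllerPublishVolumeRequest{
		VolumeId: "does-not-exist", NodeId: "node-1", VolumeCapability: volCap,
	})
	expectCode(t, err, codes.NotFound)

	// Cases 4 and 6-10 need a real volume/node and are omitted from this sketch.
}
```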

ControllerUnpublishVolume:

  1. Unpublish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unpublish volume with empty node id (should not fail)
  3. Unpublish volume with a non-existing volume id (should fail with 5 NOT_FOUND)
  4. Unpublish volume with a non-existing node id (should fail with 5 NOT_FOUND)
  5. (If code permits) Unpublish volume with a node id that does not match the node id it was published to (should not fail)
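
A sketch of the ControllerUnpublishVolume cases that do not require a previously published volume (same hypothetical package):

```go
func TestControllerUnpublishVolumeSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()

	// Case 1: empty volume id -> 3 INVALID_ARGUMENT.
	_, err := cc.ControllerUnpublishVolume(ctx, &csi.ControllerUnpublishVolumeRequest{NodeId: "node-1"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 3: unknown volume id -> 5 NOT_FOUND.
	_, err = cc.ControllerUnpublishVolume(ctx, &csi.ControllerUnpublishVolumeRequest{
		VolumeId: "does-not-exist", NodeId: "node-1",
	})
	expectCode(t, err, codes.NotFound)

	// Cases 2, 4 and 5 need a volume that was actually published and are omitted here.
}
```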

ValidateVolumeCapabilities:

  1. Validate with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Validate with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  3. Validate with a non-existing volume id (should fail with 5 NOT_FOUND)
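
A sketch of the three ValidateVolumeCapabilities cases (same hypothetical package):

```go
func TestValidateVolumeCapabilitiesSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()
	caps := []*csi.VolumeCapability{{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
	}}

	// Case 1: empty volume id -> 3 INVALID_ARGUMENT.
	_, err := cc.ValidateVolumeCapabilities(ctx, &csi.ValidateVolumeCapabilitiesRequest{VolumeCapabilities: caps})
	expectCode(t, err, codes.InvalidArgument)

	// Case 2: empty capabilities -> 3 INVALID_ARGUMENT.
	_, err = cc.ValidateVolumeCapabilities(ctx, &csi.ValidateVolumeCapabilitiesRequest{VolumeId: "vol-1"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 3: unknown volume id -> 5 NOT_FOUND.
	_, err = cc.ValidateVolumeCapabilities(ctx, &csi.ValidateVolumeCapabilitiesRequest{
		VolumeId: "does-not-exist", VolumeCapabilities: caps,
	})
	expectCode(t, err, codes.NotFound)
}
```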

ListVolumes:

  1. Check whether the ListVolumes capability is advertised (should not fail)
  2. If it is, create a couple of volumes and request to list them (should not fail)
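
A sketch of the ListVolumes check, gating on the advertised LIST_VOLUMES controller capability (same hypothetical package; volume capabilities again omitted for brevity):

```go
func TestListVolumesSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()

	// Case 1: check whether the LIST_VOLUMES capability is advertised.
	resp, err := cc.ControllerGetCapabilities(ctx, &csi.ControllerGetCapabilitiesRequest{})
	if err != nil {
		t.Fatalf("ControllerGetCapabilities failed: %v", err)
	}
	supported := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.ControllerServiceCapability_RPC_LIST_VOLUMES {
			supported = true
		}
	}
	if !supported {
		t.Skip("driver does not advertise LIST_VOLUMES")
	}

	// Case 2: create a couple of volumes, then list them.
	for _, name := range []string{"sanity-list-vol-1", "sanity-list-vol-2"} {
		if _, err := cc.CreateVolume(ctx, &csi.CreateVolumeRequest{Name: name}); err != nil {
			t.Fatalf("CreateVolume(%s) failed: %v", name, err)
		}
	}
	vols, err := cc.ListVolumes(ctx, &csi.ListVolumesRequest{})
	if err != nil {
		t.Fatalf("ListVolumes failed: %v", err)
	}
	if len(vols.GetEntries()) < 2 {
		t.Errorf("expected at least 2 volumes, got %d", len(vols.GetEntries()))
	}
}
```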

GetCapacity:

  1. Check GetCapacity (should not fail)

ControllerGetCapabilities:

  1. Check ControllerGetCapabilities (should not fail)
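
GetCapacity and ControllerGetCapabilities are simple smoke checks and can share one sketch (same hypothetical package):

```go
func TestControllerInfoSanity(t *testing.T) {
	cc := csi.NewControllerClient(connect(t))
	ctx := context.Background()

	// GetCapacity should simply succeed (assuming the driver advertises the capability).
	if _, err := cc.GetCapacity(ctx, &csi.GetCapacityRequest{}); err != nil {
		t.Errorf("GetCapacity should not fail: %v", err)
	}

	// ControllerGetCapabilities should succeed and return at least one capability.
	resp, err := cc.ControllerGetCapabilities(ctx, &csi.ControllerGetCapabilitiesRequest{})
	if err != nil {
		t.Fatalf("ControllerGetCapabilities should not fail: %v", err)
	}
	if len(resp.GetCapabilities()) == 0 {
		t.Error("expected at least one controller capability")
	}
}
```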

NodeStageVolume:

  1. Stage volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Stage volume with empty staging_target_path (should fail with 3 INVALID_ARGUMENT)
  3. Stage volume with empty volume_capability (should fail with 3 INVALID_ARGUMENT)
  4. Stage volume with a non-existing volume id (should fail with 5 NOT_FOUND)
  5. (If code permits) Stage a volume id that has already been staged at the specified staging_target_path but with an incompatible volume_capability (should fail with 6 ALREADY_EXISTS)
  6. Stage volume with a non-existing node id (should fail with 5 NOT_FOUND)
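
A sketch of the NodeStageVolume cases that do not require a previously created volume (same hypothetical package; the staging path is a placeholder):

```go
func TestNodeStageVolumeSanity(t *testing.T) {
	nc := csi.NewNodeClient(connect(t))
	ctx := context.Background()
	volCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
	}
	stagingPath := "/tmp/csi-sanity-staging" // placeholder path

	// Cases 1-3: missing required fields -> 3 INVALID_ARGUMENT.
	_, err := nc.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{StagingTargetPath: stagingPath, VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty volume id
	_, err = nc.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{VolumeId: "vol-1", VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty staging_target_path
	_, err = nc.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{VolumeId: "vol-1", StagingTargetPath: stagingPath})
	expectCode(t, err, codes.InvalidArgument) // empty volume_capability

	// Case 4: unknown volume id -> 5 NOT_FOUND.
	_, err = nc.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{
		VolumeId: "does-not-exist", StagingTargetPath: stagingPath, VolumeCapability: volCap,
	})
	expectCode(t, err, codes.NotFound)

	// Cases 5 and 6 need a real volume/node and are omitted from this sketch.
}
```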

NodeUnstageVolume:

  1. Unstage volume with an empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unstage volume with an empty staging_target_path (should fail with 3 INVALID_ARGUMENT)
  3. Unstage volume with a non-existing volume id (should fail with 5 NOT_FOUND)
  4. (If code permits) Unstage a volume id whose staging_target_path does not match (should not fail)
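
A matching NodeUnstageVolume sketch (same hypothetical package):

```go
func TestNodeUnstageVolumeSanity(t *testing.T) {
	nc := csi.NewNodeClient(connect(t))
	ctx := context.Background()

	// Case 1: empty volume id -> 3 INVALID_ARGUMENT.
	_, err := nc.NodeUnstageVolume(ctx, &csi.NodeUnstageVolumeRequest{StagingTargetPath: "/tmp/csi-sanity-staging"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 2: empty staging_target_path -> 3 INVALID_ARGUMENT.
	_, err = nc.NodeUnstageVolume(ctx, &csi.NodeUnstageVolumeRequest{VolumeId: "vol-1"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 3: unknown volume id -> 5 NOT_FOUND.
	_, err = nc.NodeUnstageVolume(ctx, &csi.NodeUnstageVolumeRequest{
		VolumeId: "does-not-exist", StagingTargetPath: "/tmp/csi-sanity-staging",
	})
	expectCode(t, err, codes.NotFound)
}
```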

NodePublishVolume:

  1. Publish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Publish volume with empty target_path (should fail with 3 INVALID_ARGUMENT)
  3. Publish volume with empty VolumeCapability (should fail with 3 INVALID_ARGUMENT)
  4. Publish volume with empty readonly (should fail with 3 INVALID_ARGUMENT)
  5. Publish volume with a non-existing volume id (should fail with 5 NOT_FOUND)
  6. (If code permits) Publish already published volume with incompatible capabilities (should fail with 6 ALREADY_EXISTS)
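
A sketch of the NodePublishVolume cases (same hypothetical package; the target path is a placeholder, and case 4 and the conditional case 6 are omitted):

```go
func TestNodePublishVolumeSanity(t *testing.T) {
	nc := csi.NewNodeClient(connect(t))
	ctx := context.Background()
	volCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
	}
	targetPath := "/tmp/csi-sanity-target" // placeholder path

	// Cases 1-3: missing required fields -> 3 INVALID_ARGUMENT.
	_, err := nc.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{TargetPath: targetPath, VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty volume id
	_, err = nc.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{VolumeId: "vol-1", VolumeCapability: volCap})
	expectCode(t, err, codes.InvalidArgument) // empty target_path
	_, err = nc.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{VolumeId: "vol-1", TargetPath: targetPath})
	expectCode(t, err, codes.InvalidArgument) // empty volume capability

	// Case 5: unknown volume id -> 5 NOT_FOUND.
	_, err = nc.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{
		VolumeId: "does-not-exist", TargetPath: targetPath, VolumeCapability: volCap,
	})
	expectCode(t, err, codes.NotFound)
}
```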

NodeUnpublishVolume:

  1. Unpublish volume with empty volume id (should fail with 3 INVALID_ARGUMENT)
  2. Unpublish volume with empty target_path (should fail with 3 INVALID_ARGUMENT)
  3. Unpublish volume with a non-existing volume id (should fail with 5 NOT_FOUND)
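
A matching NodeUnpublishVolume sketch (same hypothetical package):

```go
func TestNodeUnpublishVolumeSanity(t *testing.T) {
	nc := csi.NewNodeClient(connect(t))
	ctx := context.Background()

	// Case 1: empty volume id -> 3 INVALID_ARGUMENT.
	_, err := nc.NodeUnpublishVolume(ctx, &csi.NodeUnpublishVolumeRequest{TargetPath: "/tmp/csi-sanity-target"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 2: empty target_path -> 3 INVALID_ARGUMENT.
	_, err = nc.NodeUnpublishVolume(ctx, &csi.NodeUnpublishVolumeRequest{VolumeId: "vol-1"})
	expectCode(t, err, codes.InvalidArgument)

	// Case 3: unknown volume id -> 5 NOT_FOUND.
	_, err = nc.NodeUnpublishVolume(ctx, &csi.NodeUnpublishVolumeRequest{
		VolumeId: "does-not-exist", TargetPath: "/tmp/csi-sanity-target",
	})
	expectCode(t, err, codes.NotFound)
}
```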

NodeGetId:

  1. Check NodeGetId (should not fail)

NodeGetCapabilities:

  1. Check NodeGetCapabilities (should not fail)
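
NodeGetId and NodeGetCapabilities are smoke checks and can share one sketch. Note that in the current spec bindings NodeGetId has been replaced by NodeGetInfo, which is what this sketch calls (same hypothetical package):

```go
func TestNodeInfoSanity(t *testing.T) {
	nc := csi.NewNodeClient(connect(t))
	ctx := context.Background()

	// NodeGetInfo (NodeGetId in the spec revision this issue targets)
	// should succeed and return a non-empty node id.
	info, err := nc.NodeGetInfo(ctx, &csi.NodeGetInfoRequest{})
	if err != nil {
		t.Fatalf("NodeGetInfo should not fail: %v", err)
	}
	if info.GetNodeId() == "" {
		t.Error("expected a non-empty node id")
	}

	// NodeGetCapabilities should simply succeed.
	if _, err := nc.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{}); err != nil {
		t.Errorf("NodeGetCapabilities should not fail: %v", err)
	}
}
```
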
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 22, 2019
@msau42 (Collaborator) commented Jun 5, 2019

Have we implemented all of these test cases?

/lifecycle frozen
/help

@k8s-ci-robot (Contributor)

@msau42:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

Have we implemented all of these test cases?

/lifecycle frozen
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the help wanted and lifecycle/frozen labels and removed the lifecycle/rotten label on Jun 5, 2019