
Updated pipeline stages for Jenkinsfile #50

Merged (1 commit), Dec 10, 2020
Conversation

@telday (Contributor) commented Dec 7, 2020

Increases segmentation of pipeline runs and makes failures easier to understand

What does this PR do?

Increased the level of segmentation overall for the pipeline. This should help keep test runs separate once we start adding integration tests for the other clients.

What ticket does this PR close?

Resolves #20

Checklists

Change log

  • The CHANGELOG has been updated, or
  • This PR does not include user-facing changes and doesn't require a CHANGELOG update

Test coverage

  • This PR includes new unit and integration tests to go with the code changes, or
  • The changes in this PR do not require tests

Documentation

  • Docs (e.g. READMEs) were updated in this PR, and/or there is a follow-on issue to update docs, or
  • This PR does not require updating any documentation

@telday telday requested a review from a team as a code owner December 7, 2020 14:56
@BradleyBoutcher commented Dec 7, 2020

So I have a few questions and a few comments (forgive me for being verbose, I err on the side of over-explaining):

  • I could see us adding a setup test environment stage, but shouldn't we use the bin/setup_conjur script for this?

  • Are we going to add other integration tests besides Python? From what I understand, end-to-end tests for this project (specifically just the openapi spec) can be covered with one language, which is true for most projects. And then we'll have separate integration tests in the actual repo for a specific API implementation, like conjur-api-java, conjur-api-python, etc.

  • If we add new integration tests that are housed separately from the existing python tests, we should be able to start them all with a single test runner script. The Jenkinsfile should contain as little "custom" logic as possible: if Jenkins needs to do something, chances are a developer will want to do it too, and that logic should be reusable and efficient. Also, if we ever need to change how the tests run, it's much safer to change one file rather than two.

For now, it would be better to keep using bin/test_integration to handle setting up Conjur and running the integration tests. For example, if we were to add integration tests in other languages, our tests would be structured like this:

tests
  - python
    - start
  - other language
    - start

And then we would have bin/test_integration call those start scripts sequentially, and / or run them in parallel in the Jenkinsfile.
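A minimal sketch of what such a runner could look like, assuming the proposed test/&lt;language&gt;/start layout above (this is hypothetical, not the actual bin/test_integration; the function name and paths are illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a bin/test_integration-style runner: invoke each
# client's start script sequentially under a common test root.
set -euo pipefail

run_integration_tests() {
  local test_root="${1:-test}"
  local start_script
  for start_script in "${test_root}"/*/start; do
    # Skip when the glob matched nothing or the script is not executable
    [ -x "${start_script}" ] || continue
    echo "==> Running integration tests: ${start_script}"
    "${start_script}"
  done
}

run_integration_tests "$@"
```

The Jenkinsfile (and developers) would then call this one script rather than duplicating per-language commands in pipeline stages.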

All this to say, I don't see any stories that are focused on adding integration tests in other languages currently, and we may not actually do that during this effort. Much of this PR falls under the YAGNI principle, and should be left out for now.

On the topic of issue #20, I think we should focus on producing JUnit output from the python integration tests and making it available in Jenkins. The issue doesn't go into detail, so I'll add some acceptance criteria (AC) here:

  • All new code changes run the tests on Jenkins: Whenever we modify a piece of the source code, we don't have to manually add the test to our test runner. As is, we have this mostly done. Whenever someone modifies the source code or adds the test, the bin/test_integration script should test those changes with a single command.
  • All tests are enumerated when tests on Jenkins are run: Tests aren't hidden or lumped together unnecessarily. For example, running unit tests and integration tests should be two separate stages, typically run in parallel.
  • Failures in a single test do not fail the whole test suite: This one is pretty self-explanatory, but can be difficult in practice. In our case, it looks like the Python tests continue to run even if one fails, so we may be good here.
  • All items above are visible in our Jenkins CI/CD UI: Again, pretty self-explanatory, but this is the one where we need JUnit support added for these tests. The output for any given script may not be the same as the rest, so it's good to use a standard like JUnit that Jenkins can parse.

Srdjan linked to this Jenkinsfile in the Python API repo, and that's actually a great example to follow: https://github.com/cyberark/conjur-api-python3/blob/master/Jenkinsfile

Increases segmentation on pipeline runs and makes understanding
failures easier
@telday (Contributor, Author) commented Dec 8, 2020

@BradleyBoutcher Just made a few changes. I went back to calling the integration_tests script in place of separate stages. I also added JUnit output for the nose2 test runs.
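For reference, a sketch of how nose2's built-in junitxml plugin can be switched on (the section and option names come from nose2; the report path is illustrative), via a nose2.cfg next to the tests:

```ini
[unittest]
plugins = nose2.plugins.junitxml

[junit-xml]
always-on = True
path = nose2-junit.xml
```

With this in place, a plain `nose2 -v -s test/python/` run writes nose2-junit.xml, which a Jenkins post stage can then collect with the junit step.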

As far as integration tests in other languages go, I think we are starting on Ruby next. Is the overall plan to move to having a separate repository for each generated client? I was thinking that because they are all derived from this one spec we would keep everything in this repository and just generate the clients as needed to package and distribute releases.

@BradleyBoutcher commented Dec 9, 2020

@telday

I was mistaken before, my apologies; I've been trying to get up to speed with the project to understand the direction it's going. It does look like we can pursue Ruby tests per this story: #49, but we'll specifically want to use the structure described there. This'll make it easier to:

  1. Run tests for just one client, rather than all of them
  2. Move the integration tests for a given client to their own repository easily
  3. Modify our test framework or add new test cases

Currently, we do not have a scoped-out migration plan, so the generated clients we work on will be housed here for the time being. That being said, I do believe the clients will be migrated to their own repositories once we reach feature parity with the existing language-specific repos, and we come up with a clear migration strategy. That's a long ways down the line though, so for this effort we should focus on:

  1. Reaching feature parity with other APIs by adding all the endpoints to the spec
  2. Making sure our test framework is solid

For now, I'd be happy to jump on a call with you and @john-odonnell to work on our testing strategy sometime, I think we could flesh out these stories a bit more with your insight.

@BradleyBoutcher left a comment

Just one small thing, then it's ready to go 👍

```
--no-deps \
test-python \
nose2 -v -s test/python/
```


Can we add a .gitignore entry for the junit output?

@telday (Contributor, Author):

I believe it is already in the .gitignore


Ah great, thanks!

@BradleyBoutcher left a comment

LGTM

@telday telday merged commit e1a3680 into main Dec 10, 2020
@john-odonnell john-odonnell deleted the jenkins-pipeline-update branch December 28, 2020 23:34
@telday telday restored the jenkins-pipeline-update branch January 19, 2021 14:20
@john-odonnell john-odonnell deleted the jenkins-pipeline-update branch February 1, 2021 17:09
Labels: None yet

Development
Successfully merging this pull request may close these issues:
  • Test results are available via Jenkins

2 participants