Fix pytest #64

Open
anoronh4 opened this issue Aug 10, 2023 · 0 comments

Pytest is failing in the new branch feature/make_cff. Here's an example error:

'run_test_profile' exited with exit code '1' instead of '0'.
stderr: 
stdout: 2023/08/09 20:49:57  info unpack layer: sha256:d41bcd4f5755f041e9524d8fc6557853cc4770532cef6d053143809198f180f7
    2023/08/09 20:49:57  info unpack layer: sha256:94d9ccc9d29316c2f090417c0ca115b3997419080aa450aec6f3fe0fad9cc334
    2023/08/09 20:49:57  info unpack layer: sha256:84c4d764dab707003120fe7e027a934fb1ef5379674a23d47e41772783380dd1
    FATAL:   While making image from oci registry: error fetching image to cache: while building SIF from layers: packer failed to pack: while unpacking tmpfs: error unpacking rootfs: unpack layer: unpack entry: opt/conda/pkgs/cache/09cdf8bf.json: unpack to regular file: short write: write /home/runner/build-temp-718394053/rootfs/opt/conda/pkgs/cache/09cdf8bf.json: no space left on device

This needs to be addressed before the branch is merged into the main or develop branches, since it will otherwise affect all further development. The failure likely occurs when Singularity images, some of which are >1 GB, are pulled into the environment and exhaust the runner's limited disk space. GitHub Actions does not cache the containers and is not linked to Docker Hub the way Travis CI is. A combination of one or more of the following options should help:

  1. Maximize build disk space by cleaning up extraneous files on the runner, example here
  2. Disable certain downstream portions of the workflow
  3. Independently test subworkflows
     • One way to do this is to split the current fusion subworkflow into two: one for running the fusion callers, and a second for combining and annotating their output. We can then write a pytest and a GitHub Actions workflow for each.
  4. Trim down the containers used. Most images are <100 MB, but the containers for Metafusion, STAR-Fusion, and FusionCatcher are each >1 GB. The Metafusion image can be slimmed by running conda clean -afy at the end of the Dockerfile. Other suggestions here
  5. Use the --squash parameter when building the Docker images.
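For option 4, a minimal Dockerfile sketch of the conda clean -afy idea (the base image and package names are illustrative, not taken from this repo). The key point is chaining the clean into the same RUN instruction that installs packages, since each RUN creates a layer and cleaning in a later layer does not shrink the image:

```dockerfile
# Hypothetical base image and package -- illustrative only, not this repo's Dockerfile.
FROM condaforge/mambaforge:latest

# Install and clean in a single RUN so the package cache, tarballs,
# and index files never land in a committed layer.
RUN conda install -y -c bioconda some-package \
    && conda clean -afy
```

Building with --squash (option 5) would collapse the layers anyway, but keeping the clean in the same RUN works even on builders where --squash is unavailable.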

In addition, the pytest runs are taking longer and longer, and there is a timeout at 45 minutes. Options 2 and 3 could cut down the wall-clock time per job, but we will still be billed the same amount of time or more if we run all tests on every change.
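The disk-cleanup idea (option 1) and the timeout can both be expressed directly in the Actions workflow. A hedged sketch, where the step name and the exact directories removed are assumptions based on common GitHub-hosted runner layouts, not this repo's existing config:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 45   # current limit; adjust per job once tests are split
    steps:
      - name: Free disk space on the runner
        run: |
          # Large preinstalled toolchains commonly removed to reclaim space
          sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
          df -h /
      - uses: actions/checkout@v3
      # ... pull singularity images and run pytest as before
```

This does not reduce billed minutes, but it should keep the >1 GB image pulls from hitting "no space left on device".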
