The run_variation.sh script does not seem to do anything - it does not even check its previous invocation's status.
This means the script found a bot-downloading tag on one of the dataset collection elements that hold the download links.
The bot will interpret this tag as an indication that a previous invocation of itself is still trying to download data.
There are two reasons for the tag being present:
i) another invocation of the script is in fact still downloading data. Check the corresponding staging history and your scheduling logs to see if that's the case.
ii) a previous run of the script got interrupted before having a chance to remove the bot-downloading tag from the download links element it was working on. In that case you will need to remove the tag manually (currently only possible via the API) to allow any new runs of the script. This is how it works:
1. Find the collection element carrying the tag and copy its dataset ID.
2. Copy the ID of the history containing the collection.
3. Have your API key for the server the script is running on ready.
4. In a Python shell, run the following commands:

```python
from bioblend import galaxy

gi = galaxy.GalaxyInstance('<server_url>', key='<api_key>')
gi.histories.update_dataset('<history ID>', '<dataset ID>', tags=[])
```
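To locate the element carrying the tag in the first place, you can filter the history contents that bioblend's `gi.histories.show_history('<history ID>', contents=True)` returns. A minimal sketch, assuming each returned dataset dict carries `id`, `name`, and `tags` keys (the mocked contents below are illustrative, not real data):

```python
def find_tagged(contents, tag="bot-downloading"):
    """Return (id, name) pairs of dataset dicts carrying the given tag.

    `contents` is the list of dicts returned by bioblend's
    gi.histories.show_history(history_id, contents=True).
    """
    return [(ds["id"], ds["name"]) for ds in contents if tag in ds.get("tags", [])]

# Mocked history contents for illustration:
contents = [
    {"id": "d1", "name": "links.tsv", "tags": ["bot-downloading"]},
    {"id": "d2", "name": "reads.fastq", "tags": []},
]
print(find_tagged(contents))  # -> [('d1', 'links.tsv')]
```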
A downstream workflow run (consensus, reporting, etc.) has failed. Can I just run the WF manually again?
It's better to trigger a rerun of the whole analysis batch!
The summarize.py script (if you're using it) compares the VCF collection elements in different histories by identity to reconstruct which histories belong to one batch. Rerunning just a single WF will produce new datasets and break this connection.
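To see why a single rerun breaks the matching, consider a hypothetical sketch of identity-based grouping (not summarize.py's actual code): histories are linked into one batch only if their VCF element IDs intersect, so a rerun that mints fresh dataset IDs ends up in a group of its own.

```python
def group_by_shared_vcfs(histories):
    """Group history IDs that share at least one VCF dataset ID.

    `histories` maps history ID -> set of VCF dataset IDs; a
    hypothetical data shape used for illustration only.
    """
    groups = []
    for hid, vcfs in histories.items():
        for group in groups:
            if any(vcfs & histories[other] for other in group):
                group.add(hid)
                break
        else:
            groups.append({hid})
    return groups

# "h2" shares VCF "b" with "h1", so they form one batch;
# a rerun history "h3" with fresh IDs is left unlinked.
print(group_by_shared_vcfs({
    "h1": {"a", "b"},
    "h2": {"b", "c"},
    "h3": {"x"},
}))  # -> [{'h1', 'h2'}, {'h3'}]
```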