The Lancet team at NYGC wants to display their compressed de Bruijn graph GFAs on vgteam/sequenceTubeMap. However, when they run them through GetBlunted to bluntify them, the output GFAs seem to be made up of 1- and 2-base nodes, well in excess of the number that the graph structure should really require.
We need to figure out a way to bluntify their graphs while retaining a graph structure that is mostly made up of nodes of appreciable length.
It would also be good to get and use a mapping back and forth between their input graphs and the bluntified output graphs, for (I think) converting read alignments and other annotations.
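One possible direction: if the bluntified output is correct but just over-fragmented, a post-processing "unchop" pass that merges non-branching runs of nodes (as `odgi unchop` or `vg mod -u` do) might recover longer nodes, and it can record the mapping back to the original node IDs at the same time. Below is a minimal sketch of that idea, not GetBlunted's actual behavior: it assumes a blunt GFA (0M overlaps), only handles `+`/`+` links, ignores P lines, and does not handle cycles made entirely of unary nodes.

```python
#!/usr/bin/env python3
"""Sketch: merge non-branching runs of nodes in a blunt GFA and keep a
mapping from each merged node back to the original node IDs it absorbed.
Simplified on purpose: forward/forward links only, no path (P) lines,
all-unary cycles are skipped."""
import sys
from collections import defaultdict


def unchop(gfa_lines):
    seqs = {}                      # node id -> sequence
    out_edges = defaultdict(list)  # id -> successor ids (+/+ links only)
    in_edges = defaultdict(list)   # id -> predecessor ids

    for line in gfa_lines:
        fields = line.rstrip("\n").split("\t")
        if fields and fields[0] == "S":
            seqs[fields[1]] = fields[2]
        elif fields and fields[0] == "L" and fields[2] == "+" and fields[4] == "+":
            out_edges[fields[1]].append(fields[3])
            in_edges[fields[3]].append(fields[1])

    def chain_start(n):
        # A node starts a chain if it cannot be merged onto a unique predecessor.
        preds = in_edges[n]
        return len(preds) != 1 or len(out_edges[preds[0]]) != 1

    merged = {}    # new id -> concatenated sequence
    back_map = {}  # new id -> ordered list of original ids
    for n in seqs:
        if not chain_start(n):
            continue  # this node will be absorbed into an earlier chain
        chain = [n]
        # Extend forward while the tail has a single successor that has
        # a single predecessor (a strictly linear, non-branching run).
        while len(out_edges[chain[-1]]) == 1:
            nxt = out_edges[chain[-1]][0]
            if len(in_edges[nxt]) != 1:
                break
            chain.append(nxt)
        new_id = chain[0]  # reuse the first original id for the merged node
        merged[new_id] = "".join(seqs[c] for c in chain)
        back_map[new_id] = chain

    return merged, back_map


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        merged, back_map = unchop(f)
    for new_id, seq in merged.items():
        print(f"{new_id}\t{len(seq)}bp\t<- {','.join(back_map[new_id])}")
```

The `back_map` side of this is the same shape of information a proper translation file would need for lifting over read alignments and other annotations, so whatever tool does the merging should probably emit it explicitly.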
Thanks Adam, yes, I could imagine GetBlunted potentially struggling depending on how the DBG looks. Let me know if that doesn't work for you, and if you are able to get an example dataset for this issue.