
🚧 [WIP] 🚑️ Hotfix 1.8.7: Restore longitudinal functionality #2160

Draft: wants to merge 33 commits into main
Conversation

@shnizzedy (Member) commented Oct 31, 2024

Fixes

Fixes #2159 by @Shinwon Park
Related to the Longitudinal project spec (Google Doc) by @e-kenneally

Description

  • Adds T1w-brain-template to warp_longitudinal_T1w_to_template's inputs and updates
    node, out = strat_pool.get_data("T1w_brain_template")
    calls to
    node, out = strat_pool.get_data("T1w-brain-template")
    calls.
  • Generates from-longitudinal_to-T1w_mode-image_desc-linear_xfm on the fly with

        if strat_pool.check_rpool("from-longitudinal_to-T1w_mode-image_desc-linear_xfm"):
            xfm_prov = strat_pool.get_cpac_provenance(
                "from-longitudinal_to-T1w_mode-image_desc-linear_xfm")
            reg_tool = check_prov_for_regtool(xfm_prov)
            xfm: tuple[pe.Node, str] = strat_pool.get_data(
                "from-longitudinal_to-T1w_mode-image_desc-linear_xfm")
        else:
            xfm_prov = strat_pool.get_cpac_provenance(
                "from-T1w_to-longitudinal_mode-image_desc-linear_xfm")
            reg_tool = check_prov_for_regtool(xfm_prov)
            # create inverse xfm if we don't have it
            invt = pe.Node(interface=fsl.ConvertXFM(), name='convert_xfm')
            invt.inputs.invert_xfm = True
            wf.connect(
                *strat_pool.get_data("from-T1w_to-longitudinal_mode-image_desc-linear_xfm"),
                invt, "in_file")
            xfm = (invt, "out_file")

    if it doesn't already exist (and it never does).
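    The direction check above can be sketched with a plain dict standing in for the strategy pool; the function name and dict-based lookup here are illustrative, not the real C-PAC API:

```python
def resolve_longitudinal_to_T1w_xfm(rpool: dict) -> tuple[str, bool]:
    """Pick which transform to use and whether it must be inverted first.

    Mirrors the logic above: prefer the forward transform if it exists;
    otherwise fall back to the opposite-direction transform, which then
    needs inverting (fsl.ConvertXFM with invert_xfm=True in the real
    workflow).
    """
    forward = "from-longitudinal_to-T1w_mode-image_desc-linear_xfm"
    reverse = "from-T1w_to-longitudinal_mode-image_desc-linear_xfm"
    if forward in rpool:
        return forward, False  # use as-is
    return reverse, True       # needs inversion on the fly
```

    Since the forward transform "never" exists in practice, the fallback branch is the one that actually runs.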
  • Updates each to

        if not dry_run:
            workflow.run()

    to allow test_config to run without running the graphs.
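    A minimal sketch of that guard, with a stub workflow object (names here are illustrative):

```python
class StubWorkflow:
    """Stand-in for a built Nipype workflow; only records whether run() was called."""

    def __init__(self) -> None:
        self.ran = False

    def run(self) -> None:
        self.ran = True


def maybe_run(workflow: StubWorkflow, dry_run: bool) -> None:
    # test_config passes dry_run=True: the graph is built but never executed
    if not dry_run:
        workflow.run()
```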
  • Changes

        # Now we have all the anat_preproc set up for every session
        # loop over the different anat preproc strategies
        strats_brain_dct = {}
        strats_head_dct = {}
        for cpac_dir in cpac_dirs:
            if os.path.isdir(cpac_dir):
                for filename in os.listdir(cpac_dir):
                    if 'T1w.nii' in filename:
                        for tag in filename.split('_'):
                            if 'desc-' in tag and 'brain' in tag:
                                if tag not in strats_brain_dct:
                                    strats_brain_dct[tag] = []
                                strats_brain_dct[tag].append(
                                    os.path.join(cpac_dir, filename))
                                if tag not in strats_head_dct:
                                    strats_head_dct[tag] = []
                                head_file = filename.replace(tag, 'desc-reorient')
                                strats_head_dct[tag].append(
                                    os.path.join(cpac_dir, head_file))

    to

        # Now we have all the anat_preproc set up for every session
        # loop over the different anat preproc strategies
        strats_dct: dict[str, list] = {key: [pool.get_data(key) for pool in pools]
                                       for key in ["desc-brain_T1w", "desc-head_T1w"]}
        # Rename nodes to include session name to avoid duplicates
        for key in strats_dct:
            for i, resource in enumerate(strats_dct[key]):
                resource[0] = resource[0].clone(f"{resource[0].name}_{session_id_list[i]}")

    to get the resources from the ResourcePool instead of the output directory (affording graph-building without running).
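    With a toy stand-in for ResourcePool.get_data, the new comprehension gathers one (node, output) pair per session per key (the class and data below are hypothetical):

```python
class FakePool:
    """Hypothetical stand-in for a per-session C-PAC ResourcePool."""

    def __init__(self, data: dict) -> None:
        self._data = data

    def get_data(self, key: str):
        return self._data[key]


pools = [
    FakePool({"desc-brain_T1w": ("brain_node_ses1", "out"),
              "desc-head_T1w": ("head_node_ses1", "out")}),
    FakePool({"desc-brain_T1w": ("brain_node_ses2", "out"),
              "desc-head_T1w": ("head_node_ses2", "out")}),
]
# same shape as the strats_dct comprehension above: key -> per-session resources
strats_dct = {key: [pool.get_data(key) for pool in pools]
              for key in ["desc-brain_T1w", "desc-head_T1w"]}
```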
  • Pulls

        wf = initialize_nipype_wf(config, sub_list[0],
                                  # just grab the first one for the name
                                  name=f"template_node_brain")

        config.pipeline_setup['pipeline_name'] = f'longitudinal_{orig_pipe_name}'

        template_node_name = 'longitudinal_anat_template_brain'

        # This node will generate the longitudinal template (the functions are
        # in longitudinal_preproc)
        # Later other algorithms could be added to calculate it, like the
        # multivariate template from ANTS
        # It would just require to change it here.
        template_node = subject_specific_template(
            workflow_name=template_node_name
        )

        template_node.inputs.set(
            avg_method=config.longitudinal_template_generation['average_method'],
            dof=config.longitudinal_template_generation['dof'],
            interp=config.longitudinal_template_generation['interp'],
            cost=config.longitudinal_template_generation['cost'],
            convergence_threshold=config.longitudinal_template_generation[
                'convergence_threshold'],
            thread_pool=config.longitudinal_template_generation['thread_pool'],
            unique_id_list=list(session_wfs.keys())
        )

        num_sessions = len(strats_dct["desc-brain_T1w"])
        merge_brains = pe.Node(Merge(num_sessions), name="merge_brains")
        merge_skulls = pe.Node(Merge(num_sessions), name="merge_skulls")

        for i in list(range(0, num_sessions)):
            wf.connect(*strats_dct["desc-brain_T1w"][i], merge_brains, f"in{i + 1}")
            wf.connect(*strats_dct["desc-head_T1w"][i], merge_skulls, f"in{i + 1}")

        wf.connect(merge_brains, "out", template_node, "input_brain_list")
        wf.connect(merge_skulls, "out", template_node, "input_skull_list")

        long_id = f'longitudinal_{subject_id}_strat-desc-brain_T1w'

        wf, rpool = initiate_rpool(wf, config, part_id=long_id)

        rpool.set_data("space-longitudinal_desc-brain_T1w",
                       template_node, 'brain_template', {},
                       "", template_node_name)
        rpool.set_data("space-longitudinal_desc-brain_T1w-template",
                       template_node, 'brain_template', {},
                       "", template_node_name)
        rpool.set_data("space-longitudinal_desc-reorient_T1w",
                       template_node, 'skull_template', {},
                       "", template_node_name)
        rpool.set_data("space-longitudinal_desc-reorient_T1w-template",
                       template_node, 'skull_template', {},
                       "", template_node_name)

        pipeline_blocks = [mask_longitudinal_T1w_brain]
        pipeline_blocks = build_T1w_registration_stack(rpool, config,
                                                       pipeline_blocks,
                                                       space="longitudinal")
        pipeline_blocks = build_segmentation_stack(rpool, config,
                                                   pipeline_blocks)

        wf = connect_pipeline(wf, config, rpool, pipeline_blocks)

        excl = ['space-longitudinal_desc-brain_T1w',
                'space-longitudinal_desc-reorient_T1w',
                'space-longitudinal_desc-brain_mask']
        rpool.gather_pipes(wf, config, add_excl=excl)

        if not dry_run:
            wf.run()

        # now, just write out a copy of the above to each session
        config.pipeline_setup['pipeline_name'] = orig_pipe_name
        for session in sub_list:
            unique_id = session['unique_id']
            try:
                creds_path = session['creds_path']
                if creds_path and 'none' not in creds_path.lower():
                    if os.path.exists(creds_path):
                        input_creds_path = os.path.abspath(creds_path)
                    else:
                        err_msg = 'Credentials path: "%s" for subject "%s" ' \
                                  'session "%s" was not found. Check this path ' \
                                  'and try again.' % (creds_path, subject_id,
                                                      unique_id)
                        raise Exception(err_msg)
                else:
                    input_creds_path = None
            except KeyError:
                input_creds_path = None

            wf = initialize_nipype_wf(config, sub_list[0])
            wf, rpool = initiate_rpool(wf, config, session, rpool=rpool)

            config.pipeline_setup['pipeline_name'] = f'longitudinal_{orig_pipe_name}'

            if "derivatives_dir" in session:
                rpool = ingress_output_dir(
                    wf, config, rpool, long_id, data_paths=session,
                    part_id=subject_id, ses_id=unique_id,
                    creds_path=input_creds_path)

            select_node_name = f'FSL_select_{unique_id}'
            select_sess = pe.Node(Function(input_names=['session',
                                                        'output_brains',
                                                        'warps'],
                                           output_names=['brain_path',
                                                         'warp_path'],
                                           function=select_session),
                                  name=select_node_name)
            select_sess.inputs.session = unique_id

            wf.connect(template_node, 'output_brain_list', select_sess,
                       'output_brains')
            wf.connect(template_node, 'warp_list', select_sess, 'warps')

            rpool.set_data("space-longitudinal_desc-brain_T1w",
                           select_sess, 'brain_path', {}, "",
                           select_node_name)
            rpool.set_data("from-T1w_to-longitudinal_mode-image_"
                           "desc-linear_xfm",
                           select_sess, 'warp_path', {}, "",
                           select_node_name)

            config.pipeline_setup['pipeline_name'] = orig_pipe_name

            excl = ['space-template_desc-brain_T1w',
                    'space-T1w_desc-brain_mask']
            rpool.gather_pipes(wf, config, add_excl=excl)

            if not dry_run:
                wf.run()

    out of a

        for strat in strats_brain_dct.keys():

    loop; I don't think we need the loop when we don't have to look in the output directory.
  • Updates some node names to include the session, allowing those nodes to exist in a single graph, e.g.
    flowchart LR
    subgraph separate_graphs ["separate graphs"]
     direction TB
     subgraph ses_1 ["session 1"]
       direction TB
       node_1["node_1"] --> 1_inner["…"]
       1_inner --> 1_out["outputs"]
     end
     subgraph ses_2 ["session 2"]
       direction TB
       node_2["node_1"] --> 2_inner["…"]
       2_inner --> 2_out["outputs"]
     end
     1_out --> subsequent
     2_out --> subsequent
     subgraph subsequent ["…"]
     end
    end
    subgraph combined
      node_3["node_1_ses-1"] --> combined_subsequent["…"]
      node_4["node_1_ses-2"] --> combined_subsequent
    end
    separate_graphs --> combined
    node_1 --> node_3
    node_2 --> node_4
    1_inner --> combined_subsequent
    2_inner --> combined_subsequent
    subsequent --> combined_subsequent
    
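    The renaming convention in the diagram can be sketched as a pure function (the name and signature are illustrative; the real code calls `.clone()` on the Nipype node):

```python
def sessionize_node_name(name: str, session_id: str) -> str:
    """Suffix a node name with its session ID so two sessions' copies
    of the same node can coexist in one combined graph."""
    return f"{name}_{session_id}"
```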
  • Refactors

        @nodeblock(
            name="transform_whole_head_T1w_to_T1template",
            config=["registration_workflows", "anatomical_registration"],
            switch=["run"],
            inputs=[
                (
                    "desc-head_T1w",
                    "from-T1w_to-template_mode-image_xfm",
                    "space-template_desc-head_T1w",
                ),
                "T1w-template",
            ],
            outputs={"space-template_desc-head_T1w": {"Template": "T1w-template"}},
        )
        def warp_wholeheadT1_to_template(wf, cfg, strat_pool, pipe_num, opt=None):

    and

        @nodeblock(
            name="transform_T1mask_to_T1template",
            switch=[
                ["registration_workflows", "anatomical_registration", "run"],
                ["anatomical_preproc", "run"],
                ["anatomical_preproc", "brain_extraction", "run"],
            ],
            inputs=[
                ("space-T1w_desc-brain_mask", "from-T1w_to-template_mode-image_xfm"),
                "T1w-template",
            ],
            outputs={"space-template_desc-brain_mask": {"Template": "T1w-template"}},
        )
        def warp_T1mask_to_template(wf, cfg, strat_pool, pipe_num, opt=None):

    into

        def warp_to_template(warp_what: Literal["mask", "wholehead"],
                             space_from: Literal["longitudinal", "T1w"]) -> NodeBlockFunction:
            """Get a NodeBlockFunction to transform a resource from ``space`` to template.

            The resource being warped needs to be the first list or string in the tuple
            in the first position of the decorator's "inputs".
            """
            _decorators = {"mask": {
                "name": f"transform_{space_from}-mask_to_T1-template",
                "switch": [
                    ["registration_workflows", "anatomical_registration", "run"],
                    ["anatomical_preproc", "run"],
                    ["anatomical_preproc", "brain_extraction", "run"],
                ],
                "inputs": [
                    (f"space-{space_from}_desc-brain_mask",
                     f"from-{space_from}_to-template_mode-image_xfm"),
                    "T1w-template",
                ],
                "outputs": {"space-template_desc-brain_mask": {"Template": "T1w-template"}},
            }, "wholehead": {
                "name": f"transform_wholehead_{space_from}_to_T1template",
                "config": ["registration_workflows", "anatomical_registration"],
                "switch": ["run"],
                "inputs": [
                    (
                        ["desc-head_T1w", "desc-reorient_T1w"],
                        [f"from-{space_from}_to-template_mode-image_xfm",
                         f"from-{space_from}_to-template_mode-image_xfm"],
                        "space-template_desc-head_T1w",
                    ),
                    "T1w-template",
                ],
                "outputs": {"space-template_desc-head_T1w": {"Template": "T1w-template"}},
            }}
            if space_from != "T1w":
                _decorators[warp_what]["inputs"][0] = tuple((prepend_space(
                    _decorators[warp_what]["inputs"][0][0], space_from),
                    *_decorators[warp_what]["inputs"][0][1:]
                ))

            @nodeblock(**_decorators[warp_what])
            def warp_to_template_fxn(wf, cfg, strat_pool, pipe_num, opt=None):

    so we can have one definition for the four nearly identical functions for [mask, wholehead] × [T1w, longitudinal].
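    The shape of that refactor is a decorator factory: a function that builds the decorator's metadata from its parameters, then returns a freshly decorated inner function. A minimal sketch with a toy `tag` decorator standing in for `@nodeblock` (everything below is illustrative, not the C-PAC API):

```python
from typing import Callable, Literal


def tag(**meta) -> Callable:
    """Toy stand-in for @nodeblock: attach metadata to the decorated function."""
    def decorate(fxn: Callable) -> Callable:
        fxn.meta = meta
        return fxn
    return decorate


def warp_to_template(warp_what: Literal["mask", "wholehead"],
                     space_from: Literal["longitudinal", "T1w"]) -> Callable:
    """Return one decorated function from the [mask, wholehead] x [T1w, longitudinal] grid."""
    # f-strings plug the parameters into otherwise-identical metadata
    _decorators = {
        "mask": {"name": f"transform_{space_from}-mask_to_T1-template"},
        "wholehead": {"name": f"transform_wholehead_{space_from}_to_T1template"},
    }

    @tag(**_decorators[warp_what])
    def warp_to_template_fxn() -> str:
        return _decorators[warp_what]["name"]

    return warp_to_template_fxn
```

    Each call returns a distinct closure with its own metadata, so one definition replaces four copy-pasted functions.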
  • Updates C-PAC/CPAC/pipeline/engine.py, lines 1095 to 1097 in 96db8b0:

        # del all_forks
        for pipe_idx in self.rpool[resource]:
            pipe_x = self.get_pipe_number(pipe_idx)

    to C-PAC/CPAC/pipeline/engine.py, lines 1116 to 1121 in 16cb70c:

        # del all_forks
        for pipe_idx in self.rpool[resource]:
            try:
                pipe_x = self.get_pipe_number(pipe_idx)
            except ValueError:
                continue

    so already-deleted forks don't cause any issues.
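    The guard is the standard skip-on-error loop pattern; a self-contained sketch (the lookup function below is hypothetical):

```python
def surviving_pipe_numbers(pipe_indices, get_pipe_number):
    """Collect pipe numbers, silently skipping indices whose forks were
    already deleted (signalled here by ValueError, as in the real code)."""
    numbers = []
    for pipe_idx in pipe_indices:
        try:
            numbers.append(get_pipe_number(pipe_idx))
        except ValueError:
            continue  # fork already deleted; nothing to number
    return numbers
```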
  • Adds keyword-only parameter rpool to C-PAC/CPAC/pipeline/engine.py, lines 2362 to 2364 in 16cb70c:

        def initiate_rpool(
            wf, cfg, data_paths=None, part_id=None, *, rpool: Optional[ResourcePool] = None
        ):

    so we can initiate a ResourcePool from an existing pool instead of only from {}:

        rpool = ResourcePool(rpool=rpool.rpool if rpool else None, name=unique_id, cfg=cfg)
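    A minimal sketch of that keyword-only seeding pattern (the class and names below are illustrative, not the real engine API):

```python
from typing import Optional


class ToyResourcePool:
    """Hypothetical stand-in for the engine's ResourcePool."""

    def __init__(self, rpool: Optional[dict] = None, name: str = "") -> None:
        # seed from an existing pool's mapping instead of always starting from {}
        self.rpool = dict(rpool) if rpool else {}
        self.name = name


def initiate_rpool(name: str, *,
                   rpool: Optional[ToyResourcePool] = None) -> ToyResourcePool:
    # keyword-only `rpool`, mirroring the new signature above
    return ToyResourcePool(rpool=rpool.rpool if rpool else None, name=name)
```

    Making the parameter keyword-only keeps existing positional calls working unchanged.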
  • Removes https://github.com/FCP-INDI/C-PAC/blob/main/CPAC/registration/registration.py#L2374 from

        if 'space-longitudinal' in brain:
            for key in outputs:
                for direction in ['from', 'to']:
                    if f'{direction}-T1w' in key:
                        new_key = key.replace(f'{direction}-T1w',
                                              f'{direction}-longitudinal')
                        outputs[new_key] = outputs[key]
                        del outputs[key]

    to keep both T1w and longitudinal xfms in the rpool.
  • Changes

        outputs=[
            "label-CSF_mask",
            "label-GM_mask",
            "label-WM_mask",
            "label-CSF_desc-preproc_mask",
            "label-GM_desc-preproc_mask",
            "label-WM_desc-preproc_mask",
            "label-CSF_probseg",
            "label-GM_probseg",
            "label-WM_probseg",
            "label-CSF_pveseg",
            "label-GM_pveseg",
            "label-WM_pveseg",
            "space-longitudinal_label-CSF_mask",
            "space-longitudinal_label-GM_mask",
            "space-longitudinal_label-WM_mask",
            "space-longitudinal_label-CSF_desc-preproc_mask",
            "space-longitudinal_label-GM_desc-preproc_mask",
            "space-longitudinal_label-WM_desc-preproc_mask",
            "space-longitudinal_label-CSF_probseg",
            "space-longitudinal_label-GM_probseg",
            "space-longitudinal_label-WM_probseg",
        ],

    to

        outputs=[f"{long}label-{tissue}_{entity}" for
                 long in ["", "space-longitudinal_"] for
                 tissue in ["CSF", "GM", "WM"] for
                 entity in ["mask", "desc-preproc_mask", "probseg", "pveseg"]],

    in order to include some missing longitudinal resources and to make the list pithier.
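The comprehension expands to 24 resource names (2 spaces × 3 tissues × 4 entities), including the three space-longitudinal pveseg outputs the hand-written 21-item list was missing; this is directly checkable:

```python
# same comprehension as in the nodeblock's outputs, evaluated standalone
outputs = [f"{long}label-{tissue}_{entity}"
           for long in ["", "space-longitudinal_"]
           for tissue in ["CSF", "GM", "WM"]
           for entity in ["mask", "desc-preproc_mask", "probseg", "pveseg"]]
```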

Technical details

♻️ refactor warp_T1mask_to_template to warp_mask_to_template

@nodeblock(
    name="transform_T1mask_to_T1template",
    switch=[
        ["registration_workflows", "anatomical_registration", "run"],
        ["anatomical_preproc", "run"],
        ["anatomical_preproc", "brain_extraction", "run"],
    ],
    inputs=[
        ("space-T1w_desc-brain_mask", "from-T1w_to-template_mode-image_xfm"),
        "T1w-template",
    ],
    outputs={"space-template_desc-brain_mask": {"Template": "T1w-template"}},
)
def warp_T1mask_to_template(wf, cfg, strat_pool, pipe_num, opt=None):
    xfm_prov = strat_pool.get_cpac_provenance(
        'from-T1w_to-template_mode-image_xfm')
    reg_tool = check_prov_for_regtool(xfm_prov)
    num_cpus = cfg.pipeline_setup['system_config'][
        'max_cores_per_participant']
    num_ants_cores = cfg.pipeline_setup['system_config']['num_ants_threads']
    apply_xfm = apply_transform(f'warp_T1mask_to_T1template_{pipe_num}',
                                reg_tool, time_series=False,
                                num_cpus=num_cpus,
                                num_ants_cores=num_ants_cores)
    apply_xfm.inputs.inputspec.interpolation = "NearestNeighbor"
    '''
    if reg_tool == 'ants':
        apply_xfm.inputs.inputspec.interpolation = cfg.registration_workflows[
            'functional_registration']['func_registration_to_template'][
            'ANTs_pipelines']['interpolation']
    elif reg_tool == 'fsl':
        apply_xfm.inputs.inputspec.interpolation = cfg.registration_workflows[
            'functional_registration']['func_registration_to_template'][
            'FNIRT_pipelines']['interpolation']
    '''
    connect = strat_pool.get_data("space-T1w_desc-brain_mask")
    node, out = connect
    wf.connect(node, out, apply_xfm, 'inputspec.input_image')
    node, out = strat_pool.get_data("T1w-template")
    wf.connect(node, out, apply_xfm, 'inputspec.reference')
    node, out = strat_pool.get_data("from-T1w_to-template_mode-image_xfm")
    wf.connect(node, out, apply_xfm, 'inputspec.transform')
    outputs = {
        'space-template_desc-brain_mask': (apply_xfm, 'outputspec.output_image')
    }
    return (wf, outputs)

becomes

def warp_mask_to_template(space: Literal["longitudinal", "T1w"]) -> NodeBlockFunction:
    """Get a NodeBlockFunction to transform a mask from ``space`` to template."""
    @nodeblock(
        name=f"transform_{space}-mask_to_T1-template",
        switch=[
            ["registration_workflows", "anatomical_registration", "run"],
            ["anatomical_preproc", "run"],
            ["anatomical_preproc", "brain_extraction", "run"],
        ],
        inputs=[
            (f"space-{space}_desc-brain_mask",
             f"from-{space}_to-template_mode-image_xfm"),
            "T1w-template",
        ],
        outputs={"space-template_desc-brain_mask": {"Template": "T1w-template"}},
    )
    def warp_mask_to_template_fxn(wf, cfg, strat_pool, pipe_num, opt=None):
        """Transform a mask to template space."""
        xfm_prov = strat_pool.get_cpac_provenance(
            f'from-{space}_to-template_mode-image_xfm')
        reg_tool = check_prov_for_regtool(xfm_prov)
        num_cpus = cfg.pipeline_setup['system_config'][
            'max_cores_per_participant']
        num_ants_cores = cfg.pipeline_setup['system_config']['num_ants_threads']
        apply_xfm = apply_transform(f'warp_T1mask_to_T1template_{pipe_num}',
                                    reg_tool, time_series=False,
                                    num_cpus=num_cpus,
                                    num_ants_cores=num_ants_cores)
        apply_xfm.inputs.inputspec.interpolation = "NearestNeighbor"
        '''
        if reg_tool == 'ants':
            apply_xfm.inputs.inputspec.interpolation = cfg.registration_workflows[
                'functional_registration']['func_registration_to_template'][
                'ANTs_pipelines']['interpolation']
        elif reg_tool == 'fsl':
            apply_xfm.inputs.inputspec.interpolation = cfg.registration_workflows[
                'functional_registration']['func_registration_to_template'][
                'FNIRT_pipelines']['interpolation']
        '''
        connect = strat_pool.get_data(f"space-{space}_desc-brain_mask")
        node, out = connect
        wf.connect(node, out, apply_xfm, 'inputspec.input_image')
        node, out = strat_pool.get_data("T1w-template")
        wf.connect(node, out, apply_xfm, 'inputspec.reference')
        node, out = strat_pool.get_data(f"from-{space}_to-template_mode-image_xfm")
        wf.connect(node, out, apply_xfm, 'inputspec.transform')
        outputs = {
            'space-template_desc-brain_mask': (apply_xfm, 'outputspec.output_image')
        }
        return wf, outputs
    return warp_mask_to_template_fxn

which just uses f-strings to plug in the space and otherwise uses identical code.

Tests

Screenshots

Checklist

  • My pull request has a descriptive title (not a vague title like Update index.md).
  • My pull request targets the main branch of the repository.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • I added tests for the changes I made (if applicable).
  • I updated the changelog.
  • I added or updated documentation (if applicable).
  • I tried running the project locally and verified that there are no visible errors.

Developer Certificate of Origin

Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

@shnizzedy (Member, Author) commented:

I don't understand why we do this twice:

    template_node = subject_specific_template(
        workflow_name=template_node_name
    )
    template_node.inputs.set(
        avg_method=config.longitudinal_template_generation[
            'average_method'],
        dof=config.longitudinal_template_generation['dof'],
        interp=config.longitudinal_template_generation['interp'],
        cost=config.longitudinal_template_generation['cost'],
        convergence_threshold=config.longitudinal_template_generation[
            'convergence_threshold'],
        thread_pool=config.longitudinal_template_generation[
            'thread_pool'],
        unique_id_list=list(session_wfs.keys())
    )

and

    template_node = subject_specific_template(
        workflow_name='subject_specific_func_template_' + subject_id
    )
    template_node.inputs.set(
        avg_method=config.longitudinal_template_average_method,
        dof=config.longitudinal_template_dof,
        interp=config.longitudinal_template_interp,
        cost=config.longitudinal_template_cost,
        convergence_threshold=config.longitudinal_template_convergence_threshold,
        thread_pool=config.longitudinal_template_thread_pool,
    )

but it might just be a gap in my understanding.


Successfully merging this pull request may close these issues.

πŸ› No resources for warp_longitudinal_T1w_to_template