Different AMA (from Overleaf)
smoia committed Jan 8, 2024
1 parent 6be4f59 commit bd2ae04
Showing 15 changed files with 1,055 additions and 54 deletions.
988 changes: 988 additions & 0 deletions ama.bst

Large diffs are not rendered by default.

43 changes: 28 additions & 15 deletions main.tex
@@ -1,4 +1,4 @@
\documentclass[10pt,a4paper,twocolumns]{proc}
\documentclass[12pt,a4paper]{proc}

%--------------------------------------------
% Input and language
@@ -7,6 +7,15 @@
\usepackage[T1]{fontenc}
\usepackage[spanish,german,british]{babel}

%--------------------------------------------
% AMA-style
%--------------------------------------------
\usepackage{setspace}\doublespacing
\usepackage[superscript]{cite}
\usepackage{indentfirst}
\author{}\date{} % ignore LaTeX's author and date commands (they print below the title)
\bibliographystyle{ama}
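These AMA-style pieces can be seen end-to-end in a minimal compilable sketch (the citation key `someKey' and the bibliography file name `refs' are placeholders, not part of this project):

```latex
\documentclass[12pt,a4paper]{proc}
\usepackage{setspace}\doublespacing   % AMA manuscripts are double-spaced
\usepackage[superscript]{cite}        % superscript numeric citations
\usepackage{indentfirst}              % indent the first paragraph after a heading
\author{}\date{}                      % suppress author/date below the title
\bibliographystyle{ama}               % requires ama.bst in the project

\title{A minimal AMA-style document}
\begin{document}
\maketitle
A claim backed by a superscript citation.\cite{someKey}
\bibliography{refs} % placeholder .bib file name
\end{document}
```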

%--------------------------------------------
% License
%--------------------------------------------
@@ -54,14 +63,14 @@
\usepackage{textgreek,amsmath,nicefrac}

%--------------------------------------------
% Images and bibliography (with oxford comma)
% Images and bibliography
%--------------------------------------------
\usepackage{caption,cleveref}
\usepackage[super,sort&compress]{natbib}
\setcitestyle{comma,numbers,super,open={},close={}}
\makeatletter
\renewcommand\@biblabel[1]{#1.}
\makeatother
% \usepackage[super,sort&compress]{natbib}
% \setcitestyle{comma,numbers,super,open={},close={}}
% \makeatletter
% \renewcommand\@biblabel[1]{#1.}
% \makeatother
%--------------------------------------------
% Use french spacing
%--------------------------------------------
@@ -77,7 +86,6 @@
% \usepackage{lineno}
% \linenumbers


\title{Proceedings of the OHBM Brainhack 2022}
%\subtitle{subtitle}

@@ -101,9 +109,7 @@

\begin{document}

\maketitle

\authors{Stefano Moia\textsuperscript{1, 2, 3}, %
\hfill\authors{Stefano Moia\textsuperscript{1, 2, 3}, %
Hao-Ting Wang\textsuperscript{1, 4}, %
Anibal S. Heinsfeld\textsuperscript{5, 6}, %
Dorota Jarecka\textsuperscript{7}, %
@@ -280,6 +286,12 @@
}
\\

\vspace{-0.25in} % reduce space before title
{\let\newpage\relax\maketitle} % print title
\vspace{-1in} % reduce space after title



\begin{abstract}
OHBM Brainhack 2022 took place in June 2022. The first hybrid OHBM hackathon, it had an in-person component taking place in Glasgow and three hubs around the globe to improve inclusivity and fit as many timezones as possible.
In the buzzing setting of the Queen Margaret Union and of the virtual platform, 23 projects were presented for development.
@@ -291,7 +303,7 @@ \section*{Introduction}
The Organisation of Human Brain Mapping BrainHack (shortened to OHBM
Brainhack herein) is a yearly satellite event of the main OHBM
meeting, organised by the Open Science Special Interest Group following
the model of Brainhack hackathons\citep{Gau2021}.
the model of Brainhack hackathons\cite{Gau2021}.
Where other hackathons set up a competitive environment based on
outperforming other participants' projects, Brainhacks foster a
collaborative environment in which participants can freely collaborate
@@ -471,8 +483,8 @@ \section{Platforms, website, and IT}

\section{Project Reports}

The peculiar nature of a Brainhack\citep{Gau2021} reflects in the nature of the projects developed during the event, that can span very different types of tasks.
While most projects feature more \'hackathon-style\' software development, in the form of improving software integration (\Cref{sec:DLDI}), API refactoring (\Cref{sec:Neuroscout}), or creation of new toolboxes and platforms (\Cref{sec:NeuroCausal,sec:NARPS,sec:pymc}), the inclusion of newcomers and participants with less strong software development skills can foster projects oriented to user testing (\Cref{sec:DLC,sec:NARPS}) or documentation compilation (\Cref{sec:physiopy}).
The peculiar nature of a Brainhack\cite{Gau2021} is reflected in the nature of the projects developed during the event, which can span very different types of tasks.
While most projects feature more `hackathon-style' software development, in the form of improving software integration (\Cref{sec:DLDI}), API refactoring (\Cref{sec:Neuroscout}), or creation of new toolboxes and platforms (\Cref{sec:NeuroCausal,sec:NARPS,sec:pymc}), the inclusion of newcomers and participants with weaker software development skills can foster projects oriented to user testing (\Cref{sec:DLC,sec:NARPS}) or documentation compilation (\Cref{sec:physiopy}).
The scientific scopes of Brainhacks were reflected in projects revolving around data exploration (\Cref{sec:AHEAD,sec:HyppoMRIQC}) or model development (\Cref{sec:pymc}), or adding aspects of open science practices (namely, the Brain Imaging Data Structure) to toolboxes (\Cref{sec:FLUX,sec:vasomosaic}).
Finally, fostering a collaborative environment and avoiding pitching projects against each other not only opens up the possibility for participants to fluidly move between different groups, but also allows projects whose sole aim is supporting other projects (\Cref{sec:BHC}), learning new skills while having fun (\Cref{sec:explodingbrains}), or fostering discussions and conversations among participants to improve the adoption of open science practices (\Cref{sec:metadata}).

@@ -508,7 +520,7 @@ \section{Conclusion and future directions}

The organisation managed to provide a positive onsite environment,
aiming to allow participants to self-organise in the spirit of the
Brainhack\citep{Gau2021}, with plenty of moral - and physical - support.
Brainhack\cite{Gau2021}, with plenty of moral - and physical - support.

The technical setup, based on a heavily automated flow to streamline project
submission, was a fundamental help to the organisation
@@ -541,6 +553,7 @@ \section{Conclusion and future directions}
team experiment, and we hope that our findings will be helpful to the
organisers of future Brainhack events.

\bibliographystyle{ama}
\printbibliography

\end{document}
10 changes: 5 additions & 5 deletions summaries/VASOMOSAIC.tex
Original file line number Diff line number Diff line change
@@ -11,17 +11,17 @@ \subsection{MOSAIC for VASO fMRI}\label{sec:vasomosaic}
\"Omer Faruk G\"ulban, %
Benedikt A. Poser}

Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging\citep{Huber2021a}. Currently, the most popular sequence for VASO at modern SIEMENS scanners is the one by \textcite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS' reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically:
Vascular Space Occupancy (VASO) is a functional magnetic resonance imaging (fMRI) method that is used for high-resolution cortical layer-specific imaging\cite{Huber2021a}. Currently, the most popular sequence for VASO on modern SIEMENS scanners is the one by Stirnberg and St\"ocker\cite{Stirnberg2021a} from the DZNE in Bonn, which is employed at more than 30 research labs worldwide. This sequence concomitantly acquires fMRI BOLD and blood volume signals. In the SIEMENS reconstruction pipeline, these two complementary fMRI contrasts are mixed together within the same time series, making the outputs counter-intuitive for users. Specifically:

\begin{itemize}
\item The \'raw\' NIfTI converted time-series are not BIDS compatible (see \url{https://github.com/bids-standard/bids-specification/issues/1001}).
\item The `raw' NIfTI converted time-series are not BIDS compatible (see \url{https://github.com/bids-standard/bids-specification/issues/1001}).

\item The order of odd and even BOLD and VASO image TRs is unprincipled, making the ordering dependent on the specific implementation of NIfTI converters.
\end{itemize}
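The interleaving issue above can be sketched with a toy 4D series. Note that which contrast comes first is exactly what depends on the converter, so the even-BOLD/odd-VASO assignment below is an illustrative assumption:

```python
import numpy as np

# Toy 4D time series (x, y, z, t): volume t is filled with the value t,
# standing in for alternating BOLD/VASO acquisitions.
n_vols = 10
data = np.stack([np.full((2, 2, 2), t) for t in range(n_vols)], axis=-1)

# ASSUMPTION for illustration: even volumes are BOLD, odd volumes are VASO.
# In practice the order depends on the NIfTI converter implementation.
bold = data[..., 0::2]  # volumes 0, 2, 4, ...
vaso = data[..., 1::2]  # volumes 1, 3, 5, ...

print(bold.shape, vaso.shape)  # each series holds half of the volumes
```

If the converter happens to emit the other order, the two slices are simply swapped, which is precisely why a principled, sequence-level sorting (as provided by the MOSAIC functor) is preferable to post-hoc splitting.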

Workarounds with 3D distortion correction result in interpolation artifacts. Alternative workarounds without MOSAIC decorators result in unnecessarily large data sizes.

In the previous Brainhack\citep{Gau2021}, we extended the existing 3D-MOSAIC functor that was previously developed by Benedikt Poser and Philipp Ehses. This functor had been previously used to sort volumes of images by dimensions of echo-times, by RF-channels, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is representing BOLD and VASO contrast).
In the previous Brainhack\cite{Gau2021}, we extended the existing 3D-MOSAIC functor that was previously developed by Benedikt Poser and Philipp Ehses. This functor had previously been used to sort volumes of images by dimensions of echo-times, by RF-channels, and by magnitude and phase signals. In this Brainhack, we successfully extended and validated this functor to also support the dimensionality of SETs (that is, representing BOLD and VASO contrasts).

We are happy to share the compiled SIEMENS ICE (Image Calculation Environment) functor that does this sorting. Current VASO users, who want to upgrade their reconstruction pipeline to get the MOSAIC sorting feature too, can reach out to Renzo Huber ([email protected]) or R\"udiger Stirnberg ([email protected]).

@@ -34,8 +34,8 @@ \subsection{MOSAIC for VASO fMRI}\label{sec:vasomosaic}
\label{fig:VASOMOSAIC}
\end{figure}

Furthermore, Remi Gau, generated a template dataset that exemplifies how one could to store layer-fMRI VASO data. This includes all the meta data for raw and derivatives. Link to this VASO fMRI BIDS demo: \url{https://gin.g-node.org/RemiGau/ds003216/src/bids_demo}.
Furthermore, Remi Gau generated a template dataset that exemplifies how one could store layer-fMRI VASO data. This includes all the metadata for `raw' and `derivatives' data. Link to this VASO fMRI BIDS demo: \url{https://gin.g-node.org/RemiGau/ds003216/src/bids_demo}.

Acknowledgements: We thank Chris Rodgers for instructions on how to overwrite existing reconstruction binaries on the SIEMENS scanner without rebooting. We thank David Feinberg, Alex Beckett and Samantha Ma for helping in testing the new reconstruction binaries at the Feinbergatron scanner in Berkeley via remote scanning. We thank Maastricht University Faculty of Psychology and Neuroscience for supporting this project with 2.5 hours of \'development scan time\'.
Acknowledgements: We thank Chris Rodgers for instructions on how to overwrite existing reconstruction binaries on the SIEMENS scanner without rebooting. We thank David Feinberg, Alex Beckett and Samantha Ma for helping in testing the new reconstruction binaries at the Feinbergatron scanner in Berkeley via remote scanning. We thank Maastricht University Faculty of Psychology and Neuroscience for supporting this project with 2.5 hours of `development scan time'.

\end{document}
8 changes: 4 additions & 4 deletions summaries/ahead-project.tex
Original file line number Diff line number Diff line change
@@ -12,16 +12,16 @@ \subsection{Exploring the AHEAD brains together}\label{sec:AHEAD}
Pierre-Louis Bazin}

\subsubsection{Introduction}
One of the long-standing goals of neuroanatomy is to compare the cyto- and myeloarchitecture of the human brain. The recently made available 3D whole-brain post-mortem data set provided by \textcite{Alkemade2022} includes multiple microscopy contrasts and 7-T quantitative multi-parameter MRI reconstructed at 200µm from two human brains. Through the co-registration across MRI and microscopy modalities, this data set provides a unique direct comparison between histological markers and quantitative MRI parameters for the same human brain. In this BrainHack project, we explored this dataset, focusing on: (i) data visualization in online open science platforms, (ii) data integration of quantitative MRI with microscopy, (iii) data analysis of cortical profiles from a selected region of interest.
One of the long-standing goals of neuroanatomy is to compare the cyto- and myeloarchitecture of the human brain. The recently released 3D whole-brain post-mortem data set provided by Alkemade and colleagues\cite{Alkemade2022} includes multiple microscopy contrasts and 7-T quantitative multi-parameter MRI reconstructed at 200µm from two human brains. Through the co-registration across MRI and microscopy modalities, this data set provides a unique direct comparison between histological markers and quantitative MRI parameters for the same human brain. In this BrainHack project, we explored this dataset, focusing on: (i) data visualization in online open science platforms, (ii) data integration of quantitative MRI with microscopy, (iii) data analysis of cortical profiles from a selected region of interest.


\subsubsection{Results}

Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. Here we tested two different infrastructures: BrainBox \url{https://brainbox.pasteur.fr/}, a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data, see e.g.\citep{heuer_evolution_2019}, and Dandi Archive \url{https://dandiarchive.org/}, an online repository of microscopy data with links to Neuroglancer \url{https://github.com/google/neuroglancer}. While Brainbox could not handle the high resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A).
Visualization and annotation of large neuroimaging data sets can be challenging, in particular for collaborative data exploration. Here we tested two different infrastructures: BrainBox (\url{https://brainbox.pasteur.fr/}), a web-based visualization and annotation tool for collaborative manual delineation of brain MRI data (see e.g.\cite{heuer_evolution_2019}), and the Dandi Archive (\url{https://dandiarchive.org/}), an online repository of microscopy data with links to Neuroglancer (\url{https://github.com/google/neuroglancer}). While BrainBox could not handle the high-resolution data well, Neuroglancer visualization was successful after conversion to the Zarr microscopy format (\Cref{fig:ahead}A).

To help users explore the original high-resolution microscopy sections, we also built a python notebook to automatically query the stains around a given MNI coordinate using the Nighres toolbox~\citep{huntenburg_nighres_2018} (\Cref{fig:ahead}B).
To help users explore the original high-resolution microscopy sections, we also built a python notebook to automatically query the stains around a given MNI coordinate using the Nighres toolbox~\cite{huntenburg_nighres_2018} (\Cref{fig:ahead}B).

For the cortical profile analysis we restricted our analysis on S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). For now, we used the MRI-quantitative-R1 contrast to define, segment the region of interest and compute cortical depth measurement. In ITK-SNAP\citep{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the ‘hand knob’ in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection,\citep{Cox1996}). Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast\citep{Smith2004} for initial tissues probability map, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator\citep{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from LAYNII open source software\citep{Huber2021} to compute the equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, bieloschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C.
For the cortical profile analysis we restricted our analysis to S1 (BA3b) as a part of the somato-motor area from one hemisphere of an individual human brain. S1 is rather thin (\(\sim\)2mm) and it has a highly myelinated layer 4 (see arrow in \Cref{fig:ahead}C). In a future step, we are aiming to characterize differences between S1 (BA3b) and M1 (BA4). For now, we used the MRI-quantitative-R1 contrast to define and segment the region of interest and to compute cortical depth measurements. In ITK-SNAP\cite{Yushkevich2006} we defined the somato-motor area by creating a spherical mask (radius 16.35mm) around the `hand knob' in M1. To improve the intensity homogeneity of the qMRI-R1 images, we ran a bias field correction (N4BiasFieldCorrection\cite{Cox1996}). Tissue segmentation was restricted to S1 and was obtained by combining four approaches: (i) fsl-fast\cite{Smith2004} for the initial tissue probability maps, (ii) semi-automatic histogram fitting in ITK-SNAP, (iii) Segmentator\cite{Gulban2018}, and (iv) manual editing. We used the LN2\_LAYERS program from the LAYNII open source software\cite{Huber2021} to compute equi-volume cortical depth measurements for the gray matter. Finally, we evaluated cortical depth profiles for three quantitative MRI contrasts (R1, R2, proton density) and three microscopy contrasts (thionin, Bielschowsky, parvalbumin) by computing a voxel-wise 2D histogram of image intensity (\Cref{fig:ahead}C). Some challenges are indicated by arrows 2 and 3 in the lower part of \Cref{fig:ahead}C.
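The depth-vs-intensity profile computation described here can be sketched with synthetic stand-ins for the two maps (the array sizes, bin counts, and linear intensity trend below are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real maps: an equi-volume cortical depth in [0, 1]
# (as produced by LAYNII's LN2_LAYERS) and a quantitative contrast (e.g. R1)
# sampled at the same gray-matter voxels.
n_voxels = 5000
depth = rng.uniform(0.0, 1.0, size=n_voxels)
intensity = 1.0 + 0.5 * depth + rng.normal(0.0, 0.1, size=n_voxels)

# Voxel-wise 2D histogram: cortical depth bins vs. intensity bins.
hist, depth_edges, int_edges = np.histogram2d(depth, intensity, bins=(20, 50))

print(hist.shape)  # counts per (depth, intensity) bin
```

Each column of `hist` along the depth axis then gives the intensity distribution at one cortical depth, which is what makes features such as a highly myelinated layer 4 visible as a band in the profile.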

From this Brainhack project, we conclude that the richness of the data set must be exploited from multiple points of view, from enhancing the integration of MRI with microscopy data in visualization software to providing optimized multi-contrast and multi-modality data analysis pipeline for high-resolution brain regions.

