1.8 General Software

Convolutional neural networks for classification of Alzheimer’s disease: Overview and reproducible evaluation.

Numerous machine learning (ML) approaches have been proposed for automatic classification of Alzheimer’s disease (AD) from brain imaging data. In particular, over 30 papers have proposed to use convolutional neural networks (CNN) for AD classification from anatomical MRI. However, the classification performance is difficult to compare across studies due to variations in components such as participant selection, image preprocessing or validation procedure. Moreover, these studies are difficult to reproduce because their frameworks are not publicly accessible and because implementation details are lacking. Lastly, some of these papers may report a biased performance due to inadequate or unclear validation or model selection procedures. In the present work, we aim to address these limitations through three main contributions. First, we performed a systematic literature review. We identified four main types of approaches: i) 2D slice-level, ii) 3D patch-level, iii) ROI-based and iv) 3D subject-level CNN. Moreover, we found that more than half of the surveyed papers may have suffered from data leakage and thus reported biased performance. Our second contribution is the extension of our open-source framework for classification of AD using CNN and T1-weighted MRI. The framework comprises previously developed tools to automatically convert ADNI, AIBL and OASIS data into the BIDS standard, and a modular set of image preprocessing procedures, classification architectures and evaluation procedures dedicated to deep learning. Finally, we used this framework to rigorously compare different CNN architectures. The data were split into training/validation/test sets at the very beginning and only the training/validation sets were used for model selection. To avoid any overfitting, the test sets were left untouched until the end of the peer-review process. Overall, the different 3D approaches (3D-subject, 3D-ROI, 3D-patch) achieved similar performance, while that of the 2D slice approach was lower. Of note, the different CNN approaches did not perform better than an SVM with voxel-based features. The different approaches generalized well to similar populations but not to datasets with different inclusion criteria or demographic characteristics. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software (www.clinica.run) and the paper-specific code is available at: https://github.com/aramis-lab/AD-DL.

URL: https://github.com/aramis-lab/AD-DL
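
As an illustration of the leakage-safe evaluation described above, the sketch below shows a subject-level train/validation/test split in which all images from one participant stay in a single split. It is a minimal example assuming scikit-learn, not the actual Clinica/AD-DL code; the function name `subject_level_split` is hypothetical.

```python
# Minimal sketch (hypothetical, not the AD-DL implementation): split samples
# by participant so no subject's images leak across train/val/test.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def subject_level_split(image_paths, labels, subject_ids, seed=0):
    X = np.asarray(image_paths)
    y = np.asarray(labels)
    groups = np.asarray(subject_ids)

    # Hold out 20% of participants as an untouched test set first.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    dev_idx, test_idx = next(outer.split(X, y, groups))

    # Split the remaining participants into train (75%) and validation (25%);
    # only these two sets are used for model selection.
    inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
    tr, val = next(inner.split(X[dev_idx], y[dev_idx], groups[dev_idx]))
    return dev_idx[tr], dev_idx[val], test_idx
```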

Beyond the eye: A relational model for early dementia detection using retinal OCTA images.

Early detection of dementia, such as Alzheimer’s disease (AD) or mild cognitive impairment (MCI), is essential to enable timely intervention and potential treatment. Accurate detection of AD/MCI is challenging due to the high complexity, cost, and often invasive nature of current diagnostic techniques, which limit their suitability for large-scale population screening. Given the shared embryological origins and physiological characteristics of the retina and brain, retinal imaging is emerging as a potentially rapid and cost-effective alternative for the identification of individuals with or at high risk of AD. In this paper, we present PolarNet+, a novel model that uses retinal optical coherence tomography angiography (OCTA) to discriminate early-onset AD (EOAD) and MCI subjects from controls. Our method first maps OCTA images from Cartesian coordinates to polar coordinates, allowing approximate sub-region calculation that implements the clinician-friendly Early Treatment Diabetic Retinopathy Study (ETDRS) grid analysis. We then introduce a multi-view module to serialize and analyze the images along three dimensions for comprehensive, clinically useful information extraction. Finally, we abstract the sequence embedding into a graph, transforming the detection task into a general graph classification problem. A regional relationship module is applied after the multi-view module to explore the relationships between the sub-regions. Such regional relationship analyses validate known eye-brain links and reveal new discriminative patterns. The proposed model is trained, tested, and validated on four retinal OCTA datasets, including 1,671 participants with AD, MCI, and healthy controls. Experimental results demonstrate the performance of our model in detecting AD and MCI, with AUCs of 88.69% and 88.02%, respectively. Our results provide evidence that retinal OCTA imaging, coupled with artificial intelligence, may serve as a rapid and non-invasive approach for large-scale screening of AD and MCI. The code is available at https://github.com/iMED-Lab/PolarNet-Plus-PyTorch, and the dataset is available upon request.

URL: https://github.com/iMED-Lab/PolarNet-Plus-PyTorch
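
The Cartesian-to-polar remapping step lends itself to a compact illustration. The sketch below, assuming OpenCV and a roughly fovea-centered en-face OCTA image, unrolls the image so that ETDRS-style concentric rings become vertical bands; it is not the authors' PolarNet+ code, and `to_polar` is a hypothetical helper.

```python
# Minimal sketch (hypothetical, not PolarNet+): map an en-face OCTA image
# from Cartesian to polar coordinates around an assumed fovea center.
import cv2
import numpy as np

def to_polar(octa: np.ndarray, center=None, out_size=(224, 224)) -> np.ndarray:
    h, w = octa.shape[:2]
    if center is None:
        center = (w / 2.0, h / 2.0)   # assume the fovea is roughly centered
    max_radius = min(center[0], center[1], w - center[0], h - center[1])
    # Rows of the output correspond to angle, columns to radius, so
    # ETDRS-like rings become vertical bands that are easy to pool over.
    return cv2.warpPolar(octa, out_size, center, max_radius,
                         cv2.WARP_POLAR_LINEAR)
```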

Learning to synthesise the ageing brain without longitudinal data.

How will my face look when I get older? Or, for a more challenging question: How will my brain look when I get older? To answer this question, one must devise (and learn from data) a multivariate auto-regressive function which, given an image and a desired target age, generates an output image. While collecting data for faces may be easier, collecting longitudinal brain data is not trivial. We propose a deep learning-based method that learns to simulate subject-specific brain ageing trajectories without relying on longitudinal data. Our method synthesises images conditioned on two factors: age (a continuous variable) and status of Alzheimer’s Disease (AD, an ordinal variable). With an adversarial formulation we learn the joint distribution of brain appearance, age and AD status, and define reconstruction losses to address the challenging problem of preserving subject identity. We compare with several benchmarks using two widely used datasets. We evaluate the quality and realism of synthesised images using ground-truth longitudinal data and a pre-trained age predictor. We show that, despite the use of cross-sectional data, our model learns patterns of gray matter atrophy in the middle temporal gyrus in patients with AD. To demonstrate generalisation ability, we train on one dataset and evaluate predictions on the other. In conclusion, our model shows an ability to separate age, disease influence and anatomy using only 2D cross-sectional data, which should be useful in large studies of neurodegenerative disease that aim to combine several data sources. To facilitate such future studies by the community at large, our code is made available at https://github.com/xiat0616/BrainAgeing.

URL: https://github.com/xiat0616/BrainAgeing
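
To make the conditioning idea concrete, here is a minimal PyTorch sketch of a generator that receives a target age alongside the input image by broadcasting the age to an extra input channel. This is only one common way to condition a convolutional generator on a continuous variable, not the architecture of this paper; the class name is hypothetical.

```python
# Minimal sketch (hypothetical, not this paper's model): condition a
# convolutional generator on a continuous target age by broadcasting the
# age value to an extra input channel.
import torch
import torch.nn as nn

class AgeConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, target_age: torch.Tensor):
        # image: (B, 1, H, W); target_age: (B,), normalised to [0, 1]
        b, _, h, w = image.shape
        age_map = target_age.view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([image, age_map], dim=1))
```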

Inferring brain causal and temporal-lag networks for recognizing abnormal patterns of dementia.

Brain functional network analysis has become a popular method to explore the laws of brain organization and identify biomarkers of neurological diseases. However, it is still a challenging task to construct an ideal brain network due to the limited understanding of the human brain. Existing methods often ignore the impact of temporal lag on the results of brain network modeling, which may lead to unreliable conclusions. To overcome this issue, we propose a novel brain functional network estimation method, which can simultaneously infer the causal mechanisms and temporal-lag values among brain regions. Specifically, our method converts lag learning into an instantaneous effect estimation problem, and further embeds the search objectives into a deep neural network model as parameters to be learned. To verify the effectiveness of the proposed estimation method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, comparing the proposed model with several existing methods, including correlation-based and causality-based methods. The experimental results show that the brain networks constructed by the proposed estimation method not only achieve promising classification performance, but also exhibit characteristics consistent with physiological mechanisms. Our approach provides a new perspective for understanding the pathogenesis of brain diseases. The source code is released at https://github.com/NJUSTxiazw/CTLN.

URL: https://github.com/NJUSTxiazw/CTLN
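
For readers unfamiliar with temporal-lag estimation, the classical baseline below illustrates the concept the paper builds on: picking the shift that maximises cross-correlation between two ROI time series. This is a simple point of comparison, not the proposed CTLN model, which learns lags jointly with causal structure inside a neural network.

```python
# Classical baseline (not the CTLN model): estimate the temporal lag between
# two ROI time series as the shift maximising their correlation.
import numpy as np

def estimate_lag(x: np.ndarray, y: np.ndarray, max_lag: int = 10) -> int:
    """Return the lag (in time points) at which y best follows x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for k in lags:
        xs = x[max(0, -k): len(x) - max(0, k)]
        ys = y[max(0, k): len(y) - max(0, -k)]
        corrs.append(np.corrcoef(xs, ys)[0, 1])
    return lags[int(np.argmax(corrs))]
```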

Precise and Rapid Whole-Head Segmentation from Magnetic Resonance Images of Older Adults using Deep Learning.

Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This paves the way for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using data from healthy young adults and may therefore neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that were manually corrected and meticulously reviewed. Each T1-weighted MRI volume is segmented into 11 tissue types, including white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff Distance of 0.21 versus the runner-up's 0.36 (lower is better). GRACE can segment a whole-head MRI in about 3 seconds, while the fastest software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-MRI scans with favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available online and open to the research community at https://github.com/lab-smile/GRACE.

URL: https://github.com/lab-smile/GRACE
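
The reported metric can be illustrated with a short sketch. Assuming SciPy, the snippet below computes a symmetric Hausdorff distance over the foreground voxels of two binary masks; it is an illustration of the metric, not the GRACE evaluation code.

```python
# Illustration of the metric (not the GRACE evaluation code): symmetric
# Hausdorff distance between the foreground voxels of two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    pts_a = np.argwhere(seg_a > 0)   # foreground voxel coordinates
    pts_b = np.argwhere(seg_b > 0)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```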

“Recon-all-clinical”: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI.

Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray-white matter contrast, typically a T1-weighted scan with 1 mm resolution. This requirement precludes application of these techniques to most MRI scans acquired for clinical purposes, since they are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using vast amounts of existing clinical data, we introduce recon-all-clinical, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions (SDFs), and classical geometry processing for accurate surface placement while maintaining topological and geometric constraints. The method does not require retraining for different acquisitions, thus simplifying the analysis of heterogeneous clinical datasets. We evaluated recon-all-clinical on multiple public datasets (ADNI, HCP, AIBL, OASIS), as well as a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across different MRI contrasts and resolutions, consistently achieving high accuracy in parcellation. Cortical thickness estimates are precise enough to capture aging effects, independently of MRI contrast, even though accuracy varies with slice thickness. Our method is publicly available at https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical, enabling researchers to perform detailed cortical analysis on the huge amounts of already existing clinical MRI scans. This advancement may be particularly valuable for studying rare diseases and underrepresented populations where research-grade MRI data is scarce.

URL: https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical
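
The core idea of extracting a surface from a predicted signed distance function can be sketched with standard tools. The example below, assuming scikit-image, extracts the zero level set of an SDF volume with marching cubes; the actual recon-all-clinical geometry processing additionally enforces topological and geometric constraints, which this sketch does not.

```python
# Illustration only (not recon-all-clinical's geometry processing): extract
# the zero level set of a predicted SDF volume as a triangle mesh.
import numpy as np
from skimage.measure import marching_cubes

def sdf_to_surface(sdf: np.ndarray, voxel_size=(1.0, 1.0, 1.0)):
    # Vertices land where the SDF crosses zero, i.e. on the predicted surface.
    verts, faces, normals, _ = marching_cubes(sdf, level=0.0,
                                              spacing=voxel_size)
    return verts, faces, normals
```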

White matter hyperintensities segmentation using the ensemble U-Net with multi-scale highlighting foregrounds.

White matter hyperintensities (WMHs) are abnormal signals within the white matter region on human brain MRI and have been associated with aging processes, cognitive decline, and dementia. In the current study, we proposed a U-Net with multi-scale highlighting foregrounds (HF) for WMH segmentation. Our method, U-Net with HF, is designed to improve the detection of WMH voxels with partial volume effects. We evaluated the segmentation performance of the proposed approach using the WMH Segmentation Challenge training dataset. Then we assessed the clinical utility of the WMH volumes that were automatically computed using our method and the Alzheimer’s Disease Neuroimaging Initiative database. We demonstrated that the U-Net with HF significantly improved the detection of WMH voxels at the boundary of the WMHs or in small WMH clusters, both quantitatively and qualitatively. To date, the proposed method has achieved the best overall evaluation scores, the highest Dice similarity index, and the best F1-score among 39 methods submitted to the WMH Segmentation Challenge, which was initially hosted at MICCAI 2017 and continues to accept new submissions. The evaluation of clinical utility showed that the WMH volume automatically computed using U-Net with HF was significantly associated with cognitive performance and improved the classification between cognitively normal and Alzheimer’s disease subjects, and between patients with mild cognitive impairment and those with Alzheimer’s disease. The implementation of our proposed method is publicly available on Docker Hub (https://hub.docker.com/r/wmhchallenge/pgs).

URL: https://hub.docker.com/r/wmhchallenge/pgs
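
The Dice similarity index used to rank challenge submissions is simple to state in code. The sketch below is an illustration of the metric, not the PGS evaluation pipeline.

```python
# Illustration of the metric (not the PGS pipeline): Dice similarity index
# between a predicted and a reference binary WMH mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```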

Metadata-conditioned generative models to synthesize anatomically-plausible 3D brain MRIs.

Recent advances in generative models have paved the way for enhanced generation of natural and medical images, including synthetic brain MRIs. However, most current AI research focuses on optimizing synthetic MRIs with respect to visual quality (such as signal-to-noise ratio) while lacking insights into their relevance to neuroscience. To generate high-quality T1-weighted MRIs relevant for neuroscience discovery, we present a two-stage Diffusion Probabilistic Model (called BrainSynth) that synthesizes high-resolution MRIs conditioned on metadata (such as age and sex). We then propose a novel procedure to assess the quality of BrainSynth according to how well its synthetic MRIs capture macrostructural properties of brain regions and how accurately they encode the effects of age and sex. Results indicate that more than half of the brain regions in our synthetic MRIs are anatomically plausible, i.e., the effect size between real and synthetic MRIs is small relative to biological factors such as age and sex. Moreover, the anatomical plausibility varies across cortical regions according to their geometric complexity. Even so, the MRIs generated by BrainSynth significantly improve the training of a predictive model to identify accelerated aging effects in an independent study. These results indicate that our model accurately captures the brain’s anatomical information and thus could enrich data for underrepresented samples in a study. The code of BrainSynth will be released as part of the MONAI project at https://github.com/Project-MONAI/GenerativeModels.

URL: https://github.com/Project-MONAI/GenerativeModels
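
One common way to inject metadata into a diffusion model is to embed it and add it to the denoiser's timestep embedding. The PyTorch sketch below shows this pattern for age (continuous) and sex (categorical); it illustrates the general idea only and is not the BrainSynth implementation.

```python
# Illustration of metadata conditioning (not BrainSynth): embed age and sex
# into a vector that can be added to a denoiser's timestep embedding.
import torch
import torch.nn as nn

class MetadataEmbedding(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.age_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(),
                                     nn.Linear(dim, dim))
        self.sex_emb = nn.Embedding(2, dim)

    def forward(self, age: torch.Tensor, sex: torch.Tensor) -> torch.Tensor:
        # age: (B, 1), normalised; sex: (B,) integer codes
        return self.age_mlp(age) + self.sex_emb(sex)
```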

Automated deep learning segmentation of high-resolution 7 Tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases.

Postmortem MRI allows brain anatomy to be examined at high resolution and pathology measures to be linked with morphometric measurements. However, automated segmentation methods for brain mapping in postmortem MRI are not well developed, primarily due to the limited availability of labeled datasets and the heterogeneity in scanner hardware and acquisition protocols. In this work, we present a high-resolution dataset of 135 postmortem human brain tissue specimens imaged at 0.3 mm³ isotropic resolution using a T2w sequence on a 7T whole-body MRI scanner. We developed a deep learning pipeline to segment the cortical mantle by benchmarking the performance of nine deep neural architectures, followed by post-hoc topological correction. We evaluate the reliability of this pipeline via overlap metrics with manual segmentation in 6 specimens, and intra-class correlation between cortical thickness measures extracted from the automatic segmentation and expert-generated reference measures in 36 specimens. We also segment four subcortical structures (caudate, putamen, globus pallidus, and thalamus), white matter hyperintensities, and the normal-appearing white matter, providing a limited evaluation of accuracy. We show generalization across whole-brain hemispheres in different specimens, and on unseen images acquired with a T2*w fast low angle shot (FLASH) sequence at 0.28 mm³ and 0.16 mm³ isotropic resolution at 7T. We report associations between localized cortical thickness and volumetric measurements across key regions, and semi-quantitative neuropathological ratings, in a subset of 82 individuals with Alzheimer’s disease (AD) continuum diagnoses. Our code, Jupyter notebooks, and the containerized executables are publicly available at the project webpage (https://pulkit-khandelwal.github.io/exvivo-brain-upenn/).

URL: https://pulkit-khandelwal.github.io/exvivo-brain-upenn/
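
The agreement analysis mentioned above (intra-class correlation between automated and expert thickness measures) can be sketched with the pingouin package. The helper below is hypothetical and assumes paired per-specimen measurements from the two raters; it is not the authors' analysis code.

```python
# Hypothetical sketch (not the authors' code): intra-class correlation
# between automated and expert cortical thickness measures via pingouin.
import pandas as pd
import pingouin as pg

def thickness_icc(specimen_ids, auto_vals, manual_vals) -> float:
    df = pd.DataFrame({
        "specimen": list(specimen_ids) * 2,
        "rater": ["auto"] * len(auto_vals) + ["manual"] * len(manual_vals),
        "thickness": list(auto_vals) + list(manual_vals),
    })
    icc = pg.intraclass_corr(data=df, targets="specimen",
                             raters="rater", ratings="thickness")
    return float(icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0])
```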

Hyperfusion: A hypernetwork approach to multimodal integration of tabular and medical imaging data for predictive modeling.

The integration of diverse clinical modalities, such as medical imaging and the tabular data extracted from patients’ Electronic Health Records (EHRs), is a crucial aspect of modern healthcare. Integrative analysis of multiple sources can provide a comprehensive understanding of a patient’s clinical condition, improving diagnosis and treatment decisions. Deep Neural Networks (DNNs) consistently demonstrate outstanding performance in a wide range of multimodal tasks in the medical domain. However, the complex endeavor of effectively merging medical imaging with clinical, demographic and genetic information represented as numerical tabular data remains a highly active and ongoing research pursuit. We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on EHR values and measurements. This approach aims to leverage the complementary information present in these modalities to enhance the accuracy of various medical applications. We demonstrate the strength and generality of our method on two different brain Magnetic Resonance Imaging (MRI) analysis tasks, namely, brain age prediction conditioned on the subject’s sex, and multi-class Alzheimer’s Disease (AD) classification conditioned on tabular data. We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods. A link to our code can be found at https://github.com/daniel4725/HyperFusion.

URL: https://github.com/daniel4725/HyperFusion
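
The hypernetwork idea at the heart of this method can be shown in a few lines: a small network maps the tabular EHR features to the weights of a layer that is then applied to the image representation, so the image processing is literally conditioned on the tabular data. The PyTorch sketch below illustrates this pattern and is not the HyperFusion architecture itself.

```python
# Illustration of the hypernetwork pattern (not HyperFusion): tabular EHR
# features generate the weights of a per-sample linear layer applied to the
# image embedding.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, tab_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.hyper = nn.Sequential(
            nn.Linear(tab_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim * in_dim + out_dim),
        )

    def forward(self, img_feat: torch.Tensor, tab: torch.Tensor):
        # img_feat: (B, in_dim); tab: (B, tab_dim)
        params = self.hyper(tab)
        W = params[:, :self.out_dim * self.in_dim]
        b = params[:, self.out_dim * self.in_dim:]
        W = W.view(-1, self.out_dim, self.in_dim)
        # Per-sample linear map: y = W x + b, conditioned on the EHR.
        return torch.bmm(W, img_feat.unsqueeze(-1)).squeeze(-1) + b
```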

Multi-Modal Diagnosis of Alzheimer’s Disease using Interpretable Graph Convolutional Networks.

The interconnection between brain regions in neurological disease encodes vital information for the advancement of biomarkers and diagnostics. Although graph convolutional networks are widely applied for discovering brain connection patterns that point to disease conditions, the potential of connection patterns arising from multiple imaging modalities has yet to be fully realized. In this paper, we propose a multi-modal sparse interpretable GCN framework (SGCN) for the detection of Alzheimer’s disease (AD) and its prodromal stage, mild cognitive impairment (MCI). In our experiments, SGCN learned the sparse regional importance probability to find signature regions of interest (ROIs), and the connective importance probability to reveal disease-specific brain network connections. We evaluated SGCN on the Alzheimer’s Disease Neuroimaging Initiative database with multi-modal brain images and demonstrated that the ROI features learned by SGCN were effective for enhancing AD status identification. The identified abnormalities were significantly correlated with AD-related clinical symptoms. We further interpreted the identified brain dysfunctions at the level of large-scale neural systems and sex-related connectivity abnormalities in AD/MCI. The salient ROIs and the prominent brain connectivity abnormalities interpreted by SGCN are important for developing novel biomarkers. These findings contribute to a better understanding of the network-based disorder via multi-modal diagnosis and offer the potential for precision diagnostics. The source code is available at https://github.com/Houliang-Zhou/SGCN.

URL: https://github.com/Houliang-Zhou/SGCN
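
As background for the GCN framework, the sketch below implements one standard graph-convolution step over a weighted brain connectivity matrix with symmetric normalisation. It shows the basic building block only, not SGCN's sparse interpretable extensions.

```python
# Background building block (not SGCN itself): one graph-convolution step
# over a weighted ROI connectivity matrix with symmetric normalisation.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) ROI features; adj: (N, N) connectivity weights
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(dim=1).clamp(min=1e-8).rsqrt()             # D^{-1/2}
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)         # D^-1/2 A D^-1/2
        return torch.relu(self.lin(a_norm @ x))
```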

ICAM-Reg: Interpretable Classification and Regression With Feature Attribution for Mapping Neurological Phenotypes in Individual Scans.

An important goal of medical imaging is to be able to precisely detect patterns of disease specific to individual scans; however, this is challenged in brain imaging by the degree of heterogeneity of shape and appearance. Traditional methods, based on image registration, historically fail to detect variable features of disease, as they utilise population-based analyses, suited primarily to studying group-average effects. In this paper we therefore take advantage of recent developments in generative deep learning to develop a method for simultaneous classification, or regression, and feature attribution (FA). Specifically, we explore the use of a VAE-GAN (variational autoencoder - generative adversarial network) for image translation, called ICAM, to explicitly disentangle class-relevant features from background confounds, for improved interpretability and regression of neurological phenotypes. We validate our method on the tasks of Mini-Mental State Examination (MMSE) cognitive test score prediction for the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort, as well as brain age prediction, for both neurodevelopment and neurodegeneration, using the developing Human Connectome Project (dHCP) and UK Biobank datasets. We show that the generated FA maps can be used to explain outlier predictions and demonstrate that the inclusion of a regression module improves the disentanglement of the latent space. Our code is freely available on GitHub: https://github.com/CherBass/ICAM.

URL: https://github.com/CherBass/ICAM

A benchmark for hypothalamus segmentation on T1-weighted MR images.

The hypothalamus is a small brain structure that plays essential roles in sleep regulation, body temperature control, and metabolic homeostasis. Hypothalamic structural abnormalities have been reported in neuropsychiatric disorders, such as schizophrenia, amyotrophic lateral sclerosis, and Alzheimer’s disease. Although magnetic resonance (MR) imaging is the standard examination method for evaluating this region, hypothalamic morphological landmarks are unclear, leading to subjectivity and high variability during manual segmentation. Due to these limitations, it is common to find contradicting results in the literature regarding hypothalamic volumetry. To the best of our knowledge, only two automated methods are available in the literature for hypothalamus segmentation, the first of which is our previous method based on U-Net. However, both methods present performance losses when predicting images from datasets different from those used in training. Therefore, this project presents a benchmark consisting of a diverse T1-weighted MR image dataset comprising 1381 subjects from IXI, CC359, OASIS, and MiLI (the latter created specifically for this benchmark). All data are provided with automatically generated hypothalamic masks, and a subset contains manually annotated masks. As a baseline, a method for fully automated segmentation of the hypothalamus on T1-weighted MR images with greater generalization ability is presented. The proposed method is a teacher-student-based model with two blocks, segmentation and correction, where the second block corrects the imperfections of the first. After using three datasets for training (MiLI, IXI, and CC359), the prediction performance of the model was measured on two test sets: the first, composed of data from IXI, CC359, and MiLI, achieved a Dice coefficient of 0.83; the second, from OASIS, a dataset not used for training, achieved a Dice coefficient of 0.74. The dataset, the baseline model, and all necessary code to reproduce the experiments are available at https://github.com/MICLab-Unicamp/HypAST and https://sites.google.com/view/calgary-campinas-dataset/hypothalamus-benchmarking. In addition, a leaderboard will be maintained with predictions for the test set submitted by anyone working on the same task.

URL: https://github.com/MICLab-Unicamp/HypAST, https://sites.google.com/view/calgary-campinas-dataset/hypothalamus-benchmarking

Reproducible evaluation of classification methods in Alzheimer’s disease: Framework and application to MRI and PET data.

A large number of papers have introduced novel machine learning and feature extraction methods for automatic classification of Alzheimer’s disease (AD). However, while the vast majority of these works use the public dataset ADNI for evaluation, they are difficult to reproduce because different key components of the validation are often not readily available. These components include selected participants and input data, image preprocessing and cross-validation procedures. The performance of the different approaches is also difficult to compare objectively. In particular, it is often difficult to assess which part of the method (e.g. preprocessing, feature extraction or classification algorithms) provides a real improvement, if any. In the present paper, we propose a framework for reproducible and objective classification experiments in AD using three publicly available datasets (ADNI, AIBL and OASIS). The framework comprises: i) automatic conversion of the three datasets into a standard format (BIDS); ii) a modular set of preprocessing pipelines, feature extraction and classification methods, together with an evaluation framework, that provide a baseline for benchmarking the different components. We demonstrate the use of the framework for a large-scale evaluation on 1960 participants using T1 MRI and FDG PET data. In this evaluation, we assess the influence of different modalities, preprocessing, feature types (regional or voxel-based features), classifiers, training set sizes and datasets. Performance was in line with the state of the art. FDG PET outperformed T1 MRI for all classification tasks. No difference in performance was found for the use of different atlases, image smoothing, partial volume correction of FDG PET images, or feature type. Linear SVM and L2-logistic regression resulted in similar performance and both outperformed random forests. The classification performance increased along with the number of subjects used for training. Classifiers trained on ADNI generalized well to AIBL and OASIS. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software (www.clinica.run) and the paper-specific code is available at: https://gitlab.icm-institute.org/aramislab/AD-ML.

URL: https://gitlab.icm-institute.org/aramislab/AD-ML
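
The kind of conventional baseline evaluated by this framework can be sketched with scikit-learn: a linear SVM on regional or voxel-based features, with the regularisation strength selected by cross-validation on the training data only. This is an illustrative example, not the AD-ML code.

```python
# Illustrative baseline (not the AD-ML code): linear SVM on imaging features
# with cross-validated choice of the regularisation strength C.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def fit_linear_svm(X: np.ndarray, y: np.ndarray) -> GridSearchCV:
    pipe = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    grid = {"linearsvc__C": [0.01, 0.1, 1.0, 10.0]}
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return GridSearchCV(pipe, grid, cv=cv,
                        scoring="balanced_accuracy").fit(X, y)
```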

Automated recognition and analysis of body bending behavior in C. elegans.

BACKGROUND: Locomotion behaviors of Caenorhabditis elegans play an important role in drug activity screening, anti-aging research, and toxicological assessment. Previous studies have provided important insights into drug activity screening, anti-aging, and toxicological research by manually counting the number of body bends. However, manual counting is low-throughput, labor-intensive, and prone to bias and error. RESULTS: In this paper, an algorithm is proposed for automatic counting and analysis of the body bending behavior of nematodes. First, numerical coordinate regression with a convolutional neural network is used to obtain the head and tail coordinates. Next, a curvature-based feature point extraction algorithm is used to calculate the feature points of the nematode centerline. Then the maximum distance from the peak point to the straight line connecting the pharynx and the tail is calculated, and the number of body bends is counted according to the change in this maximum distance per frame. CONCLUSION: Experiments demonstrate the effectiveness of the proposed algorithm. The accuracy of head coordinate prediction is 0.993, and the accuracy of tail coordinate prediction is 0.990. The Pearson correlation coefficient between the results of the automatic and manual counts of the number of body bends is 0.998, and the mean absolute error is 1.931. Different strains of nematodes are selected to analyze differences in body bending behavior, demonstrating a relationship between nematode vitality and lifespan. The code is freely available at https://github.com/hthana/Body-Bend-Count.

URL: https://github.com/hthana/Body-Bend-Count
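
The counting logic described in the abstract translates naturally into code: compute the signed distance of the centerline's farthest point from the head-tail line in each frame, then count sign alternations that exceed a straightness threshold. The NumPy sketch below is an illustration under those assumptions, not the authors' pipeline, and the threshold value is hypothetical.

```python
# Illustration under stated assumptions (not the authors' pipeline):
# count body bends from per-frame centerlines as sign alternations of the
# peak distance to the head-tail line. The threshold is hypothetical.
import numpy as np

def max_signed_distance(centerline: np.ndarray) -> float:
    """Signed distance of the farthest centerline point from the head-tail line."""
    head, tail = centerline[0], centerline[-1]
    axis = (tail - head) / np.linalg.norm(tail - head)
    normal = np.array([-axis[1], axis[0]])        # unit normal to the line
    d = (centerline - head) @ normal              # signed point-line distances
    return d[np.argmax(np.abs(d))]

def count_bends(per_frame_centerlines, threshold: float = 5.0) -> int:
    d = np.array([max_signed_distance(c) for c in per_frame_centerlines])
    signs = np.sign(d) * (np.abs(d) > threshold)  # zero when nearly straight
    signs = signs[signs != 0]
    return int(np.count_nonzero(np.diff(signs)))  # each alternation = 1 bend
```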