Description
Deciphering brain architecture at the system level requires the ability to quantitatively map its structure with cellular and subcellular resolution. Besides posing significant challenges to current optical microscopy methods, this ambitious goal requires the development of a new generation of tools to make sense of the huge amount of raw images generated, which can easily exceed several terabytes for a single sample. We present an integrated pipeline that transforms the images from a collection of voxel gray levels into a semantic representation of the sample.
As a first step, the hundreds of adjacent tiles produced by the microscope need to be aligned and fused together. To this end, we developed ZetaStitcher, a stitching software that computes the globally optimal alignment of imaging datasets as large as 8 TB in less than an hour. The fused volume is then generated virtually, without the need to create a physical copy of the dataset, by means of a dedicated API.
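As an illustrative sketch, access to the virtually fused volume through the Python API looks roughly as follows (the stitch.yml file name and the indexing pattern follow ZetaStitcher's documented usage; exact details may differ between versions):

    from zetastitcher import VirtualFusedVolume

    # Open the dataset using the alignment metadata computed by the
    # stitching step; no physical fused copy is written to disk.
    vfv = VirtualFusedVolume('stitch.yml')

    print(vfv.shape)  # overall shape of the virtual volume

    # Request an arbitrary sub-volume: only the tiles overlapping the
    # region of interest are loaded and fused on the fly.
    substack = vfv[0, ..., 1000:1005]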
The virtually fused volume is then processed to extract meaningful information. We demonstrate two complementary approaches based on deep convolutional networks. In one case, a 3D conv-net is used to 'semantically deconvolve' the image [Frasconi et al., Bioinformatics 2014], allowing accurate localization of neuronal bodies with standard clustering algorithms (e.g., mean shift). The scalability of this approach is demonstrated by mapping the whole-brain spatial distribution of different neuronal populations with single-cell resolution.
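As a minimal sketch of the localization step, assume a trained 3D network (here a hypothetical model) has produced a semantically deconvolved substack; soma centroids can then be recovered with scikit-learn's mean shift (the threshold and bandwidth values are illustrative):

    import numpy as np
    from sklearn.cluster import MeanShift

    def locate_somata(deconvolved, threshold=0.5, bandwidth=10):
        """Cluster above-threshold voxels of a semantically deconvolved
        substack and return one centroid per detected neuronal body."""
        # Coordinates of the voxels the network marked as likely somata.
        coords = np.argwhere(deconvolved > threshold)
        if coords.size == 0:
            return np.empty((0, 3))
        # Mean shift groups nearby voxels; the bandwidth (in voxels)
        # should roughly match the expected soma radius.
        ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
        ms.fit(coords)
        return ms.cluster_centers_

    # deconvolved = model(substack)  # hypothetical trained 3D conv-net
    # centroids = locate_somata(deconvolved)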
To go beyond simple localization, we exploited a 2D conv-net that estimates for each pixel the probability of being part of a neuron [Mazzamuto et al., LNCS 2018]. The output of the net is then processed with a contour-finding algorithm, yielding a reliable segmentation of cell morphology. This information can be used to classify neurons, expanding the potential of chemical labeling strategies.
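A similarly hedged sketch of this second stage, assuming a hypothetical 2D network model that outputs a per-pixel probability map, uses scikit-image's contour finding (the 0.5 level is illustrative):

    import numpy as np
    from skimage import measure

    def segment_neurons(prob_map, level=0.5):
        """Extract closed contours delimiting putative neurons from a
        2D map of per-pixel neuron probabilities in [0, 1]."""
        contours = measure.find_contours(prob_map, level)
        # Keep closed contours only: they delimit whole cell bodies.
        return [c for c in contours if np.allclose(c[0], c[-1])]

    # contours = segment_neurons(model(plane))
    # Morphological descriptors of each contour (area, elongation, ...)
    # can then feed a classifier of neuronal type.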