Mouse Brain Atlas Building System



This page documents the work of constructing the atlas and co-registering multiple subject brains to it.

The atlas contains an anatomical model with the mean/variance of each structure's position, and a set of texture classifiers, one per structure. For an unannotated image, the classifiers produce a set of 2D score maps, which are stacked to form 3D score maps indicating the likely position of each structure in the subject brain and providing anchors for registering the subject brain to the atlas. Registration consists of a global affine step that standardizes the pose of the subject brain, followed by an independent rigid transform for each structure that captures the brain-to-brain variance in structure position/pose. As subject brains are co-registered to the atlas, the mean/variance of each structure is re-computed and used to map future data.
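The two-stage registration described above can be sketched schematically as follows. This is a hedged sketch, not the pipeline's actual code; all function and variable names are mine.

```python
import numpy as np

# Schematic of the two-stage registration: a single global affine
# standardizes pose, then each structure gets its own rigid
# (rotation + translation) refinement.
def apply_affine(points, A, t):
    """points: (n, 3) array; A: (3, 3) linear part; t: (3,) translation."""
    return points @ A.T + t

def register(structure_points, A_global, t_global, rigid_per_structure):
    """rigid_per_structure: dict name -> (R, t) applied after the global affine."""
    out = {}
    for name, pts in structure_points.items():
        pts = apply_affine(pts, A_global, t_global)   # global pose standardization
        R, t = rigid_per_structure[name]              # per-structure refinement
        out[name] = apply_affine(pts, R, t)
    return out
```

The per-structure rigid transforms are what capture the brain-to-brain variance in structure position/pose once the global pose is fixed.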

System Diagram


1/5/2017 Process MD635 (fluorescent Neurotrace Blue). The classifiers trained on regular Nissl-stained images do not perform well on NT Blue, so I decide to manually annotate MD635 and train a new set of classifiers. With these, we obtain score maps of quality comparable to regular Nissl stacks.

11/18/2016 Presented Neuroscience Meeting (SfN) 2016 Poster: Building a 3D Data-Driven Atlas for Mouse Brainstem

6/16/2016 Write-up on incorporating uncertainty by computing the Hessian.

1/19/2016 Research in detail how the Allen Brain Institute built their atlases. Allen Atlases Methodology Summary (work in progress), Evernote Notes.

1/22/2016 Realize that we have received only one of the two alternating sets of each stack. Start transferring the missing data from Partha's lab.

Construct a 3D volume from the aligned sections (assuming 20um section thickness and 0.46um/pixel planar resolution). Render using vispy.
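A minimal sketch of the volume construction, assuming the aligned sections are already loaded as equally-sized 2D arrays (names here are illustrative, not from the pipeline):

```python
import numpy as np

# Anisotropic voxel size: 0.46 um/pixel in-plane, 20 um between sections.
SECTION_THICKNESS_UM = 20.0
PLANAR_RESOLUTION_UM = 0.46

def build_volume(sections):
    """Stack equally-sized aligned sections (list of 2D arrays) along z."""
    volume = np.stack(sections, axis=0)  # shape (n_sections, height, width)
    # Spacing in um per voxel along (z, y, x); a renderer needs this to
    # display the volume with correct proportions.
    spacing_um = (SECTION_THICKNESS_UM, PLANAR_RESOLUTION_UM, PLANAR_RESOLUTION_UM)
    return volume, spacing_um

sections = [np.zeros((4, 5)) for _ in range(3)]
vol, spacing = build_volume(sections)
print(vol.shape)  # (3, 4, 5)
```

Note the strong z/xy anisotropy (20 / 0.46, roughly 43x), which is why virtual sections cut at thumbnail level are a practical sanity check.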

--Yoavfreund 21:44, 25 January 2016 (PST) I don't think you should work on 3D volume rendering. That is not the plan.

This shows a coronal virtual section from the thumbnail-level 3D reconstruction. It looks reasonable, but the midline is skewed, which suggests the sagittal sectioning is not completely vertical.

2/5/2016 Gave a talk updating the project's status in David's group meeting.

2/11/2016 Find the optimal 3D rigid transform that aligns the test brain to the atlas, by maximizing the same-class overlap between the test brain's landmark probability volume and the atlas volume.

I first use grid search in a coarse-to-fine fashion to find a good starting point, then approach the optimum using gradient descent. However, the score often gets stuck at a sub-optimal level like this; the converged score is too sensitive to initialization.
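A toy 1D illustration of the coarse-to-fine initialization followed by gradient ascent (the actual registration objective is 3D and far less well-behaved; the function below is invented for illustration):

```python
import numpy as np

# Multimodal toy score: a global optimum at x = 3 and a local one at x = -2.
# Plain gradient ascent from a bad start would converge to the wrong basin,
# which is why a coarse-to-fine grid search picks the initialization.
def score(x):
    return np.exp(-(x - 3.0) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2)

def d_score(x):
    return (-2 * (x - 3.0) * np.exp(-(x - 3.0) ** 2)
            - (x + 2.0) * np.exp(-(x + 2.0) ** 2))

def coarse_to_fine_start(lo, hi, levels=3, n=11):
    """Grid-search progressively finer windows around the best point so far."""
    for _ in range(levels):
        grid = np.linspace(lo, hi, n)
        best = grid[np.argmax(score(grid))]
        half = (hi - lo) / n          # shrink the search window
        lo, hi = best - half, best + half
    return best

def gradient_ascent(x, lr=0.1, steps=200):
    for _ in range(steps):
        x += lr * d_score(x)
    return x

x0 = coarse_to_fine_start(-6.0, 6.0)
x_opt = gradient_ascent(x0)
print(round(x_opt, 2))  # close to the global optimum at x = 3
```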

After achieving a good 3D transform (by trying different initializations), I project the annotations in the atlas volume onto sections of the test brain. This shows the transferred annotations on top of the test sections. Localization is fairly accurate.

2/12/2016 Working on a function that takes an initial contour (an annotation transferred from the atlas, or an annotation from a nearby section in the same stack) and adjusts it to fit the test image.

This function is useful because:

  1. it provides more accurately localized automatic annotations.
  2. it provides a way to augment training data with patches from unlabeled images.

I have several ideas:

  • directly detect the contour (for example, the 0.5 level-set) on the class probability map of a given landmark.
  • use an active contour (snake) model to adjust the initial contour toward the gradients defined by the class probability map.
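The first idea can be sketched in pure NumPy on a synthetic probability map. This is a pixel-level approximation of the 0.5 level-set; a real implementation would more likely use a sub-pixel contour tracer such as marching squares (e.g. skimage.measure.find_contours).

```python
import numpy as np

# Take the 0.5 superlevel set of a 2D probability map and keep only its
# boundary pixels (inside the set, but with at least one 4-neighbor outside).
def level_set_boundary(prob_map, level=0.5):
    inside = prob_map >= level
    padded = np.pad(inside, 1, mode="constant", constant_values=False)
    # True where all four 4-neighbors are also inside (i.e. interior pixels).
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return inside & ~interior   # True exactly on the contour pixels

# Synthetic blob: probability falls off with distance from the center.
yy, xx = np.mgrid[0:9, 0:9]
prob = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 8.0)
contour = level_set_boundary(prob)
```

The same superlevel-set mask also gives the region needed for the probability-mass checks discussed later.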

Both of these ideas rely on the classifier being accurate enough. Here is an example: the dotted curve is the initial contour transferred from the atlas, and the green contour is the 0.5 level-set of the probability map of 7N. This contour is clearly not satisfactory. For now, the most important task is to improve the classifier so that it localizes more accurately.


Latest update slides

Worked on the algorithm that locally adjusts a slightly-off contour to fit the image. This is useful both for saving manual annotation effort and for making the collection of training examples from new images more accurate.

The transformation is specified by five parameters (x/y shift, rotation, x/y scaling).

The objective function to maximize is the average score within the contour, based on the probability map of the landmark of interest.
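A hedged sketch of the five-parameter transform and the objective, assuming the contour is an (n, 2) array of (x, y) vertices and the probability map is indexed as prob_map[y, x]. All names, and the choice to transform about the centroid, are mine.

```python
import numpy as np

def transform_contour(contour, tx, ty, theta, sx, sy):
    """Apply the five-parameter transform: scale, rotate, then shift."""
    c = contour.mean(axis=0)                  # transform about the centroid
    pts = (contour - c) * np.array([sx, sy])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return pts @ rot.T + c + np.array([tx, ty])

def mean_score_inside(contour, prob_map):
    """Objective: average probability over pixels inside the contour."""
    h, w = prob_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(contour)
    # Even-odd rule point-in-polygon test, vectorized over all pixels.
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        crosses = (y0 > yy) != (y1 > yy)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_at = x0 + (yy - y0) * (x1 - x0) / (y1 - y0)
        inside ^= crosses & (xx < x_at)
    return prob_map[inside].mean() if inside.any() else 0.0
```

With only five parameters the search space is small, so even simple optimizers (grid search or gradient ascent on these parameters) are cheap per contour.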

I am working on integrating this into the GUI. Part of the motivation is that I want the annotations to fully cover the atlas brain (MD589). Previously we received only half of each complete set; although we have annotated all of those, we still need to annotate the roughly 200 new sections per brain that arrived later. Doing this manually is very time-consuming, which prompts me to prioritize the automatic annotation suggestion feature. Besides, it is novel and might be worth including in the MICCAI paper.

Here are the automatic annotation results. Blue contours are ground-truth manual annotations; green contours are annotations transferred from the closest annotated neighbor section; red contours are the locally adjusted versions.

Since the local adjustment algorithm is based on the probability maps generated by the classifier, much of the localization inaccuracy is due to inaccuracy in the probability maps.

At the same time, I am consolidating the modules (incl. annotation, learning, registration) so that I can run experiments efficiently on ALL annotation classes (instead of the nine classes we have focused on so far) and ALL sections (and more stacks). I see this as essential because our next step is no doubt to improve the classifier, and a consolidated pipeline paves the road for more complicated neural-network experimentation.


Improved the procedure for computing the 3D transform from the atlas to the test brain. I parameterize rotation with a quaternion and avoid specifying an explicit learning rate by using Adagrad, which makes optimization by gradient descent much more stable. Updated atlas annotation projection result
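The two reparameterizations can be sketched as follows. This is a schematic, not the pipeline's actual code; the quaternion convention (w, x, y, z) and the function names are mine.

```python
import numpy as np

def quaternion_to_matrix(q):
    """Rotation matrix from a quaternion (w, x, y, z); q is normalized first,
    so gradient steps on q never leave the space of valid rotations."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def adagrad_step(params, grad, hist, lr=1.0, eps=1e-8):
    """One Adagrad ascent step: each coordinate's step size is scaled down
    by its own accumulated squared-gradient history, so no single global
    learning rate has to suit every parameter."""
    hist += grad ** 2
    return params + lr * grad / (np.sqrt(hist) + eps), hist
```

The per-coordinate scaling matters here because the translation, rotation, and scaling parameters have very different natural step sizes.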

Possible next steps:

  • Stick with the current trained 10-class classifier. Apply it to a wider range of sections and all ten-ish other brains. Register them all to the atlas.
  • Improve the classifier. For example, 7N is often mistaken for 5N. The class probabilities output by the classifier are competitive, so a 7N region can have a low "7N score" that is roughly the same as its "5N score". If we use location (or relative location) as an additional feature, then a 7N region's "7N score" will be much higher than its "5N score". This should make the registration objective function's landscape much sharper.
  • Include section contours as an outer shell in the atlas, and let the first registration step align the section contour shells.
  • The atlas volume built from one brain is not perfect. It needs some geometrical rectification, for example straightening the midline and smoothing the annotation surfaces.
  • Train the classifier to recognize more landmarks.


Registered 9 brains to the atlas. I use atlas-transferred annotations as initial contours and fit them to the images using snakes. There are a couple of issues:

  • For sections where no atlas annotations are mapped, we should still be able to detect landmarks from the probability maps alone (even with no initial contours).
  • Check whether a snake-refined contour is valid, e.g. by checking that the probability mass inside the contour is large enough.
  • Deal with multiple-part contours, both multi-part annotations and multi-part snake contours: keep only the largest part, or use a combined polygon.
  • When the initial contour does not overlap with the landmark, move it along the direction from its boundary to the closest high-responding pixel.
  • On some sections a landmark has an initial contour but no probability map is generated due to low response; this is a classifier problem.
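One way to realize the probability-mass validity check from the second bullet above. The thresholds here are illustrative placeholders, not tuned values from the project:

```python
import numpy as np

def contour_is_valid(mask, prob_map, min_mass=10.0, min_mean=0.3):
    """Accept a snake-refined contour only if its pixel mask captures both
    enough total probability mass and a high enough mean probability.

    mask: boolean array marking pixels inside the refined contour.
    """
    inside = prob_map[mask]
    if inside.size == 0:
        return False
    return inside.sum() >= min_mass and inside.mean() >= min_mean
```

Requiring both conditions guards against two failure modes: a huge contour that sweeps up mass from background noise (low mean), and a tiny contour sitting on a few bright pixels (low total mass).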