
Visual maps from anatomy


Overview

Benson and his colleagues (including Aguirre and Brainard) developed methods to identify the positions of the early visual field maps from anatomical (T1-weighted) MRI data. This is possible because the detailed properties of the cortical folds are well correlated with the positions of these functional maps. The general principle - a correlation between structural folds and functional regions - is an emerging theme in work on occipital and temporal cortex (e.g., see the papers by Weiner on face-selective regions and by Witthoft on hV4 from the Grill-Spector lab).

The docker container (also a Flywheel Gear) that we describe here implements Noah Benson's algorithm for identifying V1/V2/V3 from an anatomical T1.

It also maps all of the visual field areas in the Wang et al. (2015) atlas onto the T1-weighted image.

Input

A T1-weighted anatomical NIfTI file (usually in ACPC space).

Outputs

More detailed information about the Docker outputs can be found here. Below we list the most relevant files:

  • scanner.template_areas.nii.gz - A NIfTI file at the same resolution and in the same space as the T1 anatomical file. The value at each voxel indicates the visual area: values range from 0 to 3 (1: V1, 2: V2, 3: V3, 0: none of the above). This file does not differentiate between the left and right visual areas.

  • scanner.wang2015_atlas.nii.gz - A NIfTI file at the same resolution and in the same space as the T1 anatomical file. The value at each voxel indicates the visual area from the Wang et al. (2015) atlas. Values range from 0 to 25; 0 indicates voxels that are not labeled as a visual area. (A brief loading sketch follows this list.)

  • retinotopy_templates.zip - This zip file contains the maps in FreeSurfer format.
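As a minimal sketch (not part of the Gear itself), the label volumes listed above can be inspected with nibabel in Python. The filenames match the outputs above, and the masking and counting logic simply follows the value codings described there.

```python
import numpy as np
import nibabel as nib

# Load the Benson V1-V3 template and the Wang et al. (2015) atlas.
# Both volumes share the resolution and coordinate frame of the input T1.
benson = nib.load('scanner.template_areas.nii.gz')
wang = nib.load('scanner.wang2015_atlas.nii.gz')

benson_labels = np.asarray(benson.get_fdata()).round().astype(int)
wang_labels = np.asarray(wang.get_fdata()).round().astype(int)

# Binary mask of V1 voxels (label 1 in the Benson template; 0 is unlabeled).
v1_mask = benson_labels == 1
print('V1 voxels:', v1_mask.sum())

# Voxel counts for each Wang atlas label (1-25; 0 is background).
labels, counts = np.unique(wang_labels[wang_labels > 0], return_counts=True)
for label, count in zip(labels, counts):
    print('Wang ROI {}: {} voxels'.format(label, count))
```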

Visualizing

In Flywheel ...

  • Select the visualization icon and choose Papaya or Slice Drop.
  • The color map "Spectrum" works well in Papaya.

In mrVista ...

TODO

A few comments on things we should change.

  • The word 'scanner' (denoting the space: 'scanner' space vs 'native' space) is misleading. The output files are in the same coordinate frame as the T1-weighted input, which is usually ACPC space. Those data may or may not be in scanner (image) coordinates. (A quick check is sketched after this list.)
  • We should probably regularize the names 'template_areas' and 'wang2015_atlas' to a common pattern, say 'benson2014_atlas' and 'wang2015_atlas'. We think that would be helpful.
  • Currently the Benson template niftis do not distinguish between the left and right hemispheres. There are, however, .label files that do distinguish left and right. RL is working on code to convert from .label to .nii.gz files (and might try a few approaches).
  • The algorithm also returns gifti files. These can be downloaded and viewed with a GIFTI viewer (e.g., http://www.artefact.tk/software/matlab/gifti/).
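As a quick check of the coordinate-frame point above, the sketch below compares the affine of an output volume against the input T1 and reads a hemisphere .label file with nibabel. The filenames T1.nii.gz and lh.V1.label are placeholders, not the Gear's actual output names.

```python
import numpy as np
import nibabel as nib
from nibabel.freesurfer import read_label

# The atlas volumes should share the T1's affine, i.e. its coordinate frame
# (usually ACPC), whatever that frame is relative to the scanner.
t1 = nib.load('T1.nii.gz')                        # placeholder input filename
atlas = nib.load('scanner.template_areas.nii.gz')
assert np.allclose(t1.affine, atlas.affine), 'output is not in the T1 frame'

# The per-hemisphere .label files list the surface vertices in a region, so
# left and right stay separate even though the volume template merges them.
lh_v1_vertices = read_label('lh.V1.label')        # placeholder label filename
print('left-hemisphere V1 vertices:', lh_v1_vertices.size)
```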

The conversion from MGZ to GIFTI seems to be broken in Flywheel. Let's see if we can figure out what is going wrong. In the meantime, a possible local workaround is sketched below.
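This is a sketch of one way to do the conversion locally with nibabel, assuming the FreeSurfer outputs include a per-vertex surface overlay; the filename lh.template_areas.mgz is an assumption, so adjust it to whatever retinotopy_templates.zip actually contains.

```python
import numpy as np
import nibabel as nib

# FreeSurfer stores per-vertex surface data as an MGZ "volume" whose voxels
# are really vertices; squeeze it down to a 1-D array of vertex values.
mgz = nib.load('lh.template_areas.mgz')           # assumed overlay filename
values = np.asarray(mgz.get_fdata()).squeeze().astype(np.float32)

# Wrap the vertex values in a GIFTI data array and write a .gii file that
# surface viewers (e.g. the MATLAB gifti toolbox above) can read.
darray = nib.gifti.GiftiDataArray(values, intent='NIFTI_INTENT_NONE')
nib.save(nib.gifti.GiftiImage(darrays=[darray]), 'lh.template_areas.gii')
```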

References

Benson, N. C., Butt, O. H., Datta, R., Radoeva, P. D., Brainard, D. H., & Aguirre, G. K. (2012). The retinotopic organization of striate cortex is well predicted by surface topology. Current Biology, 22(21), 2081-2085.

Benson, N. C., Butt, O. H., Brainard, D. H., & Aguirre, G. K. (2014). Correction of distortion in flattened representations of the cortical surface allows prediction of V1-V3 functional organization from anatomy. PLoS Comput Biol, 10(3), e1003538.

Wang, L., Mruczek, R. E. B., Arcaro, M. J., & Kastner, S. (2015). Probabilistic maps of visual topography in human cortex. Cerebral Cortex, 25(10), 3911-3931. doi:10.1093/cercor/bhu277

Witthoft, N., Nguyen, M. L., Golarai, G., et al. (2014). Where is human V4? Predicting the location of hV4 and VO1 from cortical folding. Cerebral Cortex, 24(9), 2401-2408. doi:10.1093/cercor/bht092
