nii_stat


Introduction


The nii_stat tools are designed to correlate behavioral data (task performance) with brain imaging data. The tool was designed to study stroke, but it can be applied to a wide range of modalities.

What you need:

  1. A copy of Matlab (no additional toolboxes are required). Note that some older Matlab versions (<2012) have difficulty opening modern Excel files. One solution is to use Excel to export the data to Excel 5.0/95 format. Another option is to use Excel to save the data as a tab-delimited text file. A quick way to verify that your file loads is shown after this list.
  2. A copy of SPM 8 or later
  3. A copy of the nii_stat scripts.
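
For example, here is a quick sanity check that Matlab can read your behavioral spreadsheet (a minimal sketch; the filename behavior.xls is a hypothetical placeholder for your own file):

    % Quick check that Matlab can read the behavioral spreadsheet.
    % 'behavior.xls' is a hypothetical filename - substitute your own.
    [num, txt] = xlsread('behavior.xls'); % numeric scores; text cells
    fprintf('Read %d rows of behavioral scores.\n', size(num, 1));
    disp(txt(1, :)); % header row: identifier plus behavior names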

Image Data


This script expects each participant to have a Matlab format Mat file that contains the imaging data. For our studies, these are automatically generated as the modalities are processed. However, you can also create these by running the script nii_nii2mat on your NIfTI format brain images. For example, the image LM1002.nii will be converted to LM1002.mat. This script not only allows voxelwise analyses but also aids region of interest analyses.
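
Conceptually, the conversion just reads the NIfTI image and stores the voxel data in a Mat file. Here is a minimal sketch using SPM's I/O routines (illustrative only; the actual nii_nii2mat script organizes the data into its own structure):

    % Illustrative NIfTI-to-Mat conversion using SPM's I/O routines.
    % The real nii_nii2mat script stores data in its own structure.
    hdr = spm_vol('LM1002.nii');      % read the NIfTI header
    img = spm_read_vols(hdr);         % read the voxel data
    save('LM1002.mat', 'hdr', 'img'); % store both in a Mat file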

Behaviors


The image on the right shows an Excel spreadsheet. This spreadsheet lists the patient identifier in the first column and behavioral performance in the subsequent columns. You can include as many participants (rows) and as many behaviors (all columns except the first) as you wish.

The first column is the patient identifier. This must correspond to the filename for that individual’s images. For example, the software expects to find a file named ‘LM1001.mat’ in the same folder as the Excel file, and will generate a warning if it cannot find this image. Analysis can still proceed as long as at least a few images are found. For example, it is possible that we have the behavioral data for LM1037 but do not yet have the imaging data; in this case the analysis would proceed with all the available datasets.

Each subsequent column is a behavioral variable. In this case we have ‘Aphasia’ and ‘Apraxia’. Note that these scores can be continuous (there are several levels of aphasia severity) or binomial (apraxia is ranked as either 0 or 1, since this behavior was scored as being present or absent). The behavioral values should be numerical (“1”, “1.2345”, “-99”) and not text (“present”, “severe”).

Note that the software is able to deal with empty cells (LM1025 will be excluded from analysis since we have no behavioral data, and LM1024 will be excluded from our analysis of “apraxia”).
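
Continuing the hypothetical xlsread example above, here is a minimal sketch of this per-behavior exclusion (the column index is illustrative):

    % Per-behavior exclusion of empty cells (illustrative sketch).
    % 'num' is participants-by-behaviors; NaN marks an empty cell.
    col = 2;                    % e.g. the hypothetical 'Apraxia' column
    keep = ~isnan(num(:, col)); % rows that have a score for this behavior
    fprintf('%d of %d participants enter this analysis.\n', sum(keep), numel(keep));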

[Image: the nii_stat options dialog]

Number of permutations

We will compute many statistical tests – one per brain region/voxel. Therefore, there are potentially thousands of chances of making a false alarm. We need some method to control for this increase in familywise error. For more details, you may want to look at this article. This software provides several options:

  1. -1: False Discovery Rate – this method attempts to control the ratio of false alarms to hits. It is sensitive to the total amount of signal in our volume. Personally, I am not a fan of this method. However, in theory it does provide better statistical power than other methods. This uses a Matlab script from Groppe and colleagues.
  2. 0: Bonferroni Correction: This method robustly controls for familywise error, but it has very poor statistical power. This is fast to compute, and so it is a good default to see that everything is working correctly. However, it is very inefficient. You really do not want to use this for published work.
  3. (very large integer, e.g. 5000): Permutation Thresholding randomizes the order of your labels (behavioral scores) to see if your observed effects are really unusual. This is the gold standard for controlling familywise error. It tends to be less conservative than Bonferroni. The big downside is that it is slow to compute (a minimal sketch of the approach appears after this list). For more details see the FSL randomise web page.
  4. (very large negative integer, e.g. -5000): Freedman-Lane Permutation: When this is selected, the permutation threshold is computed for each behavior using every other behavior as a nuisance regressor. This requires that your spreadsheet have multiple behavioral columns with the same rows filled (an individual cannot provide behavior for only some of the measures). This method uses code by Ridgway and colleagues.
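
To give a feel for option 3, here is a minimal sketch of max-statistic permutation thresholding. This is illustrative only, not nii_stat's own implementation; Y and b are hypothetical names for a voxels-by-participants data matrix and a vector of behavioral scores:

    % Max-statistic permutation thresholding (illustrative sketch).
    % Y: voxels-by-participants imaging data; b: behavioral scores.
    nPerm = 5000;
    [~, nSub] = size(Y);
    Yc = bsxfun(@minus, Y, mean(Y, 2));             % center each voxel
    Yn = bsxfun(@rdivide, Yc, sqrt(sum(Yc.^2, 2))); % unit-norm rows
    maxR = zeros(nPerm, 1);
    for p = 1:nPerm
        bp = b(randperm(nSub));   % shuffle behavior across participants
        bc = bp(:) - mean(bp);
        r = Yn * (bc / norm(bc)); % correlation at every voxel at once
        maxR(p) = max(r);         % most extreme statistic this pass
    end
    sortedR = sort(maxR);
    thresh = sortedR(ceil(0.95 * nPerm)); % one-tailed p < 0.05 cutoff

Voxels whose observed correlation exceeds thresh would be deemed significant; note this matches the one-tailed convention described under “Corrected P threshold” below.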

Corrected P threshold


You can select your statistical cut-off. For example, with p < 0.05 and permutation thresholding you can be confident that the odds that even a single surviving region/voxel is a false alarm are 1 in 20. Note that if you select False Discovery Rate this threshold refers to the false alarm versus hit ratio (e.g. q = 0.05 means 1 in every 20 surviving regions is likely a false alarm).

Beware that this is a one-tailed threshold. For most of our analyses we have a one-tailed hypothesis: brain injury will make you worse, not better, at a task. However, with some modalities (e.g. resting state) one could perhaps explain reduced signal as injury and increased signal as compensation. In this case you should consider entering half your total desired p value (e.g. if you enter 0.025, one would expect a 5% error rate for the combination of left and right tails).

Minimum Overlap


When analyzing brain injury we have very little statistical power in regions that are rarely injured – we really cannot be confident of the function of a brain region that was only damaged in one of your 80 participants. Including low power voxels in an analysis is very detrimental for techniques such as false discovery rate (but has less impact on permutation). This option allows you to only examine regions typically injured in your population. For example, if you enter “5” only voxels damaged in at least five individuals will be entered into the analysis. This option only applies to binomial voxelwise analyses (e.g. voxelwise analysis of hand drawn lesions).
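
As a sketch of what this masking does (illustrative only; lesions is a hypothetical 4D array stacking each participant's binary lesion map):

    % Minimum overlap mask: keep voxels lesioned in at least 5 people.
    % 'lesions' is a hypothetical X-by-Y-by-Z-by-participant binary array.
    overlap = sum(lesions, 4); % per-voxel count of lesioned participants
    mask = overlap >= 5;       % only these voxels enter the analysis
    fprintf('%d voxels survive the minimum overlap of 5.\n', nnz(mask));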

ROI

We can compute statistics for every single voxel of the brain. Unfortunately, this approach tends to have low statistical power: we are doing a tremendous number of tests (facing a multiple comparison problem), and since most functional modules are relatively large and normalization is somewhat sloppy, each voxel is pretty noisy. This option allows you to conduct voxelwise analyses or to conduct analyses on Brodmann maps or Johns Hopkins maps. The Johns Hopkins University (JHU) map also generates a nice set of region names, which is handy for interpreting the results. For this purpose I am not a huge fan of the Brodmann maps: large regions of the insula are not uniquely labeled.
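
Region of interest analysis amounts to averaging the voxelwise data within each atlas region before computing statistics. A minimal sketch (illustrative only; atlas is a hypothetical integer label volume aligned to the data volume img):

    % Pool voxelwise data into atlas regions by averaging (sketch).
    % 'atlas': integer label volume; 'img': co-registered data volume.
    labels = unique(atlas(atlas > 0)); % region codes, ignoring background
    roiMean = zeros(numel(labels), 1);
    for i = 1:numel(labels)
        roiMean(i) = mean(img(atlas == labels(i)));
    end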

[Image: modality options in the nii_stat dialog]

Modality


Most users will only analyze data from a single modality. However, this software can handle images in a variety of modalities; some of these are shown in the image to the right. Here in South Carolina we acquire a lot of different brain scans. At other centers you may only have one of these. In any case, this option allows the user to analyze the modalities they are most interested in (note that you can select several at a time):

  1. Lesion: A hand drawn lesion map – this is initially binary (a voxel is either damaged or not), but the regions of interest record the proportion of each region injured.
  2. CBF: These are cerebral blood flow images that we generate using Arterial Spin Labeling.
  3. Resting state data: These are fMRI scans acquired while the participant relaxes. Our scripts estimate the amplitude of low-frequency fluctuation (ALFF) as well as connectivity between JHU brain regions.
  4. i3mT1: This is an automated lesion probability map generated from T1-weighted data.
  5. i3mT2: This is an automated lesion probability map generated from T2-weighted data.
  6. TBSS: These are voxelwise skeletons of the fractional anisotropy as measured with diffusion tensor imaging.
  7. Diffusion Tensor Imaging: These are connectivity maps between the JHU regions generated from diffusion tensor imaging.
[Image: performance graph]

Performance


For most clinical sample sizes this software is exceptionally fast, and will complete in a matter of seconds. However, this tool was designed to scale to large studies, so the rest of this section is devoted to performance concerns that are only relevant for very large studies (>100 participants). The primary considerations are the type of test (binomial or general linear model) and the number of tests (number of voxels/regions * number of behavioral variables * number of permutations).

Situations where both the behavioral measure is binomial (e.g. deficit present or absent) and the imaging data is binomial (e.g. a voxel is either lesioned or spared) will be computed using the slow Liebermeister measure. If either or both of these are continuous (e.g. a range of deficits, a range of image brightness) the software will exploit general linear models (leveraging work by Ged Ridgway). Region of interest studies will always be fast, as one only examines a few hundred discrete areas (and data averaging means that the image data is always continuous, using the fast GLM code). On the other hand, low resolution (e.g. 3mm isotropic) images will be processed much faster than high resolution (e.g. 1mm isotropic, where there are 27 times as many tests).

This software will harness multiple CPUs for both the GLM statistics and TFCE (if requested), so consider a computer that has many cores. In our testing, we found that there was little benefit from Matlab’s parallel “parfor” loops for permutations (since multiple cores are already exploited for computations). On the other hand, we did find that tuning our code to exploit graphics cards could have a dramatic benefit. The graph shows the performance for a high-end computer using either an Intel i7-4770K CPU at 3.5 GHz or the computer’s Nvidia GTX Titan, for both low resolution (3mm) and high resolution (2mm) data. In general we see a 15-fold benefit for using the GPU until the limits of the GPU’s RAM are hit. This preliminary work demonstrates the potential of graphics cards for large studies, but also emphasizes that more work is required for this application (e.g. to chunk data from very large datasets to avoid RAM limitations).
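
To illustrate why the continuous case is fast, the GLM statistic can be computed for every voxel at once with a few matrix operations. A minimal sketch (illustrative only; Y and X are hypothetical names for a participants-by-voxels data matrix and a design matrix holding an intercept and the behavioral scores):

    % Vectorized GLM: one t-statistic per voxel from matrix operations.
    % Y: participants-by-voxels data; X: design [ones(n,1) behavior].
    [n, k] = size(X);
    beta = X \ Y;                        % k-by-voxels coefficients
    resid = Y - X * beta;                % residuals at every voxel
    sigma2 = sum(resid.^2, 1) / (n - k); % residual variance per voxel
    XtXi = inv(X' * X);
    se = sqrt(XtXi(2, 2) * sigma2);      % standard error of the effect
    t = beta(2, :) ./ se;                % t-statistic at every voxel

When the behavioral column is binary, this t is identical to a Student’s pooled-variance t-test, as noted in the Notes section below.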

Special Items


You are also able to request some special processing. For example, in the image of the dialog box you can see the user has entered “2 3” in the “Special” box to request regressing out lesion volume and de-skewing of the behavioral data. Here are details regarding these options:

  1. Explicit Volume Mask: if selected, a dialog will appear asking you to select a masking image. Only voxels in this mask will be analyzed. This option is only available for voxelwise analyses. This is analogous to the “small volume correction” used in fMRI analyses, where random field theory is used to control familywise error. In general, one has more statistical power if analyses are restricted to portions of the brain selected a priori.
  2. Regress lesion volume: if selected, the behavioral data is adjusted for variability explained by the overall lesion volume. In brief, the total lesion volume is computed for each individual, this is regressed against each behavioral variable, and the subsequent analysis is based on the residual variability of the behavior (a sketch appears after this list). This option requires that your data includes voxelwise lesion data.
  3. De-skew behavioral data: The general linear model assumes that the data is normally distributed. This procedure computes the standard error of the skew (Z-Skew) for each behavioral variable. If your data is significantly skewed (>1.96) a square or square-root transform is applied (depending on the direction of the skew) prior to analysis. In theory this can improve the power of the subsequent analysis. Note these transforms are not ideal for all data: in many situations you should apply a transform in Excel (e.g. an arcsine transform for proportional data).
  4. Include WM/CSF connectivity: The Johns Hopkins regions include regions for white matter and cerebrospinal fluid. These regions are typically ignored for connectivity analysis of resting state and DTI tractography. This is because most resting state variability occurs in the gray matter, and tractography is typically aimed at cortico-cortical connections. If this is selected, the CSF and WM are included in the analyses. Since more statistical comparisons are computed, you should expect lower statistical power.
  5. Custom ROIs: If you select this you will be asked to select a collection of images to use as your own custom regions of interest. This option requires you to select “voxels” in the prior “ROI” option. You can select binary maps or proportional maps. The software computes the mean signal in each region and conducts analyses. This is useful if you have strong anatomical predictions.
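
For item 2, the adjustment is a standard residualization. A minimal sketch (illustrative only; vol and behavior are hypothetical vectors of per-participant lesion volumes and behavioral scores):

    % Regress lesion volume out of a behavioral score (sketch).
    % 'vol': lesion volume per participant; 'behavior': their scores.
    X = [ones(numel(vol), 1), vol(:)]; % design: intercept + lesion volume
    beta = X \ behavior(:);            % least-squares fit
    residual = behavior(:) - X * beta; % volume-adjusted behavior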

Notes

  • For regions where both the behavior and brain data are binomial (either zero or one), this software computes the Liebermeister measure.
  • For regions where either or both the behavior and brain data are continuous, statistics are computed using the general linear model. Results are identical to a Student’s pooled-variance t-test (if one of the dimensions is binomial) or a least-squares linear regression (if both dimensions are continuous).
  • The statistical maps are typically identical to those generated with NPM (with minor rounding error differences). However, in situations with small sample sizes the Matlab script may generate slightly more conservative permutation thresholds than NPM. A major reason for this is that the Matlab code attempts to deal with situations where there are a large number of ties (so the probabilities are not continuous). Likewise, NPM uses the same formulas for false discovery rate as SPM2, while nii_stat uses a slightly different algorithm.