Three-level analysis with FSL and ANTs in Nipype. Part 2.
Today we will be setting up levels 1 and 2 in a 3-level model with FSL and Nipype, applying transformation matrices from ANTs that we created in Part 1.
Onset and contrast files
Before we start running anything, we have to put some files in place. First, all level 1 onset files should be in <project directory>/<subject directory>/<model>/<model specific folder>/onsets/<task#_run#>. Next, we need a folder called models inside the main project directory. Under this folder, we will create a model-specific folder with a description of EVs and contrasts. Thus, let’s create the folder <project directory>/models/model_1 and create the following files in there.
For level 1 we should have a file called condition_key.txt with the following content:
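A hypothetical example in the format task id, EV id, label (the condition names here are placeholders; task002 would be listed the same way):

```
task001 ev001 condition_one
task001 ev002 condition_two
task001 ev003 condition_three
task001 ev004 condition_four
task001 ev005 condition_five
task001 ev006 condition_six
```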
As you can see, our example has 6 EVs, so we should create contrasts like these in task_contrasts.txt:
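For instance (the contrast names and weights are placeholders; each line gives the task id, a contrast name, and one weight per EV):

```
task001 one>two 1 -1 0 0 0 0
task001 three>four 0 0 1 -1 0 0
task001 all>baseline 1 1 1 1 1 1
```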
This is how you define EVs and contrasts for Nipype. Let’s do the same for level 2. By the way, the example experiment had two tasks.
condition_key_l2.txt:
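With two tasks, a minimal level 2 key could look like this (the labels are placeholders):

```
task001 ev001 task_one
task002 ev002 task_two
```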
… and contrasts in task_contrasts_l2.txt to compare one task with another:
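For example:

```
task1>task2 1 -1
task2>task1 -1 1
```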
OK, all external files are in place, so we can move on to our main analysis script.
Preparation and Preprocessing
We need to import some libraries and interfaces to work with:
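A sketch of the imports used throughout (all of these are standard Nipype modules):

```python
import os
import sys

import nipype.pipeline.engine as pe          # workflows and (Map)Nodes
import nipype.interfaces.fsl as fsl          # FSL interfaces
import nipype.interfaces.utility as util     # IdentityInterface, Function, Merge
import nipype.interfaces.io as nio           # DataGrabber, DataSink
import nipype.algorithms.modelgen as model   # SpecifyModel for level 1
from nipype.interfaces.ants import ApplyTransforms  # to apply the Part 1 matrices
```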
Assign the output type for FSL so that it creates .nii.gz files:
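```python
fsl.FSLCommand.set_default_output_type('NIFTI_GZ')
```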
Now, let’s define some variables specific to our project. First, we will need the root project directory, where all our data live.
Then, we will also need a working directory where we can put all the temporary files created by Nipype. This directory should be specific to each analysis (alternatively, you can change the name of the workflow for each analysis), and may be deleted as soon as the analysis is finished. Beware: if your analyses share the same working directory and the node names overlap, they will most probably not run properly.
I prefer to pass the subject to run from the command line, because that makes it easier to parallelize jobs.
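Something like this (the paths are placeholders for your own):

```python
project_dir = '/data/myproject'         # root directory with all the data (placeholder)
work_dir = '/scratch/myproject_l1l2'    # temporary Nipype files (placeholder)
subj_list = sys.argv[1]                 # e.g. "Subject001"
```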
Now, let’s define the workflow.
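```python
l1l2 = pe.Workflow(name='l1l2')  # the workflow name is my choice; rename it per analysis
l1l2.base_dir = work_dir
```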
… and get a subject’s id:
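```python
infosource = pe.Node(util.IdentityInterface(fields=['subject_id']),
                     name='infosource')
infosource.iterables = ('subject_id', [subj_list])
```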
Again, I am using just one subject per job, so jobs are already running in parallel. But in case you want to put all subjects into this script and run them in parallel (may the power of CPU be with you!), this is the place to do it (e.g., put ["Subject001","Subject002"] instead of [subj_list], and remove the subj_list variable above).
Here I use a function to get information for a specific subject. This can be various kinds of information, such as run ids, condition information, etc.
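The exact contents depend on your experiment; a simplified sketch of such a function (the run lists and EV layout below are illustrative only):

```python
def subjectinfo(subject_id):
    # Illustrative only: the real run lists depend on which runs each subject has
    run_id = [1, 2, 3, 4, 5, 6, 7, 8]
    if subject_id == 'Subject001':       # e.g., run 1 was dropped for this subject
        run_id = [2, 3, 4, 5, 6, 7, 8]
    # Index of each run among the files Nipype produced in Part 1 (counted from 0)
    itk_id = list(range(len(run_id)))
    # One EV per task at level 2: ones for that task's runs, zeros elsewhere
    half = len(run_id) // 2
    evs_l2 = dict(ev001=[1] * half + [0] * (len(run_id) - half),
                  ev002=[0] * half + [1] * (len(run_id) - half))
    # ... parse the condition files for each run and each level here ...
    return run_id, itk_id, evs_l2
```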
Let’s stop for a moment and take a look at what’s going on here. Some subjects had no run 1 (it was removed because of a visual spike), and some had no last runs (they did not finish the experiment); thus, run_id differs between subjects. itk_id indicates the corresponding run among the files Nipype made earlier: Nipype counts runs starting at 0, while I counted them starting at 1, so the run numbers are adjusted. evs_l2 contains a vector of ones (1) for each of the two tasks. In most cases you will be using only one EV at level 2 to combine runs within a subject, and you will have something like this: evs_l2=dict(ev001=[1,1,1,1,1,1,1,1]). Everything below that just parses out the condition files for each run and each level.
Prepare the node to run the function above, and grab subject-specific data as we did in Part 1:
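```python
getsubjinfo = pe.Node(util.Function(input_names=['subject_id'],
                                    output_names=['run_id', 'itk_id', 'evs_l2'],
                                    function=subjectinfo),
                      name='getsubjinfo')
l1l2.connect(infosource, 'subject_id', getsubjinfo, 'subject_id')
```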
Here we import functional and structural scans, onset files, contrast files, and transformation matrices.
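A sketch with placeholder templates (adjust the field_template patterns to your actual file layout; the .h5 transforms stand in for whatever Part 1 wrote out):

```python
datasource = pe.Node(nio.DataGrabber(infields=['subject_id', 'run_id', 'itk_id'],
                                     outfields=['func', 'struct', 'onsets',
                                                'contrasts', 'itk']),
                     name='datasource')
datasource.inputs.base_directory = project_dir
datasource.inputs.template = '*'
datasource.inputs.sort_filelist = False
datasource.inputs.field_template = dict(
    func='%s/BOLD/task001_run%03d/bold.nii.gz',
    struct='%s/anatomy/highres001.nii.gz',
    onsets='%s/model/model_1/onsets/task001_run%03d/*.txt',
    contrasts='models/model_1/task_contrasts.txt',
    itk='%s/reg/_register%d/*.h5')
datasource.inputs.template_args = dict(
    func=[['subject_id', 'run_id']],
    struct=[['subject_id']],
    onsets=[['subject_id', 'run_id']],
    contrasts=[[]],
    itk=[['subject_id', 'itk_id']])
l1l2.connect(infosource, 'subject_id', datasource, 'subject_id')
l1l2.connect(getsubjinfo, 'run_id', datasource, 'run_id')
l1l2.connect(getsubjinfo, 'itk_id', datasource, 'itk_id')
```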
Set up a node to define all inputs required for the preprocessing workflow
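The preprocessing nodes below mirror Nipype’s fmri_fsl example; to keep the listing short I show the node definitions and omit most of the l1l2.connect wiring, which follows the same standard pattern. For the input node, something like:

```python
inputnode = pe.Node(util.IdentityInterface(fields=['func']), name='inputspec')
l1l2.connect(datasource, 'func', inputnode, 'func')
```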
Convert functional images to float representation. Since there can be more than one functional run we use a MapNode to convert each run.
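```python
img2float = pe.MapNode(fsl.ImageMaths(out_data_type='float',
                                      op_string='', suffix='_dtype'),
                       iterfield=['in_file'], name='img2float')
```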
Determine the 2nd and 98th percentile intensities of each functional run
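```python
getthresh = pe.MapNode(fsl.ImageStats(op_string='-p 2 -p 98'),
                       iterfield=['in_file'], name='getthreshold')
```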
Threshold the first run of the functional data at 10% of the 98th percentile
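```python
threshold = pe.Node(fsl.ImageMaths(out_data_type='char', suffix='_thresh'),
                    name='threshold')
```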
Define a function to get 10% of the intensity
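```python
def getthreshop(thresh):
    # thresh comes in as [(p2, p98), ...]; take 10% of the first run's 98th percentile
    return '-thr %.10f -Tmin -bin' % (0.1 * thresh[0][1])
```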
Determine the median value of the functional runs using the mask
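```python
medianval = pe.MapNode(fsl.ImageStats(op_string='-k %s -p 50'),
                       iterfield=['in_file'], name='medianval')
```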
Dilate the mask. This is the final mask for level 1.
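```python
dilatemask = pe.Node(fsl.ImageMaths(suffix='_dil', op_string='-dilF'),
                     name='dilatemask')
```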
Mask the motion corrected functional runs with the dilated mask
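```python
maskfunc2 = pe.MapNode(fsl.ImageMaths(suffix='_mask', op_string='-mas'),
                       iterfield=['in_file'], name='maskfunc2')
```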
Determine the mean image from each functional run
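```python
meanfunc2 = pe.MapNode(fsl.ImageMaths(op_string='-Tmean', suffix='_mean'),
                       iterfield=['in_file'], name='meanfunc2')
```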
Merge the median values with the mean functional images into a coupled list
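```python
mergenode = pe.Node(util.Merge(2, axis='hstack'), name='merge')

def getusans(x):
    # Each merged pair is (mean image, median value); SUSAN wants (image, 0.75*median)
    return [[tuple([val[0], 0.75 * val[1]])] for val in x]
```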
Smooth each run using SUSAN with the brightness threshold set to 75% of the median value for each run and a mask constituting the mean functional
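```python
fwhm_thr = 6.0
smooth = pe.MapNode(fsl.SUSAN(fwhm=fwhm_thr),
                    iterfield=['in_file', 'brightness_threshold', 'usans'],
                    name='smooth')
```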
Here I have to note that if you replicate the same analysis in FSL, you won’t get exactly the same copes. The reason is that Nipype converts FWHM to sigma with the standard factor sqrt(8*log(2)) ≈ 2.35482, while FSL’s FEAT divides by the rounded value 2.355. Thus, to replicate FSL exactly, you should use fwhm_thr=5.999541516002418 instead of the fwhm_thr=6.0 we used above.
Define a function to get the brightness threshold for SUSAN
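```python
def getbtthresh(medianvals):
    # SUSAN brightness threshold: 75% of each run's median intensity
    return [0.75 * val for val in medianvals]
```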
Mask the smoothed data with the dilated mask
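```python
maskfunc3 = pe.MapNode(fsl.ImageMaths(suffix='_mask', op_string='-mas'),
                       iterfield=['in_file'], name='maskfunc3')
```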
Scale each volume of the run so that the median value of the run is set to 10000
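```python
intnorm = pe.MapNode(fsl.ImageMaths(suffix='_intnorm'),
                     iterfield=['in_file', 'op_string'], name='intnorm')
```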
Define a function to get the scaling factor for intensity normalization
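```python
def getinormscale(medianvals):
    return ['-mul %.10f' % (10000. / val) for val in medianvals]
```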
Create tempMean
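```python
tempMean = pe.MapNode(fsl.ImageMaths(op_string='-Tmean', suffix='_mean'),
                      iterfield=['in_file'], name='tempMean')
```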
Perform temporal highpass filtering on the data. This is the same as filtered_func_data in FSL output.
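A sketch; the highpass sigma (in volumes, i.e., the cutoff in seconds divided by 2*TR) is a placeholder, and tempMean is added back because FSL’s -bptf removes the mean:

```python
highpass = pe.MapNode(fsl.ImageMaths(suffix='_tempfilt',
                                     op_string='-bptf 25 -1 -add'),  # 25 is a placeholder
                      iterfield=['in_file', 'in_file2'], name='highpass')
# in_file2 receives the tempMean image so the mean is restored after filtering
```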
We’re done with preprocessing! Hooray! Now, let’s see how to code level 1.
Level 1
Set up a basic set of contrasts
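A sketch that parses our task_contrasts.txt into Nipype’s contrast tuples (it assumes the placeholder format shown earlier: task id, contrast name, then one weight per EV):

```python
ev_names = ['ev%03d' % (i + 1) for i in range(6)]   # must match condition_key.txt
contrasts = []
with open(os.path.join(project_dir, 'models', 'model_1', 'task_contrasts.txt')) as f:
    for line in f:
        fields = line.split()
        contrasts.append([fields[1], 'T', ev_names,
                          [float(w) for w in fields[2:]]])
```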
Generate design information
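For example (the TR and highpass cutoff are placeholders; subject_info is connected from the run-specific condition information):

```python
modelspec = pe.Node(model.SpecifyModel(input_units='secs',
                                       time_repetition=2.0,           # TR, placeholder
                                       high_pass_filter_cutoff=100),  # placeholder
                    name='modelspec')
```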
Generate a run specific .fsf file for analysis
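Something like this (the basis set here is my choice, not a prescription):

```python
level1design = pe.Node(fsl.Level1Design(interscan_interval=2.0,  # = TR, placeholder
                                        bases={'dgamma': {'derivs': False}},
                                        model_serial_correlations=True,
                                        contrasts=contrasts),
                       name='level1design')
```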
Generate a run specific .mat file for use by FILMGLS
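```python
modelgen = pe.MapNode(fsl.FEATModel(),
                      iterfield=['fsf_file', 'ev_files'],
                      name='modelgen')
```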
Estimate a model specified by a .mat file and a functional run
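```python
modelestimate = pe.MapNode(fsl.FILMGLS(smooth_autocorr=True,
                                       mask_size=5,
                                       threshold=1000),
                           iterfield=['design_file', 'in_file', 'tcon_file'],
                           name='modelestimate')
```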
The output from modelestimate is what we see in FSL level 1 analysis.
Level 2
One of the main issues we may see in Nipype examples is that they recommend first running FLAMEO at level 2, and only then registering its outputs to standard space. Well, if you assume that subjects never move between runs, this might be appropriate. When was the last time you saw subjects stay completely still?! A rhetorical question… Thus, we will apply our registration matrices right after the copes, varcopes, etc. are created in level 1, then merge them and run FLAMEO on them.
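A sketch of warping the level 1 outputs with the Part 1 transforms (the template path is a placeholder, and in practice the nested lists coming out of modelestimate may need reshaping per contrast; varcopes and masks get identical MapNodes of their own):

```python
warpcopes = pe.MapNode(ApplyTransforms(
                           reference_image='/data/templates/standard_brain.nii.gz',  # placeholder
                           interpolation='Linear'),
                       iterfield=['input_image', 'transforms'],
                       name='warpcopes')
l1l2.connect(modelestimate, 'copes', warpcopes, 'input_image')
l1l2.connect(datasource, 'itk', warpcopes, 'transforms')
```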
As you see, we do not do anything different from registering example_func in Part 1.
Now merge the copes, varcopes and masks for each condition
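```python
copemerge = pe.MapNode(fsl.Merge(dimension='t'),
                       iterfield=['in_files'], name='copemerge')
varcopemerge = pe.MapNode(fsl.Merge(dimension='t'),
                          iterfield=['in_files'], name='varcopemerge')
maskmerge = pe.MapNode(fsl.Merge(dimension='t'),
                       iterfield=['in_files'], name='maskmerge')
```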
Set up a set of contrasts for level 2
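For our two tasks, something like:

```python
contrasts_l2 = [['task1>task2', 'T', ['ev001', 'ev002'], [1, -1]],
                ['task2>task1', 'T', ['ev001', 'ev002'], [-1, 1]]]
```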
Here we will use MultipleRegressDesign instead of the L2Model recommended in Nipype examples to generate subject- and condition-specific level 2 model design files, because MultipleRegressDesign can handle more than one EV, as its name suggests… ((c) Captain Obvious)
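```python
l2model = pe.Node(fsl.MultipleRegressDesign(contrasts=contrasts_l2),
                  name='l2model')
# evs_l2 from subjectinfo is exactly the dict the 'regressors' input expects
l1l2.connect(getsubjinfo, 'evs_l2', l2model, 'regressors')
```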
Estimate a second level model with FLAMEO
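```python
flameo = pe.MapNode(fsl.FLAMEO(run_mode='fe'),   # fixed effects across runs
                    iterfield=['cope_file', 'var_cope_file'],
                    name='flameo')
l1l2.connect(copemerge, 'merged_file', flameo, 'cope_file')
l1l2.connect(varcopemerge, 'merged_file', flameo, 'var_cope_file')
l1l2.connect(l2model, 'design_mat', flameo, 'design_file')
l1l2.connect(l2model, 'design_con', flameo, 't_con_file')
l1l2.connect(l2model, 'design_grp', flameo, 'cov_split_file')
# mask_file: connect your (warped) level 1 mask here
```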
We save the output because we are running this code for a single subject and will be combining data across subjects in level 3. Also, let’s save filtered_func_data in case we need it sometime later, for example to create means for time series and use them as inputs in a PPI.
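A sketch (the output folder is a placeholder):

```python
datasink = pe.Node(nio.DataSink(), name='datasink')
datasink.inputs.base_directory = os.path.join(project_dir, 'level2_results')  # placeholder
l1l2.connect(flameo, 'copes', datasink, 'stats.@copes')
l1l2.connect(flameo, 'var_copes', datasink, 'stats.@varcopes')
l1l2.connect(highpass, 'out_file', datasink, 'preproc.@filtered_func')
```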
Run, Forrest….
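```python
l1l2.run()
# or, to use several cores on one machine:
# l1l2.run(plugin='MultiProc', plugin_args={'n_procs': 4})
```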
This is it for the preprocessing, levels 1 and 2. Next time, I will post my code for level 3 with some explanations.
I would like to thank Dr. Tyler Davis for his guidance and help with FSL and quality checking. Of course, all possible inaccuracies in the code are mine.