Duke-UNC Brain Imaging and Analysis Center
 FEAT and saving disk space

dvsmith
Advanced Member

USA
218 Posts

Posted - Feb 18 2008 :  4:44:34 PM

Goldman has been relatively low on space. If you're running FEAT to analyze your fMRI data, there are several gigabytes of redundant files you can delete if you're nearing your quota. The easiest way is to add a few "rm" commands to your scripts so the files are removed as the analyses run, as in the examples below. You can do this at all three levels of processing, but keep one copy of the first-level filtered_func_data.nii.gz files so you can go back and generate peristimulus plots if you need time courses. If you run preprocessing separately, you can keep your preprocessed data as is and delete files in the FEAT scripts as shown below.

In all cases below, $REALOUTPUT should be set to the output directory of FEAT, which is essentially your $OUTPUT variable with .feat or .gfeat appended. All the "rm" commands come after FEAT completes, so place them below that line (and make sure you don't background the feat command with a trailing "&", or the files could be deleted before FEAT finishes with them).
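To make the variable setup concrete, here is a minimal sketch; the path is a hypothetical example, so substitute your own:

```shell
# Hypothetical example of the variables assumed in the snippets below.
OUTPUT=/mnt/goldman/study/sub01/run01    # your usual FEAT output prefix (example path)
REALOUTPUT=${OUTPUT}.feat                # first level; use ${OUTPUT}.gfeat at higher levels
echo "$REALOUTPUT"
```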

I have not run into a case where I needed these files; later analysis steps in FEAT never read them. Most folks in Scott's lab have been doing this without any problems. If you do hit an error after deleting a file, it is most likely an unrelated path issue.

Let me know if you have any questions.

Cheers,
David


#FIRST LEVEL

feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${REALOUTPUT}
rm -f filtered_func_data.nii.gz


#SECOND LEVEL

feat ${MAINDIR2}/2ndLvlFixed_${SUBJ}.fsf
cd $REALOUTPUT
for j in `seq 22`; do #replace the 22 with the number of copes you have at the second level.
	
	COPE=cope${j}.feat
	cd $COPE
	rm -f filtered_func_data.nii.gz
	rm -f var_filtered_func_data.nii.gz
	cd ..

done



#THIRD LEVEL

feat ${ANALYZED}/3rdLvl_${RUN}_${CON_NAME}.fsf
cd $REALOUTPUT
cd cope1.feat
rm -f filtered_func_data.nii.gz
rm -f var_filtered_func_data.nii.gz

clithero
Junior Member

37 Posts

Posted - Feb 25 2008 :  3:25:26 PM
Goldman is once again incredibly low on space...seems to be hovering around 30-40 GB.

dvsmith
Advanced Member

USA
218 Posts

Posted - Feb 25 2008 :  3:43:30 PM
Another thing people can do to conserve space (now and in the future) is to delete all the reg_standard folders in your first-level directories. FEAT will delete these automatically during higher-level analyses if you tell it to, which is what I normally do. If you have tons of these sitting around, you may as well trash them: their only purpose is to save time, and that hardly matters since the cluster can redo registration in a few minutes.
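A one-line sketch of that cleanup, using find to remove every reg_standard folder under a directory of first-level analyses. The demo builds a throwaway tree with mktemp; in practice, point DIR at your own data:

```shell
# Demo tree (hypothetical); replace DIR with the parent of your .feat directories.
DIR=$(mktemp -d)
mkdir -p "$DIR/sub01.feat/reg_standard" "$DIR/sub02.feat/reg_standard"

# The actual cleanup: remove every reg_standard directory under DIR.
find "$DIR" -type d -name reg_standard -exec rm -rf {} +
```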

David


yaxley
Junior Member

USA
26 Posts

Posted - Apr 11 2008 :  3:53:03 PM
For FSL models that have already been run, the simple bash commands below will delete these files.

$ cd $FSL_Output_Dir_to_clean # eg, Level 1, 2, or 3
$ find . -name filtered_func_data.nii.gz -exec rm -f {} \;
$ find . -name var_filtered_func_data.nii.gz -exec rm -f {} \;
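Since these deletions are irreversible, it can be worth previewing what would be removed first by swapping -exec rm for -print. A sketch, using a throwaway demo tree built with mktemp; in practice run the find from your analysis directory:

```shell
# Demo tree (hypothetical); in practice just run the find from your output directory.
TREE=$(mktemp -d)
mkdir -p "$TREE/run01.feat"
touch "$TREE/run01.feat/filtered_func_data.nii.gz"

# Dry run: list matches instead of deleting them.
MATCHES=$(find "$TREE" \( -name filtered_func_data.nii.gz \
                       -o -name var_filtered_func_data.nii.gz \) -print)
echo "$MATCHES"
```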




dvsmith
Advanced Member

USA
218 Posts

Posted - Aug 26 2008 :  2:35:10 PM
One other big file that can be deleted is res4d.nii.gz, inside the stats directory of each *.feat directory (higher level or first level). It contains just the residuals; the only data files actually carried from one level to the next are the cope*.nii.gz and varcope*.nii.gz files in the stats directory.

With Rich's code, this would look like the following if you've already run your jobs:

cd $FSL_Output_Dir_to_clean # eg, Level 1, 2, or 3
find . -name res4d.nii.gz -exec rm -f {} \;
find . -name filtered_func_data.nii.gz -exec rm -f {} \;
find . -name var_filtered_func_data.nii.gz -exec rm -f {} \;


And if you're still running your jobs, it would look like this:

(REALOUTPUT should be the output of your analysis, which will end in either ".feat" or ".gfeat")

#FIRST LEVEL
feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${REALOUTPUT}
rm -f filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz


#SECOND LEVEL
feat ${MAINDIR2}/2ndLvlFixed_${SUBJ}.fsf
cd $REALOUTPUT
for j in `seq 22`; do #replace the 22 with the number of copes you have at the second level. you might also be able to do this with simple wildcard (*)
	
	COPE=cope${j}.feat
	cd $COPE
	rm -f filtered_func_data.nii.gz
	rm -f var_filtered_func_data.nii.gz
	rm -f stats/res4d.nii.gz
	cd ..

done
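As the comment in the loop suggests, a wildcard can replace the hard-coded cope count. A sketch of that variant, iterating over whatever cope*.feat directories exist; the demo tree built with mktemp is hypothetical, and in practice $REALOUTPUT is your .gfeat output directory:

```shell
# Demo tree (hypothetical); in practice REALOUTPUT is your .gfeat directory.
REALOUTPUT=$(mktemp -d)
mkdir -p "$REALOUTPUT/cope1.feat/stats" "$REALOUTPUT/cope2.feat/stats"
touch "$REALOUTPUT/cope1.feat/filtered_func_data.nii.gz" \
      "$REALOUTPUT/cope2.feat/var_filtered_func_data.nii.gz"

# Wildcard variant: no need to know the number of copes in advance.
cd "$REALOUTPUT"
for COPE in cope*.feat; do
    rm -f "$COPE/filtered_func_data.nii.gz" \
          "$COPE/var_filtered_func_data.nii.gz" \
          "$COPE/stats/res4d.nii.gz"
done
```

Note that if no cope*.feat directories exist, the unexpanded glob is passed through literally, so run this only inside a real .gfeat directory.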




#THIRD LEVEL
feat ${ANALYZED}/3rdLvl_${RUN}_${CON_NAME}.fsf
cd $REALOUTPUT
cd cope1.feat
rm -f filtered_func_data.nii.gz
rm -f var_filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz

dvsmith
Advanced Member

USA
218 Posts

Posted - Jun 04 2011 :  2:00:36 PM
One minor update: you can also delete the corrections.nii.gz file at the first level (it is about the same size as the largest files in your first-level output).

feat ${MAINOUTPUT}/FEAT_0${run}.fsf
cd ${OUTPUT}.feat
rm -f filtered_func_data.nii.gz
rm -f stats/res4d.nii.gz
rm -f stats/corrections.nii.gz


Nobody has found a use for these files, or a subsequent step in the FSL processing pipeline that reads them, so do not hesitate to delete them.
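Putting the whole thread together, here is a sketch of a combined retroactive cleanup in the style of Rich's find commands, covering every regenerable file named so far. The mktemp demo tree is hypothetical; in practice, cd into the FSL output directory you want to clean first:

```shell
# Demo tree (hypothetical); in practice cd into your FSL output directory instead.
CLEAN=$(mktemp -d)
mkdir -p "$CLEAN/sub01.feat/stats"
touch "$CLEAN/sub01.feat/filtered_func_data.nii.gz" \
      "$CLEAN/sub01.feat/stats/res4d.nii.gz" \
      "$CLEAN/sub01.feat/stats/corrections.nii.gz"

# Remove every regenerable file mentioned in this thread, at any level.
cd "$CLEAN"
for f in filtered_func_data.nii.gz var_filtered_func_data.nii.gz \
         res4d.nii.gz corrections.nii.gz; do
    find . -name "$f" -exec rm -f {} \;
done
```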