speed up bedpost by parallelization


I am using an FSL tool called bedpostx, which fits a diffusion model to my (preprocessed) data. The problem is that this process has now been running for over 24 hours. I would like to speed it up with poor man's parallelization: running bedpostx_single_slice.sh in several terminals, each applied to a batch of slices. I keep getting errors, though. This is the command I launch in the terminal:

bedpostx_single_slice.sh Tirocinio/Dati_DTI/DTI_analysis_copy 37

where the first input is the directory with my data and 37 is the index of the slice I want to analyze. This is the error that I get:

terminate called after throwing an instance of 'std::bad_alloc'
what():  std::bad_alloc
Aborted (core dumped)

Unfortunately there is not much documentation on this tool, and I am fairly new to programming.

In case it helps, here is the script of bedpostx_single_slice.sh:

#!/bin/sh
#   Copyright (C) 2012 University of Oxford

export LC_ALL=C

subjdir=$1
slice=$2
shift
shift
opts=$*

slicezp=`${FSLDIR}/bin/zeropad $slice 4`

${FSLDIR}/bin/xfibres\
 --data=$subjdir/data_slice_$slicezp\
 --mask=$subjdir/nodif_brain_mask_slice_$slicezp\
 -b $subjdir/bvals -r $subjdir/bvecs\
 --forcedir --logdir=$subjdir.bedpostX/diff_slices/data_slice_$slicezp \
 $opts  > $subjdir.bedpostX/logs/log$slicezp  && echo Done && touch $subjdir.bedpostX/logs/monitor/$slice
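To drive the per-slice script from several processes without opening a terminal per slice, a small wrapper loop can launch jobs in the background and throttle them. This is a minimal sketch, assuming the subject directory from the question and an illustrative slice range of 0-7; the real bedpostx_single_slice.sh call is left commented out, with `echo` standing in so the batching pattern itself is runnable. Note that launching too many xfibres processes at once can exhaust memory, which is one plausible source of a std::bad_alloc:

```shell
#!/bin/sh
# Poor man's parallelization: launch one slice job at a time in the
# background, and wait after every MAXJOBS launches so the machine is
# not flooded with simultaneous xfibres runs.
LOG=$(mktemp)
MAXJOBS=4            # how many slices to process at once (assumption)
i=0
for slice in 0 1 2 3 4 5 6 7; do
    # Real call (subject directory taken from the question):
    # bedpostx_single_slice.sh Tirocinio/Dati_DTI/DTI_analysis_copy "$slice" &
    echo "slice $slice done" >> "$LOG" &   # stand-in for the real command
    i=$((i + 1))
    [ $((i % MAXJOBS)) -eq 0 ] && wait     # throttle: block until this batch finishes
done
wait                                       # wait for any remaining jobs
```

The `wait`-every-batch approach is crude (a batch finishes only when its slowest slice does) but needs nothing beyond POSIX sh.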

1 Answer

Answered by WeirdAlchemy

BedpostX is now quite well parallelized by the FSL team themselves. You would be much better off taking advantage of that directly.

If you want a quick and easy way to parallelize, check out Parallelizing FSL without the pain from NeuroDebian.
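As a sketch of the built-in route (whether jobs actually run in parallel depends on how your fsl_sub/cluster environment is configured, so treat this as an assumption to verify locally):

```shell
# bedpostx splits the data into slices and submits one job per slice
# through FSL's fsl_sub wrapper. On a machine with a configured grid
# engine, fsl_sub detects it and runs the slice jobs in parallel;
# otherwise it falls back to processing them serially.
bedpostx Tirocinio/Dati_DTI/DTI_analysis_copy
```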