Usage

heudiconv processes DICOM files and converts the output into user-defined paths.

Command-Line Arguments

Example:
heudiconv -d rawdata/{subject} -o . -f heuristic.py -s s1 s2 s3
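To make the example concrete, here is a minimal sketch of what a heuristic.py could contain. The create_key/infotodict interface and the seqinfo fields used below (protocol_name, series_id) follow the convention of heudiconv's bundled heuristics, but the output templates and matching rules are illustrative assumptions to adapt to your own protocols:

```python
# heuristic.py -- minimal sketch of a heudiconv heuristic (illustrative).
# heudiconv calls infotodict(seqinfo) and expects a dict mapping
# (template, outtypes, annotation_classes) keys to lists of series IDs.

def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    """Wrap an output-path template into the tuple heudiconv expects."""
    if not template:
        raise ValueError('Template must be a valid format string')
    return template, outtype, annotation_classes


def infotodict(seqinfo):
    """Assign each DICOM series to an output template by protocol name."""
    t1w = create_key('sub-{subject}/anat/sub-{subject}_T1w')
    rest = create_key('sub-{subject}/func/sub-{subject}_task-rest_bold')

    info = {t1w: [], rest: []}
    for s in seqinfo:
        # s describes one scanned series; match on whatever attribute is
        # reliable for your scanner protocol (here: the protocol name).
        if 'T1' in s.protocol_name:
            info[t1w].append(s.series_id)
        elif 'rest' in s.protocol_name.lower():
            info[rest].append(s.series_id)
    return info
```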

usage: heudiconv [-h] [--version]
                 [-d DICOM_DIR_TEMPLATE | --files [FILES [FILES ...]]]
                 [-s [SUBJS [SUBJS ...]]] [-c {dcm2niix,none}] [-o OUTDIR]
                 [-l LOCATOR] [-a CONV_OUTDIR] [--anon-cmd ANON_CMD]
                 [-f HEURISTIC] [-p] [-ss SESSION] [-b] [--overwrite]
                 [--datalad] [--dbg]
                 [--command {heuristics,heuristic-info,ls,populate-templates,sanitize-jsons,treat-jsons}]
                 [-g {studyUID,accession_number}] [--minmeta]
                 [--random-seed RANDOM_SEED] [--dcmconfig DCMCONFIG]
                 [-q {SLURM,None}] [--queue-args QUEUE_ARGS]

Named Arguments

--version show program’s version number and exit
-d, --dicom_dir_template
 location of the DICOM directory that can be indexed with subject ID {subject} and session {session}. Tarballs (possibly compressed) are supported in addition to directories. All matching tarballs for a subject are extracted and their content is processed in a single pass.
--files Files (tarballs, DICOMs) or directories containing files to process. Cannot be provided if using --dicom_dir_template or --subjects
-s, --subjects list of subjects; required when using the DICOM template. If not provided, DICOMs will first be “sorted” and subject IDs deduced by the heuristic
-c, --converter

Possible choices: dcm2niix, none

tool to use for DICOM conversion. Setting to “none” disables the actual conversion step; useful for testing heuristics.

-o, --outdir output directory for conversion setup (for further customization and future reference). This directory will refer to non-anonymized subject IDs
-l, --locator study path under outdir. If provided, it overrides the value provided by the heuristic. If --datalad is enabled, every directory within the locator becomes a super-dataset, thus establishing a hierarchy. Setting to “unknown” will skip that dataset
-a, --conv-outdir
 output directory for converted files. By default this is identical to --outdir. This option is most useful in combination with --anon-cmd
--anon-cmd command to run to convert subject IDs used for DICOMs into anonymized IDs. Such a command must take a single argument and return a single anonymized ID. Also see --conv-outdir
-f, --heuristic
 Name of a known heuristic or path to the Python script containing the heuristic
-p, --with-prov
 Store additional provenance information. Requires python-rdflib.
-ss, --ses session for longitudinal study_sessions, default is none
-b, --bids flag for output into BIDS structure
--overwrite flag to allow overwriting existing converted files
--datalad Store the entire collection as DataLad dataset(s). Small files will be committed directly to git, while large files go to the annex. New version (v6) annex repositories will be used in “thin” mode, so the result looks like any other regular directory (i.e., no symlinks into .git/annex). For now this applies only to BIDS mode.
--dbg Do not catch exceptions and show exception traceback
--command

Possible choices: heuristics, heuristic-info, ls, populate-templates, sanitize-jsons, treat-jsons

custom actions to be performed on provided files instead of regular operation.

-g, --grouping

Possible choices: studyUID, accession_number

How to group DICOMs (default: by studyUID)

--minmeta Exclude dcmstack meta information from sidecar JSON files
--random-seed Random seed to initialize RNG
--dcmconfig JSON file for additional dcm2niix configuration
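
As noted for --anon-cmd above, the anonymization hook is any executable that takes one subject ID as its single argument and prints the anonymized replacement. A hypothetical Python sketch follows; the script name (anonymize.py), the salt, and the hashing scheme are all illustrative choices, not part of heudiconv:

```python
#!/usr/bin/env python3
"""anonymize.py -- hypothetical --anon-cmd helper (illustrative).

heudiconv invokes the command with one argument (the original subject
ID) and uses whatever the command prints as the anonymized ID.
"""
import hashlib
import sys


def anonymize(subject_id):
    """Derive a stable pseudonymous ID from a salted hash.

    The salt and the 8-character truncation are arbitrary choices for
    illustration; pick your own site policy.
    """
    digest = hashlib.sha256(b'site-salt:' + subject_id.encode()).hexdigest()
    return 'anon' + digest[:8]


if __name__ == '__main__':
    print(anonymize(sys.argv[1]))
```

It could then be wired in with something like `heudiconv ... --anon-cmd ./anonymize.py --conv-outdir anonymized/`, keeping the non-anonymized setup under --outdir.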

Conversion submission options

-q, --queue

Possible choices: SLURM, None

batch system to submit jobs in parallel

--queue-args Additional queue arguments

Support

All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/nipy/heudiconv/issues.

If you have a problem or would like to ask a question about how to use heudiconv, please submit a question to NeuroStars.org with a heudiconv tag. NeuroStars.org is a platform similar to Stack Overflow but dedicated to neuroinformatics.

All previous heudiconv questions are available here: http://neurostars.org/tags/heudiconv/

Batch jobs

heudiconv can natively handle multi-subject, multi-session conversions, although it will process these linearly. To speed this up, multiple heudiconv processes can be spawned concurrently, each converting a different subject and/or session.

The following example uses SLURM and Singularity to submit each subject's DICOMs as an independent heudiconv execution.

The first script aggregates the DICOM directories and submits them to run_heudiconv.sh with SLURM as a job array.

#!/bin/bash

set -eu

# where the DICOMs are located
DCMROOT=/dicom/storage/voice
# where we want to output the data
OUTPUT=/converted/data/voice

# find all DICOM directories that start with "voice"
DCMDIRS=($(find "${DCMROOT}" -maxdepth 1 -name "voice*" -type d))

# submit to another script as a job array on SLURM
sbatch --array=0-$(( ${#DCMDIRS[@]} - 1 )) run_heudiconv.sh "${OUTPUT}" "${DCMDIRS[@]}"

The second script processes a DICOM directory with heudiconv using the built-in reproin heuristic.

#!/bin/bash
set -eu

OUTDIR=${1}
# receive all directories, and index them per job array
DCMDIRS=("${@:2}")
DCMDIR=${DCMDIRS[${SLURM_ARRAY_TASK_ID}]}
echo "Submitted directory: ${DCMDIR}"

IMG="/singularity-images/heudiconv-0.5.4-dev.sif"
CMD="singularity run -B ${DCMDIR}:/dicoms:ro -B ${OUTDIR}:/output -e ${IMG} --files /dicoms/ -o /output -f reproin -c dcm2niix -b --minmeta -l ."

printf 'Command:\n%s\n' "${CMD}"
${CMD}
echo "Successful process"
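
On a workstation without SLURM, the same per-subject fan-out can be sketched with a Python process pool. This is an illustrative alternative, not part of heudiconv: the voice* directory pattern and the heudiconv flags mirror the scripts above, and it assumes heudiconv is installed locally rather than run through Singularity:

```python
"""Local alternative to the SLURM array above: run one heudiconv
process per DICOM directory, a few at a time (illustrative sketch)."""
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def build_cmd(dcmdir, outdir):
    """Assemble one heudiconv invocation for a single DICOM directory."""
    return ['heudiconv', '--files', dcmdir, '-o', outdir,
            '-f', 'reproin', '-c', 'dcm2niix', '-b', '--minmeta', '-l', '.']


def convert(args):
    """Run one conversion; returns the heudiconv exit code."""
    dcmdir, outdir = args
    return subprocess.run(build_cmd(dcmdir, outdir)).returncode


def convert_all(dcmroot, outdir, workers=4):
    """Convert every voice* directory under dcmroot, `workers` at a time."""
    dirs = sorted(str(d) for d in Path(dcmroot).glob('voice*') if d.is_dir())
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(convert, [(d, outdir) for d in dirs]))
```

For example, `convert_all('/dicom/storage/voice', '/converted/data/voice')` would return the exit codes of all conversions; check that they are all zero. Adjust `workers` to your CPU and RAM budget, since each dcm2niix process can be memory-hungry.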