globus client

This page includes the basic commands to use globus. For an overview of storage clients, see Storage clients.

Globus tools


To run the examples below you need a valid proxy; see StartGridSession.
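A quick way to verify your proxy before running the examples is sketched below. This assumes your session was set up with voms-proxy-init as described in StartGridSession; if the tool is missing, the script only prints a hint.

```shell
# Sanity check: is there still a valid proxy? (assumes voms-proxy-info is
# available from your grid middleware installation)
if command -v voms-proxy-info >/dev/null 2>&1; then
    voms-proxy-info -timeleft   # remaining proxy lifetime in seconds
else
    echo "voms-proxy-info not found; see StartGridSession to set up a proxy"
fi
```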


  • Listing directories on dCache:

    $globus-url-copy -list gsi

The globus-* clients do not offer an option to create directories. For this purpose use a different client, e.g. the uberftp client.
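A hedged sketch of creating a remote directory with uberftp is shown below. The server name and path are placeholders, not real endpoints; replace them with your own dCache endpoint and directory. The script falls back to printing the command when uberftp is not installed.

```shell
# Sketch: create a remote directory with uberftp (placeholder endpoint/path)
SERVER="gridftp.example.org"
REMOTE_DIR="/pnfs/example.org/data/homer/testdir"
if command -v uberftp >/dev/null 2>&1; then
    uberftp "$SERVER" "mkdir $REMOTE_DIR"
else
    echo "uberftp not installed; would run: uberftp $SERVER \"mkdir $REMOTE_DIR\""
fi
```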

Transferring data


Add the options -dbg -gt 2 -vb to display extra logging information for your transfer.

  • Copy file from dCache to local machine:

    $globus-url-copy \
    $    gsi \
    $    file:///`pwd`/zap.tar


globus-url-copy does NOT encrypt the data channel when transferring data to and from dCache. Even when you supply the command-line flags -dcpriv or -data-channel-private to enforce encryption, the data transfers are still not encrypted. If you need to transfer sensitive data, please contact our helpdesk so that we can help you with a more secure alternative. This flaw has been reported to the appropriate organisations.

  • Copy file from local machine to dCache:

    $globus-url-copy \
    $    file:///`pwd`/zap.tar \
    $    gsi
  • Recursive upload to dCache:

    $globus-url-copy -cd -r \
    $    /home/homer/testdir/ \
    $    gsi
    ## replace testdir with your directory
  • Recursive download from dCache:

    First create the directory locally, e.g. testdir.

    $globus-url-copy -cd -r \
    $    gsiftp:/// \
    $    /home/homer/testdir/
  • Third party transfer (between dCache sites):

    First create the remote directory, e.g. targetdir.

    $globus-url-copy -cd -r \
    $    gsi \
    $    gsi
    ## note: you must include the trailing slash!

    See also

    For dCache 3rd party transfers see also fts client.

Parallel streams

By default, globus-url-copy uses 10 parallel streams for transfers.
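The stream count can be changed with the -p (parallelism) option. The sketch below requests 4 streams; the gsiftp URL is abbreviated as elsewhere on this page and must be filled in, and the script only prints a hint when globus-url-copy is not installed.

```shell
# Sketch: override the default of 10 parallel streams with -p
if command -v globus-url-copy >/dev/null 2>&1; then
    globus-url-copy -vb -p 4 \
        file:///`pwd`/zap.tar \
        gsi
else
    echo "globus-url-copy not installed; -p sets the number of parallel streams"
fi
```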

Removing data

The globus-* clients do not offer an option to delete files or directories. For this purpose, use a different client, e.g. the uberftp client.
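As with directory creation, deletion can be done through uberftp. The sketch below uses a placeholder server and path; replace them with your own endpoint and file, and note the fallback merely echoes the command when uberftp is absent.

```shell
# Sketch: delete a remote file with uberftp (placeholder endpoint/path)
SERVER="gridftp.example.org"
REMOTE_FILE="/pnfs/example.org/data/homer/zap.tar"
if command -v uberftp >/dev/null 2>&1; then
    uberftp "$SERVER" "rm $REMOTE_FILE"
else
    echo "uberftp not installed; would run: uberftp $SERVER \"rm $REMOTE_FILE\""
fi
```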

Fifo pipes

When you want to process data from a large tar file (hundreds of gigabytes) that is stored on the Grid storage, you can extract just its content without copying the complete tar file to the Worker Node. Similarly, you can upload a directory that is packed into a tar file on the Grid storage on the fly. This trick avoids keeping a second copy of the data on the local node and works by using fifo pipes.
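The technique can be illustrated entirely locally, with a second tar process standing in for the globus-url-copy step (a self-contained sketch with made-up file names): the archive is streamed through the fifo, so the full .tar never exists on disk.

```shell
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
mkdir -p zap out
echo "hello" > zap/filename
mkfifo data.fifo
# Producer: pack the directory into the fifo in the background
tar -Bcf data.fifo zap/ & TAR_PID=$!
# Consumer: extract from the fifo into out/ (this is where
# globus-url-copy would read or write in the real workflow)
tar -Bxf data.fifo -C out
wait $TAR_PID
cat out/zap/filename   # → hello
```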

Extract directory from dCache

Extract the content of a tar file from the Grid storage on the worker node or UI:

## Create fifo for input data
$INPUT_FIFO=input.fifo  ## choose a name for the fifo
$mkfifo ${INPUT_FIFO}
## Extract the directory from fifo and catch PID
$tar -Bxf ${INPUT_FIFO} & TAR_PID=$!
## Download the content of the tar file, replace zap.tar with your tar file
$globus-url-copy -vb \
$    gsi \
$    file:///`pwd`/${INPUT_FIFO} && wait $TAR_PID

Extract a file

Extract a particular file from a known directory location in a tar file:

## Create fifo for input file
$INPUT_FIFO=input.fifo  ## choose a name for the fifo
$mkfifo ${INPUT_FIFO}
## Extract a particular file from fifo and catch PID
$tar -Bxf ${INPUT_FIFO} zap/filename & TAR_PID=$! # replace zap/filename with the exact location of your file in the tar
## Download the file, replace zap.tar with your tar file
$globus-url-copy -vb \
$    gsi \
$    file:///`pwd`/${INPUT_FIFO} && wait $TAR_PID

Transfer directory to dCache

$OUTPUT_FIFO=output.fifo  ## choose a name for the fifo
$mkfifo ${OUTPUT_FIFO} # create a fifo pipe
## Push output directory to file (fifo) and catch PID
$tar -Bcf ${OUTPUT_FIFO} zap/ & TAR_PID=$! # replace zap/ with the directory to be uploaded
## Upload the final dir with fifo
$globus-url-copy -vb file:///${PWD}/${OUTPUT_FIFO} \
$    gsi && wait ${TAR_PID}
## note: add the -stall-timeout flag in seconds (e.g. -stall-timeout 7200) for large files whose server-side checksum takes a long time to complete after the transfer