Pilot Data based File Management

drelu edited this page Jan 1, 2013 · 12 revisions

Applications can declaratively specify CUs and DUs and manage the data flow between them using the Pilot-API. A CU can have both input and output dependencies on a set of DUs. For this purpose, the API declares two fields, input_data and output_data, that can be populated with references to DUs. The RTS ensures that these dependencies are met when the CU is executed, i.e. either the DUs are moved to a Pilot close to the CU, or the CU is executed in a Pilot close to the DUs' Pilot. The input data is made available in the working directory of the CU. Depending on the locality of the DUs/CUs, different costs can be associated with this operation. The runtime system relies on an affinity-aware scheduler that minimizes data movement and, where possible, co-locates "affine" CUs and DUs.

Data Compute Dependency Management

# start compute unit
compute_unit_description = {
    "executable": "/bin/cat",
    "arguments": ["test.txt"],
    "number_of_processes": 1,
    "output": "stdout.txt",
    "error": "stderr.txt",
    # stages the content of the data unit to the working directory of the compute unit
    "input_data": [data_unit.get_url()],
    # files matching "std*" are staged back into the referenced data unit
    "output_data": [
        {
            data_unit.get_url(): ["std*"]
        }
    ],
    "affinity_datacenter_label": "eu-de-south",
    "affinity_machine_label": "mymachine-1"
}
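The output_data entry maps a DU URL to a list of wildcard patterns; files in the CU's working directory that match any pattern are staged into that DU. The matching semantics can be sketched with Python's fnmatch module (the file names are illustrative, and the actual runtime implementation may differ):

```python
import fnmatch

# Hypothetical files produced in the CU working directory
workdir_files = ["stdout.txt", "stderr.txt", "test.txt"]

# Patterns taken from the output_data entry above
patterns = ["std*"]

# Files that would be staged into the target Data-Unit
staged = [f for f in workdir_files
          if any(fnmatch.fnmatch(f, p) for p in patterns)]
print(staged)  # → ['stdout.txt', 'stderr.txt']
```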

Input Staging

The content of the Data-Unit referenced in the input_data field will be moved to the working directory of the Compute Unit.
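Before it can be referenced in input_data, the Data-Unit is typically created from a description listing the files it contains. A minimal sketch (the file path is illustrative; the resulting DU's get_url() is what goes into the CU's input_data field):

```python
# Hypothetical Data-Unit description holding the input file for the CU above
data_unit_description = {
    "file_urls": ["/home/user/test.txt"],   # illustrative local path
    "affinity_datacenter_label": "eu-de-south",
    "affinity_machine_label": "mymachine-1",
}
```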

For each Compute Unit, a sub-directory is created in the directory of the parent BigJob (BJ):

<BIGJOB_WORKING_DIRECTORY>/bj-54aaba6c-32ec-11e1-a4e5-00264a13ca4c/sj-55010912-32ec-11e1-a4e5-00264a13ca4c
<BIGJOB_WORKING_DIRECTORY>/bj-54aaba6c-32ec-11e1-a4e5-00264a13ca4c/sj-55153072-32ec-11e1-a4e5-00264a13ca4c

By default (i.e. if no working directory is specified in its Compute Unit Description), each Compute Unit is executed in its Compute-Unit-specific directory. If a working directory is specified, the Compute Unit is executed in that directory instead.
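Overriding the default sandbox might look as follows; note that the working_directory key name is an assumption here and should be checked against your BigJob version:

```python
# Hypothetical sketch: run the CU in a fixed directory instead of the
# auto-generated sj-<uuid> sandbox (key name "working_directory" is assumed)
compute_unit_description = {
    "executable": "/bin/cat",
    "arguments": ["test.txt"],
    "working_directory": "/tmp/my-shared-workdir",  # illustrative path
}
```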

Output Staging

To stage output, (i) create a Pilot-Data at the location to which you want to move the files; (ii) then create an empty Data-Unit and bind it to that Pilot-Data. A Data-Unit is a logical container for a set of data, while a Pilot-Data is a physical store for a set of DUs. This means you can simply create another DU in the Pilot-Data where your input DU resides.
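The two steps above can be sketched as follows. The descriptions mirror those shown elsewhere on this page; the API calls in the comments follow common Pilot-API examples and should be verified against your BigJob version:

```python
# (i) Hypothetical Pilot-Data description for the target location
#     (here the same SSH store used for the input data)
pilot_data_description = {
    "service_url": "ssh://localhost/tmp/pilot-data",
    "size": 100,
}

# (ii) An empty Data-Unit description -- no file_urls, so the DU starts empty
output_data_unit_description = {
    "affinity_datacenter_label": "eu-de-south",
    "affinity_machine_label": "mymachine-1",
}

# The DU is then bound to the Pilot-Data, e.g. (illustrative method names):
#   pd = pilot_data_service.create_pilot(pilot_data_description)
#   output_du = pd.submit_data_unit(output_data_unit_description)
# and referenced in the CU description via its URL:
#   "output_data": [{output_du.get_url(): ["std*"]}]
```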

Backend Specific Details

Depending on the backend, BigJob requires different parameters/configurations.

SSH

Service URL:

ssh://<hostname>/<path>

**Attention:** For usage on OSG, the SSH private key must be passed to Pilot-Data so that the SSH service can be accessed from the OSG worker nodes:

pilot_data_description = {
    "service_url": "ssh://localhost/tmp/pilot-data",
    "size": 100,
    "userkey": "/home/luckow/.ssh/rsa_osg",
}

iRods / OSG

Service URL:

"irods://gw68/${OSG_DATA}/osg/irods/luckow/?vo=osg&resource-group=osgGridFtpGroup"

Parameters:

  • hostname: host on which the manager runs
  • path: path at which the iRods data directory is mounted on the compute nodes (can contain environment variables, which will be expanded)

Get parameters:

  • vo: iTools VO parameter
  • resource-group: iTools resource-group parameter
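The parts of such a service URL can be illustrated with Python's standard urllib.parse module (the ${OSG_DATA} variable is left unexpanded here, as expansion happens on the compute nodes):

```python
from urllib.parse import urlparse, parse_qs

url = "irods://gw68/${OSG_DATA}/osg/irods/luckow/?vo=osg&resource-group=osgGridFtpGroup"
parts = urlparse(url)
params = parse_qs(parts.query)

print(parts.hostname)            # → gw68  (host running the manager)
print(parts.path)                # → /${OSG_DATA}/osg/irods/luckow/  (iRods data dir)
print(params["vo"])              # → ['osg']
print(params["resource-group"])  # → ['osgGridFtpGroup']
```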