The section 'Beamtime' covers the resources foreseen for a running experiment. It allows the experimenters to operate the beamline instruments and to store and process data during the experiment on the Online Fileserver (OnlineFS). The Online Fileserver fulfills most performance demands of the various experiments and acts as buffer storage. This environment is not foreseen to be accessed after the beamtime, nor does it provide automated backup or archiving procedures. Hence, to free storage space on the OnlineFS, to create an archive copy of the dataset, to make data available from outside DESY, and to allow data analysis without disturbing running experiments, the data has to be "put elsewhere", i.e. migrated. Commonly, the user copies the data to their own storage media during the beamtime. Currently, USB 2.0 as well as e-SATA connections are available at every PETRA III beamline; in some cases USB 3.0 or FireWire connections exist. The Perkin Elmer detector PCs at PETRA III also offer an empty exchange bay (a 'Wechselrahmen') for 3.5'' SATA hard drives. Please note that filesystem support depends on the operating system of the PC used for transferring data.
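For users who copy data to their own disks during the beamtime, the sketch below illustrates one possible way to copy a directory tree and verify each copied file with a checksum. It is only an example; the source and target paths are placeholders, not actual beamline mount points.

```python
# Minimal sketch: copy a beamtime directory to an external drive and verify
# every file with an MD5 checksum. Paths are assumptions for illustration.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/path/to/onlinefs/beamtime")    # placeholder OnlineFS location
TARGET = Path("/media/usb_disk/beamtime")      # placeholder mount point of the user's disk

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file in chunks to limit memory use."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = TARGET / src.relative_to(SOURCE)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                      # copy2 preserves timestamps
    if md5sum(src) != md5sum(dst):
        print(f"Checksum mismatch: {src}")
```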
For data migration, a script can be executed on selected workgroup servers by beamline staff which allows data migration to the DESY dCache, see sec. 1.4. The script usually results in the creation of an archive file (tarball) in the dCache tape instance and of a corresponding entry in the catalog of the portal. This provides long-term data storage and establishes rights management with respect to the participants of the beamtime. After migration, the dataset is available from outside DESY via the portal and can, with a valid DESY account, be staged (read back as single files) to different storage space in the dCache disk instance. This procedure prevents interference with beamline operation when the data is accessed after the beamtime. Ideally, the migration is initiated by beamline staff directly at the end of each beamtime.
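The following sketch only illustrates, in simplified form, what the migration step conceptually produces: a single tarball of the beamtime data together with a checksum that could be registered in the portal catalog. It is not the actual DESY migration script; the beamtime directory and the archive name are assumptions.

```python
# Illustration of the migration concept: pack a beamtime directory into one
# tarball and compute a checksum for it. NOT the actual DESY migration script.
import hashlib
import tarfile
from pathlib import Path

BEAMTIME_DIR = Path("/path/to/onlinefs/beamtime")   # placeholder beamtime data
ARCHIVE = Path("/tmp/beamtime_archive.tar")         # placeholder archive name

with tarfile.open(ARCHIVE, "w") as tar:
    tar.add(BEAMTIME_DIR, arcname=BEAMTIME_DIR.name)

# Checksum the archive in chunks so large tarballs do not exhaust memory.
digest = hashlib.sha256()
with ARCHIVE.open("rb") as fh:
    for chunk in iter(lambda: fh.read(1 << 20), b""):
        digest.update(chunk)
print(f"{ARCHIVE} sha256={digest.hexdigest()}")
```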
Once in dCache, the data can, for example, be downloaded as a single tarball via web browser (e.g. by a user from outside DESY), or it can be restored to dedicated disk space on site (usually in the case of in-house users). Optionally, staging can already be requested when the migration step is executed.
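As a rough illustration of the download case, the sketch below streams a tarball from an assumed URL and unpacks it locally. The URL is a placeholder, and the authentication required by the real portal (a valid DESY account) is not shown.

```python
# Sketch only: stream a beamtime tarball over HTTPS and unpack it locally.
# The URL is a placeholder; portal authentication is intentionally omitted.
import tarfile
import urllib.request

URL = "https://portal.example.org/download/beamtime_archive.tar"  # placeholder
LOCAL = "beamtime_archive.tar"

with urllib.request.urlopen(URL) as response, open(LOCAL, "wb") as out:
    # Read in 1 MiB chunks instead of holding the whole archive in memory.
    while chunk := response.read(1 << 20):
        out.write(chunk)

with tarfile.open(LOCAL) as tar:
    tar.extractall(path="restored_data")
```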
When data staging has been requested, e.g. via the portal, the user requesting the staging will be informed by email once staging has finished. The email will also contain the path where the data can be found when using the compute resources. The data can be accessed read-only from the workgroup server pool p3wgs6 in the same file structure as provided in the migration step. The computers in the pool p3wgs6 provide common software for analysis.
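A minimal sketch of how staged data might be processed read-only on a workgroup server is given below. The staging path is a placeholder for the path sent by email, and the file pattern is just an example.

```python
# Sketch: iterate over staged data read-only on a workgroup server.
# The path below is a placeholder for the path received by email.
from pathlib import Path

STAGED = Path("/path/from/staging/email")        # placeholder staging path

for data_file in sorted(STAGED.rglob("*.tif")):  # example file pattern
    # Open read-only; the staged dCache disk space cannot be written to.
    with data_file.open("rb") as fh:
        header = fh.read(8)
        print(data_file.name, header.hex())
```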
Please note: there is currently no procedure for accessing the dCache disk instance from Windows (e.g. via Netinstall or by mapping a network drive). Update (Jul 2013): to map dCache to a drive letter from Windows, please contact A. Rothkirch, FS-EC.
Analysis results cannot be written to the dCache disk instance and thus have to be written to disk space with write permission. This can be either the user's home directory in AFS or a so-called XXL user directory on the Offline Fileserver (OfflineFS) in AFS, see sec. 1.5.2. A user's XXL directory is included in the common DESY backup, similar to his/her AFS home, and is created on direct request to FS-EC. A user's XXL volume has an initial quota of 30 GB, which can be moderately enlarged. The XXL storage is not foreseen to hold raw beamtime data, in order to keep the backup option feasible. The OfflineFS also provides storage space to temporarily store larger amounts of data. This temporary space is shared among all users and is EXCLUDED from backup.
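As an illustration of where results should go, the sketch below writes an analysis output to an assumed XXL directory and checks the reported free space first. The path and the result value are purely hypothetical, and shutil.disk_usage may not reflect the actual AFS quota.

```python
# Sketch: write analysis results to a (placeholder) XXL directory in AFS,
# since dCache is read-only for users. Path and result value are hypothetical.
import shutil
from pathlib import Path

XXL_DIR = Path("/afs/.../xxl/username")                 # placeholder XXL path
RESULTS = XXL_DIR / "analysis" / "run_0001_results.txt"

RESULTS.parent.mkdir(parents=True, exist_ok=True)

# Filesystem view of free space; the AFS quota may differ from this figure.
usage = shutil.disk_usage(XXL_DIR)
print(f"Free space reported for the XXL volume: {usage.free / 1e9:.1f} GB")

RESULTS.write_text("integrated intensity: 1.23e5\n")    # dummy analysis result
```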
Please note that this concept also holds for data taken by FS staff at external facilities, see infobox on page .