
The Online Fileserver system

The PETRA III experiments use DESY-central facilities for data storage. Currently two file server systems have been installed. Each system has $4 \times 10$ GE network interfaces (cumulative) managed by two processors, see figure 3.1. Each system is redundant to cope with hardware failures via a takeover option, i.e. if head 1 fails, head 2 takes over the work for all storage attached to the system3.1. Another safety measure is the 'snapshot' feature: it reserves 20 % of the available disk space and allows users - within given limits - to retrieve files that have accidentally been deleted.
Figure 3.1: File and computer server for PETRA III.
Currently, the system creates snapshots (a 'data freeze') in a cascade: every four hours an hourly.$n$ snapshot is taken, once a day a daily.$n$ snapshot is made at midnight, and each Sunday a weekly.$n$ snapshot is taken; the entire procedure repeats for up to five weeks. Thus, the status of the files is frozen every four hours and kept for one day, the daily status is taken at midnight and kept for seven days, and so on.
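The snapshot cascade described above can be sketched as a small retention table. The intervals and counts below are an illustration derived from the text (every four hours kept for a day, daily kept for a week, weekly kept for five weeks); the actual filer configuration may differ.

```python
from datetime import timedelta

# Hypothetical retention policy mirroring the cascade described above.
# Each entry: snapshot kind -> (creation interval, number of copies kept).
RETENTION = {
    "hourly": (timedelta(hours=4), 6),   # every 4 h, 6 copies span one day
    "daily":  (timedelta(days=1), 7),    # at midnight, kept for seven days
    "weekly": (timedelta(weeks=1), 5),   # each Sunday, kept for five weeks
}

def snapshot_names():
    """Names a user would see in the snapshot directory, e.g. hourly.0 ... weekly.4."""
    names = []
    for kind, (_interval, keep) in RETENTION.items():
        names.extend(f"{kind}.{n}" for n in range(keep))
    return names
```

With this policy a user sees 18 snapshots in total, the oldest (weekly.4) reaching back five weeks.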

The disk space itself is organized in aggregates and volumes. An aggregate is assigned to a compound of disk drives which contains the volumes. A single volume represents the storage of a specific beamline; thus, the Photon Science PETRA III beamlines share storage and bandwidth. The Online FS volumes can be accessed from Linux and Windows and carry standard Unix rights (user/group/others).
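Since the volumes carry standard Unix rights, they can be inspected with the usual tools. A minimal sketch using the Python standard library (the path is hypothetical):

```python
import os
import stat

def unix_rights(path: str) -> str:
    """Return the user/group/other permission string, e.g. 'rwxr-x---'."""
    mode = os.stat(path).st_mode
    # stat.filemode() yields e.g. '-rw-r-----'; drop the file-type character.
    return stat.filemode(mode)[1:]
```

For example, `unix_rights("/gpfs/current/raw")` on a group-readable directory might return `rwxr-x---`, showing that only the owner and group can access the data.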

The Online Fileserver is set up in four aggregates3.2 which establish the storage for the PETRA III beamlines. A single aggregate is created from multiple hard disk drives spanning different disk trays. The aggregates are created as RAID 6 arrays with spare drives.
Tests on a single Fileserver head (10 GE connection) gave the following results:

Please note that system performance also depends on other factors, for example the number of files written (to a single directory or in total to the system) and disk fragmentation.

When the system was bought, four beamlines claimed higher data rates. The volume foreseen for each of these so-called high-throughput beamlines was assigned to one of the four aggregates, each of which is managed by a file server head. The remaining volumes of beamlines with lower data rates were assigned according to total aggregate size. The current assignment is given below.

Table 3.1: Assignment of OnlineFS heads to beamline volumes. Beamlines shown in blue denote high-throughput beamlines as defined in 2009.

Filer head        Volume assignment   Size [TiB]
p3-fs01.desy.de   P03                 xx
                  P01                 xx
p3-fs02.desy.de   P09                 xx
                  P10                 xx
p3-fs03.desy.de   P06                 xx
                  P08                 xx
                  P02                 xx
p3-fs04.desy.de   P04                 xx
                  P11                 xx
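The head-to-volume assignment of table 3.1 can be expressed as a simple lookup, e.g. to determine which filer head serves a given beamline (sizes are omitted, as in the table; `head_for` is an illustrative helper, not part of the facility software):

```python
# Assignment of filer heads to beamline volumes, taken from Table 3.1.
HEAD_VOLUMES = {
    "p3-fs01.desy.de": ["P03", "P01"],
    "p3-fs02.desy.de": ["P09", "P10"],
    "p3-fs03.desy.de": ["P06", "P08", "P02"],
    "p3-fs04.desy.de": ["P04", "P11"],
}

def head_for(beamline: str) -> str:
    """Return the filer head serving the given beamline volume."""
    for head, volumes in HEAD_VOLUMES.items():
        if beamline in volumes:
            return head
    raise KeyError(f"no head assigned for beamline {beamline}")
```

For example, `head_for("P10")` returns `p3-fs02.desy.de`, matching the table.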



Andre Rothkirch 2013-07-17