Services

Server
Four high-density servers
  • Network (DNS and DHCP)
  • Central user management:
    • Directory Service (Kerberos + LDAP)
    • Integration with TUM/LRZ Active Directory Service
    • Home directories (NFS)
  • Network Attached Storage (NAS)
  • E18 Linux
  • Web server
  • Mail server
  • Database server
  • Login server
  • High performance computing: batch farm (cluster)

Infrastructure

Two racks, located in the central server room of the Physics department and dedicated to E18, house the chair's central computing infrastructure. The racks are supplied with a total cooling power of 30 kW, while emergency power combined with two UPS units grants high availability.


E18 Linux

The servers and several dozen workstations at the chair run a customized variant of Linux based on Scientific Linux 6. It integrates all central services, such as user management (including single sign-on), storage, and cluster access. In addition, it ships a large set of preinstalled standard and self-maintained packages, which together form a powerful, homogeneous development environment for users.
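The single sign-on described above rests on Kerberos: after acquiring a ticket once, services such as SSH and the NFS home directories accept the ticket instead of a password. A minimal sketch of a typical session follows; the realm and hostname are hypothetical placeholders, as the actual values are site-specific.

```shell
# Acquire a Kerberos ticket once per session
# (realm E18.EXAMPLE.EDU is a hypothetical placeholder)
kinit username@E18.EXAMPLE.EDU

# Verify the cached ticket
klist

# With a valid ticket, GSSAPI-enabled SSH logs in without a password
# (hostname is a hypothetical placeholder)
ssh -o GSSAPIAuthentication=yes login.example.edu
```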


E18 Linux: Desktop

Linux Cluster

To satisfy the high computing demands typical of modern particle physics experiments, substantial computing infrastructure has been built up at the chair. This includes a small but nevertheless powerful Linux cluster, which is run as a batch farm.

Software

GridEngine serves as the cluster scheduler for the batch farm, which runs E18 Linux as its operating system.
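Jobs for the batch farm are described by a small submission script with GridEngine directives and handed to the scheduler with `qsub`. The sketch below shows a typical script; the job name, parallel environment name, and payload are hypothetical placeholders, as the actual values depend on the local cluster configuration.

```shell
#!/bin/bash
# Minimal GridEngine job script (names below are hypothetical placeholders)
#$ -N analysis_job        # job name
#$ -cwd                   # run in the current working directory
#$ -l h_rt=02:00:00       # wall-clock time limit: 2 hours
#$ -o analysis_job.out    # file for standard output
#$ -e analysis_job.err    # file for standard error

./run_analysis            # the actual payload (placeholder)
```

The script is submitted with `qsub job.sh`; `qstat` shows the state of queued and running jobs, and `qdel <job-id>` removes a job from the queue.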

Hardware

The compute cluster consists of 32 compute nodes, providing a total computing power of ~4 TFlops on 384 cores.

Of these, 192 cores are provided by 8 high-density nodes, each equipped with 24 cores and 64 GB of RAM, packed into 4 HE. These nodes feature high-performance QDR InfiniBand (40 Gbit/s) interconnects supporting Remote Direct Memory Access (RDMA).

An additional 24 nodes are located in a BladeCenter chassis, each providing 8 cores that share 16 GB of RAM.

Cluster Front: IB switch (top), 2 HE housing for 4 cluster nodes (bottom)
Cluster Back: IB switch (top), 2 HE housing for 4 cluster nodes (bottom)

Storage

Network attached storage is provided via the NFSv4 and SMB protocols with Kerberos authentication. A total of ~175 TiB (gross) of redundant storage is provided by three servers.
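A Kerberos-secured NFSv4 export of the kind described above is mounted with the `sec=krb5` option, so that file access is authenticated by the user's Kerberos ticket rather than by client IP. A minimal sketch, with hypothetical server and export names:

```shell
# Mount a Kerberos-authenticated NFSv4 export
# (server name and export path are hypothetical placeholders)
mount -t nfs4 -o sec=krb5 nas.example.edu:/home /home

# Equivalent /etc/fstab entry for a persistent mount:
# nas.example.edu:/home  /home  nfs4  sec=krb5  0  0
```

Stronger variants `sec=krb5i` (integrity) and `sec=krb5p` (privacy) additionally protect the NFS traffic itself, at some performance cost.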