Compared to traditional HPC facilities, BioHPC aims to be easy to use for all of our users, regardless of computing experience. Though the heart of our systems is a high-performance compute cluster with fast storage, we offer a wide range of ways to use it. Cloud computing services are now commonplace, and most users are familiar with sites like Dropbox for file sharing and storage, or the many public bioinformatics tools that allow data to be processed via the web. The BioHPC Cloud encompasses all of our systems and services, with access via:
Our main compute facility is Nucleus, a 276-node heterogeneous compute cluster, consisting of:
Across these systems, a total of more than 11,500 CPU cores and 45 TB of RAM are available.
Access to the compute cluster is via our web portal tools, or by SSH login to the head node, nucleus.biohpc.swmed.edu, using a BioHPC username and password.
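As a simple illustration, a terminal session on the head node can be opened from any Linux or Mac machine on the campus network (your_username below is a placeholder for your BioHPC username):

    # Log in to the Nucleus head node with your BioHPC credentials
    ssh your_username@nucleus.biohpc.swmed.edu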
Lysosome is our primary high-performance bulk storage system, consisting of 2.5 PB of raw storage provided by a Data Direct Networks SFA12K system, plus an additional 960 TB of raw storage provisioned on Dell PowerVault RAID hardware.
Our Data Direct Networks SFA12K system uses dual active-active storage controllers connected to multiple storage enclosures. Providing raw speeds of 35-40 GB/s, the storage is configured to host a Lustre parallel filesystem and is connected to the Nucleus cluster with multiple InfiniBand and 10Gb Ethernet links. 40 Object Storage Targets (OSTs, backed by the DDN storage) are aggregated into a single high-performance filesystem, with metadata operations handled by a Meta Data Target (MDT). The architecture is well suited to typical BioHPC workloads, which operate on large image and sequence datasets.
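As a sketch of how the OSTs come into play, Lustre lets users inspect and control how files are striped across them with the standard lfs utility; the paths and the stripe count of 8 below are purely illustrative, not recommended settings:

    # Show how an existing file is striped across the OSTs
    lfs getstripe /project/mygroup/large_image.tif
    # Stripe new files written into a directory across 8 OSTs for higher throughput
    lfs setstripe -c 8 /project/mygroup/big_dataset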
Since March 10, 2017, we have operated a General Parallel File System (GPFS) storage system at the Clements University Hospital site. This system is connected to our Nucleus cluster and Lysosome storage by high-speed fiber links running across Harry Hines Blvd. It currently provides the /work and /archive file spaces for users, as well as off-site backups (mirrors of /home2, incremental backups of /project, and copies of other web services) for disaster recovery. The system contains ~720 8 TB hard drives in 12 enclosures, providing 3.4 PB of usable space. Aggregate I/O throughput is ~14 GB/s, with a maximum per-compute-node throughput of ~3.4 GB/s.
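As a quick illustration, the size and free space of these file spaces can be checked from any cluster node with standard tools (the paths follow the conventions above):

    # Report the capacity and free space of the GPFS-backed file spaces
    df -h /work /archive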
Lamella is our cloud storage gateway, providing easy access to files stored on BioHPC from Windows, Mac, or Linux clients, over the web, or on a smartphone. The Lamella web interface at lamella.biohpc.swmed.edu provides an easy-to-use site, similar to Dropbox, to upload, download, manage, and share files. Each user has an allocation of cloud storage and can also directly access their home, project, and work allocations. This service also supports file synchronization between computers using the ownCloud client.
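For command-line use, the ownCloud client also ships a one-shot sync tool, owncloudcmd; the sketch below assumes the web address above is also the sync endpoint, and the local folder name is just an example:

    # Synchronize a local folder with your Lamella cloud storage (prompts for your password)
    owncloudcmd -u your_username ~/lamella_sync https://lamella.biohpc.swmed.edu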
Direct access to files in home, project, and work locations is possible by mounting shared drives under Windows, Mac OS X, or Linux. Lamella shares these directories using SMB, with transfer rates up to 100 MB/s for users on the campus 1Gb network, and higher speeds available via the campus 10Gb network. FTP access to storage is also available via the Lamella gateway.
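As a sketch of what mounting looks like on a Linux client (the share name home2 and the mount point are assumptions; check the portal for the exact share paths):

    # Mount the BioHPC home share over SMB; you will be prompted for your BioHPC password
    sudo mkdir -p /mnt/biohpc-home
    sudo mount -t cifs //lamella.biohpc.swmed.edu/home2 /mnt/biohpc-home -o username=your_username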
BioHPC Cloud Clients are computers running Linux, integrated into the BioHPC cloud. These systems provide a graphical Linux desktop and have direct access to BioHPC storage over the campus 10Gb network. All software available on the BioHPC cluster can be used on the cloud clients, which are ideal for developing and testing code and analysis workflows. Scripts and programs can be run locally on the client or submitted directly to the Nucleus cluster using the sbatch command, as in the example below. Access to Windows applications is possible via a Virtual Machine, which can be easily installed.
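A minimal batch script might look like the following; the partition name, module, and script names are illustrative placeholders and should be replaced with the queues and software you actually use:

    #!/bin/bash
    #SBATCH --job-name=example_job        # Name shown in the queue
    #SBATCH --partition=super             # Assumed partition name; check the portal for real queues
    #SBATCH --nodes=1                     # Run on a single node
    #SBATCH --time=01:00:00               # Wall-clock limit of one hour
    #SBATCH --output=example_job.%j.out   # Log file; %j expands to the job ID

    # Load an environment module and run the analysis program
    module load python
    python my_analysis.py

The script is then submitted from a cloud client or the head node with: sbatch example_job.sh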
Two models of cloud client are available. The thin-client is a small desktop device, while the workstation is a larger tower PC. The workstation has a more powerful CPU than the thin-client, and an internal GPU card. The workstation is recommended for users who need to run local analysis regularly, or who are developing GPU code.
BioHPC Compute Nodes are interconnected via an InfiniBand network in a fat-tree topology. The 6 central EDR switches provide up to 43.2 Tb/s of total switching capacity. FDR and EDR cables connect the nodes to the 23 Mellanox switches, with fiber cables between the switches; these links support port speeds of up to 56 Gb/s and 100 Gb/s respectively.
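As a hedged way to see these link rates in practice, the standard InfiniBand diagnostics can be run on any node (output varies by node and adapter; an EDR link reports a rate of 100):

    # Show the state and rate of the local InfiniBand port
    ibstat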