BioHPC Cloud

biohpc_cloud

Compared to traditional HPC facilities, BioHPC aims to be easy to use for all of our users, regardless of computing experience. Though the heart of our systems is a high-performance compute cluster with fast storage, we offer a wide range of ways to use it. Cloud computing services are now commonplace, and most users are familiar with sites like Dropbox for file sharing and storage, or the many public bioinformatics tools that allow data to be processed via the web. The BioHPC Cloud encompasses all of our systems and services, with access via:

  • Simple web-based tools in this portal.
  • Specialized web applications for specific functionality, such as a research image bank.
  • Thin-client and workstation computers that are directly connected to our storage and compute systems.
  • Virtualized servers providing access to research software via a remote desktop.
  • Traditional remote access using a command line terminal and file transfer clients.

Nucleus Compute Cluster

nucleus.jpg

Our main compute facility is Nucleus, a 196-node heterogeneous compute cluster consisting of:

  • 24 nodes with 128GB RAM
  • 78 nodes with 256GB RAM
  • 48 nodes with 256GB RAM and new Xeon v4 processors
  • 8 GPU nodes with 256GB RAM - Tesla K20 / K40 (nucleus42-49)
  • 4 Large GPU nodes with 256GB RAM - two Tesla K80 per node (nucleus006-009)
  • 12 GPU nodes with 256GB RAM - two Tesla P100 per node (nucleus162-173)
  • 2 large memory nodes with 384GB RAM (nucleus81-82)

Across these systems, a total of over 8,500 CPU cores and 45TB of RAM are available.

Nodes are interconnected via an InfiniBand EDR & FDR network in a fat-tree topology, with 100/56 Gb/s throughput. 20 of the 256GB nodes contain NVIDIA Tesla GPU cards (K20, K40, K80, P100), for massively parallel jobs using the CUDA framework as well as large visualization tasks.
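
As a rough sketch, a GPU node is requested through the batch scheduler (SLURM, used via the sbatch command described under BioHPC Cloud Client); the partition and module names below are placeholders, not confirmed BioHPC values:

    #!/bin/bash
    #SBATCH --job-name=cuda_job
    #SBATCH --partition=GPU        # placeholder partition name
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:1           # request one GPU on the node
    #SBATCH --time=02:00:00

    module load cuda               # module name is a placeholder
    ./my_cuda_program              # your compiled CUDA binary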

Access to the compute cluster is via our web portal tools, or via SSH login to the head node, nucleus.biohpc.swmed.edu, with a BioHPC username and password.
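
For example, from a terminal (username and file names are placeholders):

    # Log in to the head node with your BioHPC credentials
    ssh username@nucleus.biohpc.swmed.edu

    # Transfer data with a standard file transfer client such as scp
    scp dataset.tar.gz username@nucleus.biohpc.swmed.edu:~/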


Lysosome Storage System

lysosome.jpg

Lysosome is our primary high-performance bulk storage system, consisting of 2.5PB of raw storage provided by a Data Direct Networks SFA12X system and an additional 960TB of raw storage provisioned on Dell PowerVault RAID hardware.

Our Data Direct Networks SFA12K system uses dual active-active storage controllers connected to multiple storage enclosures. Providing raw speeds of 35-40GB/s, the storage is configured to host a Lustre parallel filesystem and is connected to the Nucleus cluster with multiple InfiniBand and 10Gb Ethernet links. 40 Object Storage Targets (OSTs, backed by the DDN storage) are aggregated into a single high-performance filesystem, with metadata operations directed by a Metadata Target (MDT). This architecture is well suited to typical BioHPC workloads, which operate on large image and sequence datasets.
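
Because files are distributed across many OSTs, large files can be striped for higher throughput using the standard Lustre client tools; the following is a minimal sketch with placeholder paths:

    # Show how an existing file is striped across OSTs
    lfs getstripe /project/mylab/dataset.h5

    # Stripe new files in a directory across 4 OSTs with a 4MB stripe size
    lfs setstripe -c 4 -S 4M /project/mylab/large_images/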

Additional storage provided by Dell PowerVault systems also serves Lustre and NFS filesystems and our cloud services, and can achieve peak data rates of up to 6GB/s. NFS and Samba head nodes provide access for a variety of client systems.



Endosome Storage System

endosome.jpg

Endosome is a Panasas ActiveStor parallel storage system, providing our highest-performance storage, which is mounted as the /work filesystem. Endosome consists of 3 chassis filled with ActiveStor storage blades, each containing HDD and SSD drives. SSD-based caching and small-file storage, plus HDD storage for larger files, give excellent performance while still allowing Endosome to be used for large datasets. The PanFS filesystem used by Endosome allows cluster nodes direct access to data on the storage blades over the InfiniBand network, mediated by a storage director.

Endosome currently provides 240TB of raw storage and peak data rates of up to 4GB/s.


Lamella Cloud Storage Gateway

lamella.png

Lamella is our cloud storage gateway, providing easy access to files stored on BioHPC from Windows, Mac or Linux clients, over the web, or on a smartphone. The Lamella web interface at lamella.biohpc.swmed.edu provides an easy-to-use site, similar to Dropbox, to upload, download, manage and share files. Each user has an allocation of cloud storage and can also directly access their home, project and work allocations. This service also supports file synchronization between computers using the ownCloud client.

Direct access to files in home, project and work locations is possible by mounting shared drives under Windows, macOS, or Linux. Lamella shares these directories using SMB, with transfer rates up to 100MB/s for users on the campus 1Gb network, and higher speeds available via the campus 10Gb network. FTP access to storage is also available via the Lamella gateway.
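
On a Linux client, for example, a share can be mounted with the standard CIFS tools; the share and mount point names below are placeholders rather than exact BioHPC share names:

    # Mount a Lamella SMB share on Linux (share name is a placeholder)
    sudo mount -t cifs //lamella.biohpc.swmed.edu/home /mnt/biohpc \
        -o username=your_biohpc_username

    # On Windows, the equivalent share can be mapped as a network drive, e.g.:
    #   \\lamella.biohpc.swmed.edu\home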


BioHPC Cloud Client

cloud_client.png

BioHPC Cloud Clients are computers running Linux, integrated into the BioHPC cloud. These systems provide a graphical Linux desktop and have direct access to BioHPC storage over the campus 10Gb network. All software available on the BioHPC cluster can be used on the cloud clients, which are ideal for developing and testing code and analysis workflows. Scripts and programs can be run locally on the client or submitted directly to the Nucleus cluster using the sbatch command. Access to Windows applications is possible via a virtual machine, which can be easily installed.
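
As a brief illustration (script and file names are placeholders), the same command can be run on the client or handed to the scheduler:

    # Run locally on the cloud client...
    python analysis.py input.dat

    # ...or submit the same command to the Nucleus cluster
    sbatch --nodes=1 --time=01:00:00 --wrap="python analysis.py input.dat"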

Two models of cloud client are available. The thin-client is a small desktop device, while the workstation is a larger tower PC. The workstation has a more powerful CPU than the thin-client, and an internal GPU card. The workstation is recommended for users who need to run local analysis regularly, or who are developing GPU code.