Welcome to the CnbcUsers web
Introduction to CNBC Computing Facilities

The CNBC at Carnegie Mellon University (CMU) maintains its own computing facilities, providing faculty, staff, and students with support, access to state-of-the-art equipment, and high-speed network access.

The Center maintains multiple servers located in the CNBC machine room. All servers have redundant power supplies and Uninterruptible Power Supplies (UPS), allowing them to run for several hours during a power outage. A 70TB, enterprise-grade, RAID 6 disk storage system provides space for users to store their data; users can access their files from their local computers over CMU's network. Files are incrementally backed up to a separate server and retained for 3 months. A server with 40TB of enterprise-grade RAID 5 disk storage was installed in October 2014 to provide the CNBC with incremental desktop/laptop backup using CrashPlan PROe, a multi-platform (macOS, Windows, Linux, Solaris) enterprise backup solution. In June 2016, a 100TB, enterprise-grade disk storage system using ZFS RAID 6 was built to store data for the MICrONS project, funded by the Intelligence Advanced Research Projects Activity (IARPA).
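As a sketch of how files on the storage system can be reached from a local computer over CMU's network, standard tools such as rsync over SSH work well; the hostname and paths below are hypothetical placeholders, not the actual server names:

```shell
# Copy a local results directory up to your space on the storage server
# (hostname "data.cnbc.cmu.edu" and the paths are hypothetical examples).
rsync -avz --progress ./results/ yourusername@data.cnbc.cmu.edu:/data/yourusername/results/

# Pull the same files back down to the local machine.
rsync -avz yourusername@data.cnbc.cmu.edu:/data/yourusername/results/ ./results/
```

Because rsync transfers only changed files, repeated runs of the same command behave like a simple incremental sync.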

The CNBC Cluster is shared with CMU's Psychology Department and is located in the CMU School of Computer Science (SCS) machine room. The SCS Facility has 24/7 support and a team of HPC experts who work closely with the CNBC computing administrator. The cluster currently occupies two 19" racks, with power distribution units (PDUs) and an Uninterruptible Power Supply (UPS) protecting the essential components from loss of the primary power source. The cluster consists of 30 nodes, 416 CPUs, and 24 Nvidia GeForce Titan X 12GB cards used for GPU processing. These resources are used for research, imaging, modeling, and simulations.

The cluster was initially built in July 2010 and upgraded in July 2012. In June 2015, it was reconfigured, upgraded, and merged with the CMU Psychology Department's cluster. In June 2016, a second rack, PDU, and Ethernet and InfiniBand switches were added to accommodate expansion of the cluster; this upgrade included 3 new nodes with 4 GPU processing cards each. In January 2017, 3 more GPU nodes were added, each with 4 x Nvidia GeForce Titan X Maxwell 12GB GDDR5 384-bit PCI Express 3.0 (250W) cards.

The combined system now includes a total of 85 terabytes of shared disk space and 1.8 terabytes of RAM. Each node is connected via Quad Data Rate (QDR) InfiniBand, and the cluster is accessible through the Carnegie Mellon University network. The core operating system of the cluster is ROCKS+, a CentOS-based operating system with specialized packages for parallel and distributed computing, providing a flexible, stable working environment for the machine's core applications. The Portable Batch System (PBS) is used to schedule and allocate computational tasks (i.e., batch jobs) among the available CPU computing nodes, while the SLURM Workload Manager handles GPU resource allocation and batch job scheduling. Users' home directories are incrementally backed up to SCS's tape backup system.
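Because CPU jobs go through PBS and GPU jobs through SLURM, users submit with different scripts depending on the resource. A minimal sketch of each follows; the job names, resource counts, and program names are hypothetical placeholders, not site-specific defaults:

```shell
#!/bin/bash
# cpu_job.pbs -- minimal PBS script for a CPU batch job
# (resource counts and program name are hypothetical placeholders)
#PBS -N my_cpu_job
#PBS -l nodes=1:ppn=8
#PBS -l walltime=04:00:00
cd "$PBS_O_WORKDIR"
./run_analysis
```

```shell
#!/bin/bash
# gpu_job.sh -- minimal SLURM script for a GPU batch job
# (resource counts and program name are hypothetical placeholders)
#SBATCH --job-name=my_gpu_job
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00
srun ./run_gpu_model
```

The PBS script would be submitted with `qsub cpu_job.pbs` and the SLURM script with `sbatch gpu_job.sh`; actual queue/partition names and limits should be confirmed with the CNBC computing administrator.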
The two data partitions are configured using RAID 6 and the ZFS file system. The data partitions have snapshots enabled, and each is replicated onto an identical RAID 6 volume for additional protection.
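The snapshot-and-replicate scheme described above can be sketched with standard ZFS commands; the pool and dataset names below are hypothetical examples, not the cluster's actual names:

```shell
# Take a point-in-time snapshot of a data partition
# (pool/dataset names "tank/data" and "backup/data" are hypothetical).
zfs snapshot tank/data@2017-09-27

# Replicate the snapshot onto a second RAID 6 volume with send/receive.
zfs send tank/data@2017-09-27 | zfs receive backup/data

# Subsequent snapshots can be replicated incrementally,
# transferring only the blocks changed since the last snapshot.
zfs send -i tank/data@2017-09-27 tank/data@2017-10-04 | zfs receive backup/data
```

Incremental send/receive keeps the replica volume current without recopying the full dataset each time.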

Software packages used on the cluster include:

  • MATLAB
  • AFNI
  • Freesurfer
  • MNE
  • R
  • SPM8
  • DSI Studio
  • LENS

The cluster is kept up to date and maintained by SCS Facilities. David Pane, Manager of Computational Resources, is on hand to answer any questions or concerns that users may have about the cluster.

If you are interested in using the CNBC cluster for computational research, please contact David. A small maintenance fee is associated with access to the cluster and helps the CNBC make software and hardware improvements as advances in technology occur.

Documentation for psych-o:


Topic revision: r21 - 2017-09-27 - DavidPane