
Biochemistry Compute Cluster

What is the Biochemistry Computational Cluster?

The Biochemistry Computational Cluster (BCC) is a High Throughput Computing (HTC) environment available to the University of Wisconsin-Madison Department of Biochemistry. The BCC utilizes HTCondor, which is developed by the Computer Sciences Department at UW-Madison.

The BCC was implemented with significant help from, and in coordination with, the Center for High Throughput Computing (CHTC). The BCC is configured to allow users to either submit their jobs to the BCC alone or to “flock out” to CHTC, offering significant processing capabilities to Biochemistry users.
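As a sketch of how flocking is typically enabled in an HTCondor submit file, the fragment below adds a flocking attribute to an otherwise ordinary job description. The attribute name shown (`+WantFlocking`) follows CHTC convention, but the exact attribute used on the BCC is an assumption; check with the cluster administrator.

```
# Hypothetical submit file; filenames and the flocking
# attribute name are illustrative, not BCC-specific.
universe       = vanilla
executable     = analyze.sh
log            = job.log
output         = job.out
error          = job.err
request_cpus   = 1
request_memory = 2GB
request_disk   = 1GB
+WantFlocking  = true    # allow the job to run on CHTC pools as well
queue
```

Without the last attribute, the job would be matched only against BCC machines; with it, HTCondor may also send the job to pools the BCC is configured to flock to.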

High Throughput Computing (HTC) is different from High Performance Computing (HPC): HPC targets large, tightly coupled computations on large datasets with large memory footprints, whereas HTC runs many independent jobs over long periods of time.

If you require HPC rather than HTC, or if you are not part of the Biochemistry department, you may obtain a free account at the Center for High Throughput Computing (CHTC):

“Standard access to CHTC resources are provided to all UW-Madison researchers, free of charge.”

How do I use the Biochemistry Computational Cluster?

Please contact Jean-yves Sgro (jsgro@wisc.edu) with the Biochemistry Computational Research Facility for information on accessing and utilizing the Biochemistry Computational Cluster.

Access and Tutorial:

Specific information and a tutorial are available on the page: “Tutorial 1 – Accessing linux cluster and htcondor.”
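For orientation before working through the tutorial, a minimal HTCondor submit file looks like the sketch below. The filenames are placeholders, not BCC-specific conventions.

```
# hello.sub -- minimal example submit file (illustrative names)
universe       = vanilla
executable     = hello.sh     # the script or program to run
log            = hello.log    # HTCondor's event log for this job
output         = hello.out    # captured stdout
error          = hello.err    # captured stderr
request_cpus   = 1
request_memory = 1GB
request_disk   = 1GB
queue                          # queue one copy of the job
```

The job is then submitted with `condor_submit hello.sub` and monitored with `condor_q`; the tutorial page covers these steps in detail.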

Please do not store data on the Biochemistry Computational Cluster (BCC) other than data actively needed for running jobs; BCC data may be erased at any time by administrators.


How powerful is the Biochemistry Computational Cluster?

The Biochemistry Computational Cluster (BCC) currently offers the resources listed below. The BCC is also connected to the Center for High Throughput Computing (CHTC), offering access to multiple UW Grid compute pools.

Biochemistry Computational Cluster Resources

  • Submit Node
    • 2 x Intel Xeon E5-2650v2 8-Core 2.60 GHz (3.4GHz Turbo)
    • Over 2 TB of SSD-based RAID 5 scratch disk space shared with each BCC computational node
    • 128 GB DDR3 1866 ECC/REG Memory
    • 10G Ethernet networking
  • 9 x Dedicated Computation Nodes
    • 2 x Intel Xeon E5-2680v2 10-Core 2.80 GHz (3.60GHz Turbo)
    • 64 GB DDR3 1866 ECC/REG Memory
    • 1 x NVIDIA Tesla K20M GPU
    • 1 x 240 GB SSD
    • 10G Ethernet networking
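Since each computation node includes an NVIDIA Tesla K20M, a job that needs a GPU can request one in its submit file using HTCondor's standard GPU request command. A hedged fragment, assuming the BCC advertises its GPUs to the scheduler:

```
# Submit-file fragment requesting one of the node GPUs
# (resource amounts are illustrative).
request_gpus   = 1
request_cpus   = 1
request_memory = 4GB
```

Jobs that do not request a GPU are simply matched against the nodes' CPU cores.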