CS267 Spring 2006 - Assignment 0
by Abhishek Mishra
As you can see above, my name is Abhishek Mishra, but most of my friends call me 'Abhi' (pronounced 'uh-bee', where 'bee' is like the insect). I am a third-year EECS student in option (i.e., track) 2E - Robotics/Mechatronics. I've been involved in research projects in robotics (bipedal walkers), AI (content recommendation), and a couple of other areas, but I haven't done anything involving parallel computing yet. My coursework has been mostly EE, but I have a few upper-division CS courses under my belt as well. My CS interests are HPC, graphics, and operating systems. I hope that after taking CS267 I will know enough to explore HPC further on my own, and that I will become familiar with all the layers of EECS involved in building and using a supercomputer.
Parallel Computing Application:
I chose to examine protein folding for this project. Protein folding is the process by which a "protein structure assumes its functional shape or conformation" [Wikipedia :: Protein Folding]. The shape assumed through folding directly determines the protein's function. The process is guided by physical interactions such as van der Waals forces, and simulating it properly requires enormous amounts of computing power.
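To make that computational cost concrete, here is a minimal C sketch of the kind of pairwise-interaction evaluation at the heart of a folding simulation. The Lennard-Jones potential (a standard model for van der Waals interactions) stands in for a full force field, and the coordinates and parameters below are made up; the point is the all-pairs loop, which is what makes large simulations so expensive.

```c
#include <stdio.h>
#include <math.h>

#define N 4  /* number of atoms in this toy example */

/* Lennard-Jones pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6) */
double lj_energy(double r, double eps, double sigma) {
    double s6 = pow(sigma / r, 6.0);
    return 4.0 * eps * (s6 * s6 - s6);
}

int main(void) {
    /* made-up coordinates (nm) for a handful of atoms */
    double x[N][3] = { {0,0,0}, {0.4,0,0}, {0,0.4,0}, {0.4,0.4,0.1} };
    double eps = 1.0, sigma = 0.34;  /* assumed, not real force-field values */
    double total = 0.0;
    /* all-pairs loop: this O(N^2) interaction count, evaluated at every
       timestep, is why folding simulations need so much compute */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            double dx = x[i][0]-x[j][0], dy = x[i][1]-x[j][1], dz = x[i][2]-x[j][2];
            double r = sqrt(dx*dx + dy*dy + dz*dz);
            total += lj_energy(r, eps, sigma);
        }
    printf("total pair energy: %f\n", total);
    return 0;
}
```

A real protein has tens of thousands of atoms rather than four, and the simulation must repeat this kind of evaluation for millions of femtosecond-scale timesteps, which is where the supercomputers come in.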
Folding is important for many applications. Understanding the process matters for research on synthetic polymers (engineered at the nanoscale), nanomachines (which could exploit molecular self-assembly), and pathology (misfolding is implicated in many ailments, including cancer).
Who's Working On It
The most famous parallel protein folding application is Folding@Home [stanford.edu]. This is a distributed computing project that, like SETI@home [berkeley.edu], divides the computation into independent work units, sends them out to volunteers' machines, and collects the finished results. Because protein folding does not require low-latency inter-node communication, this model has worked out well, and aggregate throughput is over 200 teraflops [stanford.edu].
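Folding@Home's real client fetches work over the Internet rather than via MPI, but the master/worker pattern it relies on is easy to sketch. In the toy MPI program below (my own illustration, with a trivial numerical integral standing in for a folding work unit), rank 0 hands out work-unit IDs, collects results in whatever order they finish, and tells workers to stop by sending -1.

```c
#include <mpi.h>
#include <stdio.h>

#define UNITS 64  /* total number of work units */

/* Toy "work unit": integrate x^2 over this unit's slice of [0,1]
   by the midpoint rule. A real unit would be an MD trajectory. */
static double do_work_unit(int unit) {
    const int steps = 100000;
    double w = 1.0 / UNITS, dx = w / steps, sum = 0.0;
    for (int i = 0; i < steps; i++) {
        double x = unit * w + (i + 0.5) * dx;
        sum += x * x * dx;
    }
    return sum;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        /* master: hand out unit IDs, collect results, send -1 to stop */
        double total = 0.0;
        int next = 0, active = 0;
        for (int w = 1; w < size; w++) {
            int msg = (next < UNITS) ? next++ : -1;
            MPI_Send(&msg, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            if (msg >= 0) active++;
        }
        while (active > 0) {
            double res;
            MPI_Status st;
            MPI_Recv(&res, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            total += res;
            active--;
            int msg = (next < UNITS) ? next++ : -1;
            MPI_Send(&msg, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
            if (msg >= 0) active++;
        }
        printf("integral of x^2 over [0,1] ~= %f (exact: 0.333...)\n", total);
    } else {
        /* worker: process unit IDs until the master sends -1 */
        for (;;) {
            int unit;
            MPI_Recv(&unit, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (unit < 0) break;
            double res = do_work_unit(unit);
            MPI_Send(&res, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Because the master receives from MPI_ANY_SOURCE, fast workers are never held up waiting for slow ones; the same property is what lets Folding@Home tolerate wildly heterogeneous volunteer machines.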
Major work on protein folding is also being done elsewhere, such as on Mare Nostrum [wikipedia.org] at the Barcelona Supercomputing Center [bsc.org.es]. This supercomputer ranks 8th [top500.org] on the Top 500 Supercomputer Sites list as of List 26 (Nov. 16th, 2005). While its Linpack Rmax of 27.91 teraflops is impressive, Mare Nostrum is interesting for other reasons as well, as we shall see below.
Mare Nostrum is commonly considered the highest-ranking commercial off-the-shelf (COTS) [wikipedia.org] supercomputer on the Top 500 list. That is, it consists of components anyone can buy from vendors; in Mare Nostrum's case, rack-mounted servers and data storage systems from IBM. This is in contrast to, for example, the Earth Simulator [wikipedia.org], a heavily customized, one-of-a-kind machine.
System Overview: Mare Nostrum consists of 27 compute racks, each with six 7U IBM BladeCenter chassis [ibm.com]. Each chassis holds 14 eServer BladeCenter JS20s [ibm.com], and each JS20 carries two PowerPC 970 processors, making for very high computational density: over 1.4 teraflops per rack. Overall power consumption sits at 600 kW, which is relatively modest given the performance of the system.
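As a quick sanity check on that density figure: assuming the 2.2 GHz PowerPC 970FX parts used in the JS20 (the clock speed is my assumption here), each processor's two floating-point units can each complete a fused multiply-add per cycle, for a peak of 4 flops/cycle, or 8.8 GFlops per processor. A rack then holds 6 chassis × 14 blades × 2 processors = 168 processors, and 168 × 8.8 GFlops comes to roughly 1.48 teraflops of peak performance per rack, matching the figure above.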
Storage: The storage system consists of IBM TotalStorage DS4100 storage servers [ibm.com], for a grand total of 236 TB of disk space. System RAM, summed over all the blades, totals 9.6 TB.
Interconnect: As at all large computing sites, the interconnect is a major limiting factor. Most top-tier systems do not use standard Ethernet, but instead use lesser-known interconnects such as Myrinet [wikipedia.org] or Quadrics [wikipedia.org]. These switching systems communicate faster partly because of reduced protocol overhead. Further speedup comes from OS-bypass: programs skip operating system calls and talk to the Myrinet hardware directly. Myrinet is made by Myricom [myricom.com], whose CLOS and SPINE switch enclosures [myricom.com] were used in Mare Nostrum.
Myricom M3-CLOS-ENCL. Image courtesy of Myricom
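To see why these interconnects matter, consider the classic ping-pong microbenchmark sketched below (a generic MPI version of my own, not Mare Nostrum's actual tooling): two ranks bounce a one-byte message back and forth and time the round trips.

```c
#include <mpi.h>
#include <stdio.h>

/* Classic ping-pong latency microbenchmark: rank 0 and rank 1
   bounce a 1-byte message back and forth.
   Run with: mpirun -np 2 ./pingpong */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char buf = 0;
    const int iters = 1000;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average round-trip time: %g microseconds\n",
               (t1 - t0) / iters * 1e6);
    MPI_Finalize();
    return 0;
}
```

On commodity gigabit Ethernet the one-way latency measured this way is typically tens of microseconds, while Myrinet-class hardware with OS-bypass brings it down to just a few; for tightly coupled codes, that difference dominates scaling.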
Operations Racks: Mare Nostrum also has a single operations rack from IBM, built from IBM p615 servers [ibm.com]. The operations rack serves diskless OS images, meaning that no individual blade needs a Linux installation on its own hard drive; in fact, the blades' internal hard drives are there mostly for future features such as checkpointing. Diskless Image Management (DIM) [bsc.org.es] on Mare Nostrum provides for true hot-swapping of blades.
According to IBM's site [ibm.com] on Mare Nostrum's applications in the life sciences, protein folding is one of the major workloads run on the machine. One very interesting project comes from the University of Barcelona's Molecular Modelling and Bioinformatics Unit [ub.es]. It is called MODEL, which stands for Molecular Dynamics Extended Library, and the database is designed to contribute to the Protein Data Bank (PDB) [wikipedia.org]. The results from Mare Nostrum can be viewed at the MODEL site [ub.es]. The result set is very complete: for each protein it includes the standard PDB code, the header (classification), graphs and raw data for all significant properties, and even videos. Unfortunately, none of the publications provide information on the application's machine performance.
Animation of PDB Code 153L, HYDROLASE (O-GLYCOSYL). Animation courtesy of University of Barcelona MMB
Portion of Data on PDB Code 153L, HYDROLASE (O-GLYCOSYL). Data courtesy of University of Barcelona MMB
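Since MODEL's records are keyed by PDB code, it is worth seeing how simple the underlying PDB format is: a plain-text file of fixed-column records, where each ATOM line stores the atom's x, y, z coordinates (in Angstroms) in columns 31-54. The short C program below is my own sketch, unrelated to MODEL's code; it reads the ATOM records of a file such as 153L's and prints the atom count and centroid.

```c
#include <stdio.h>
#include <string.h>

/* Minimal reader for ATOM records in a PDB file. The format is
   fixed-column: x, y, z coordinates (Angstroms) occupy columns
   31-38, 39-46, and 47-54 of each ATOM line. */
int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.pdb\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    char line[128];
    int atoms = 0;
    double cx = 0.0, cy = 0.0, cz = 0.0;
    while (fgets(line, sizeof line, f)) {
        double x, y, z;
        if (strncmp(line, "ATOM  ", 6) != 0 || strlen(line) < 54)
            continue;
        /* %8lf limits each conversion to its 8-column field */
        if (sscanf(line + 30, "%8lf%8lf%8lf", &x, &y, &z) != 3)
            continue;
        cx += x; cy += y; cz += z;
        atoms++;
    }
    fclose(f);
    if (atoms > 0)
        printf("%d atoms, centroid = (%.3f, %.3f, %.3f)\n",
               atoms, cx / atoms, cy / atoms, cz / atoms);
    return 0;
}
```

The same fixed-column simplicity is what lets projects like MODEL attach simulation trajectories, graphs, and videos to each structure: everything hangs off the PDB code.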
Mare Nostrum is a great success for IBM, the University of Barcelona, and the few lucky projects with access to the resource. As the MODEL database shows, the machine has already contributed a lot to science, and its administrators expect Mare Nostrum to provide top-tier supercomputing power for at least the next five years. Mare Nostrum is also an indicator of the growing ubiquity of supercomputers: it is composed of COTS components and was built in under two months. As the physical barriers to Moore's Law draw near, systems like this will become very popular in the scientific community.