Steele (supercomputer)

Steele is a supercomputer installed at Purdue University on May 5, 2008. The high-performance computing cluster is operated by Information Technology at Purdue (ITaP), the university's central information technology organization. ITaP also operates clusters named Coates, built in 2009; Rossmann, built in 2010; and Hansen and Carter, both built in 2011. When built, Steele was the largest campus supercomputer in the Big Ten outside a national center. It ranked 104th on the November 2008 TOP500 Supercomputer Sites list.

Hardware

Steele consisted of 893 64-bit, 8-core Dell PowerEdge 1950 systems and nine 64-bit, 8-core Dell PowerEdge 2950 systems, with various combinations of 16 to 32 gigabytes of RAM, 160 gigabytes to 2 terabytes of disk, and Gigabit Ethernet and SDR InfiniBand connections to each node. The cluster had a theoretical peak performance of more than 60 teraflops. Steele and its 7,216 cores replaced Purdue's Lear cluster supercomputer, which had 1,024 cores but was substantially slower. Steele was networked primarily through a Foundry Networks BigIron RX-16 switch with a Tyco MRJ-21 wiring system delivering more than 900 Gigabit Ethernet connections and eight 10 Gigabit Ethernet uplinks.
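
The quoted peak figure is consistent with a rough estimate from the core count alone. Assuming quad-core Xeon processors at about 2.33 GHz performing four double-precision floating-point operations per core per cycle (the processor model and clock rate are not stated in this article), the theoretical peak would be roughly

$$7{,}216\ \text{cores} \times 2.33\ \text{GHz} \times 4\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 6.7 \times 10^{13}\ \text{FLOPS} \approx 67\ \text{teraflops}.$$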

Software

Steele nodes ran Red Hat Enterprise Linux starting with release 4.0[1] and used Portable Batch System Professional 10.4.6 (PBSPro 10.4.6) for resource and job management. The cluster also had compilers and scientific programming libraries installed.
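
As an illustration of how work reaches a PBS-managed cluster of this kind, the sketch below builds a minimal job script and hands it to qsub, the standard PBS submission command. It assumes a generic PBSPro environment; the job name, resource request, and walltime are placeholders and are not drawn from Steele's actual queue configuration.

import subprocess
import tempfile

# A minimal PBS job script. The job name, resource request (one 8-core
# node) and walltime are illustrative placeholders, not Steele's settings.
job_script = """#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
echo "Hello from $(hostname)"
"""

# Write the script to a temporary file and submit it with qsub,
# the standard PBS command for queueing a job.
with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

result = subprocess.run(["qsub", script_path], capture_output=True, text=True, check=True)
print("Submitted job", result.stdout.strip())

On a working PBSPro system the printed job identifier can then be monitored with the qstat command.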

Construction

The first 812 nodes of Steele were installed in four hours on May 5, 2008,[2] by a team of 200 Purdue computer technicians and volunteers, including volunteers from in-state athletic rival Indiana University. The staff had made a video titled "Installation Day" as a parody of the film Independence Day.[3] The cluster was running 1,400 science and engineering jobs by lunchtime.[4][5] In 2010, Steele was moved to an HP Performance Optimized Datacenter, a self-contained, modular, shipping-container-style unit installed on campus, to make room for new clusters in Purdue's main research computing data center.[6][7][8][9]

Funding

The Steele supercomputer and Purdue's other clusters were part of the Purdue Community Cluster Program, a partnership between ITaP and Purdue faculty. In Purdue's program, a "community" cluster is funded by hardware money from grants, faculty startup packages, institutional funds and other sources. ITaP's Rosen Center for Advanced Computing administers the community clusters and provides user support. Each faculty partner always has ready access to the capacity he or she purchases and potentially to more computing power when the nodes of other partners are idle. Unused, or opportunistic, cycles from Steele are made available to the National Science Foundation's TeraGrid (now the Extreme Science and Engineering Discovery Environment) system and the Open Science Grid using Condor software. A portion of Steele also was dedicated directly to TeraGrid use.

Users

Steele users came from fields such as aeronautics and astronautics, agriculture, biology, chemistry, computer and information technology, earth and atmospheric sciences, mathematics, pharmacology, statistics, and electrical, materials and mechanical engineering. The cluster was used to design new drugs and materials, to model weather patterns and the effects of global warming, and to engineer future aircraft and nanoelectronics. Steele also served Purdue's Tier 2 computing center for the Compact Muon Solenoid, one of the particle physics experiments conducted at the Large Hadron Collider.

DiaGrid

Unused, or opportunistic, cycles from Steele were made available to the TeraGrid and the Open Science Grid using Condor software. Steele was part of Purdue's distributed computing Condor flock, and the center of DiaGrid, a nearly 43,000-processor Condor-powered distributed computing network for research involving Purdue and partners at nine other campuses.
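
For context, a Condor (now HTCondor) job is described in a small submit file and queued with condor_submit; idle machines in the flock can then pick the work up opportunistically. The sketch below is a generic example assuming a standard Condor installation; the executable and file names are hypothetical and are not taken from DiaGrid's actual configuration.

import subprocess

# A minimal Condor submit description. "analyze.sh" and the input/output
# file names are hypothetical placeholders.
submit_description = """universe   = vanilla
executable = analyze.sh
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
queue
"""

with open("job.sub", "w") as f:
    f.write(submit_description)

# condor_submit is the standard Condor command for queueing a job.
subprocess.run(["condor_submit", "job.sub"], check=True)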

Naming

The Steele cluster is named for John M. Steele, Purdue associate professor emeritus of computer science, who was involved with research computing at Purdue almost from its inception. He joined the Purdue staff in 1963 at the Computer Sciences Center associated with the then-new Computer Science Department. He served as the director of the Purdue University Computing Center, the high-performance computing unit at Purdue prior to the Rosen Center for Advanced Computing, from 1988 to 2001 before retiring in 2003. His research interests have been in the areas of computer data communications and computer circuits and systems, including research on an early mobile wireless Internet system.[10]

References

  1. ^ Charles Babcock (May 12, 2008). "Purdue IT Staff Builds Supercomputer In A Half Day". Information Week. Retrieved May 27, 2013.
  2. ^ "Odd News: Purdue's big computer assembled fast". United Press International. May 5, 2008. Archived from the original on May 24, 2011. Retrieved May 27, 2013.
  3. ^ "Purdue to install Big Ten's biggest campus computer in just a day". News release. Purdue University. May 1, 2008. Retrieved May 27, 2013.
  4. ^ Nicolas Mokhoff (May 5, 2008). "What's for lunch? Purdue supercomputer ready by noon". EE Times. Retrieved May 24, 2011.
  5. ^ Meranda Watling (May 5, 2008). "Purdue supercomputer Big Ten's biggest". Indianapolis Star. Retrieved May 27, 2013.
  6. ^ Timothy Prickett Morgan (July 28, 2010). "Purdue puts HPC cluster in HP PODs: Boilermakers of a different kind". The Register. Retrieved May 27, 2013.
  7. ^ "Purdue University Increases Research Capabilities with HP Performance-optimized Data Center". News release. HP corporation. July 28, 2010. Retrieved May 27, 2013.
  8. ^ Dian Schaffhauser (August 3, 2010). "Purdue U Goes Modular with HP Data Center". Campus Technology. Retrieved May 27, 2013.
  9. ^ "Purdue moves supercomputer to cutting-edge portable data center -with the computer running". News release. Purdue University. August 31, 2010. Retrieved May 27, 2013.
  10. ^ "John Steele". Web site bio. Purdue University. Retrieved May 27, 2013.
