Energy Department to build two new supercomputers, further exascale computing research

Titan, Oak Ridge National Laboratory’s supercomputer, is going to have a more-powerful sibling in a few years. (Credit: ORNL)

The Energy Department announced two nine-figure contract deals Friday aimed at furthering the department’s use of supercomputing.

Secretary of Energy Ernest Moniz announced a $325 million contract under CORAL, the department’s joint Collaboration of Oak Ridge, Argonne and Lawrence Livermore national laboratories, to build two new supercomputers for use at its national labs. One computer, Summit, will be housed at Oak Ridge National Laboratory in Oak Ridge, Tennessee, with the other, Sierra, housed at Lawrence Livermore National Laboratory in Livermore, California.

Summit is expected to provide five times the performance of Oak Ridge’s current Titan supercomputer, which now operates at 17.58 petaFLOPS. (One petaFLOPS equals one quadrillion floating-point operations per second.) Sierra will be seven times more powerful than Livermore’s Sequoia supercomputer, which operates at 16.32 petaFLOPS. According to IBM, which will help build the new systems, they will be able to move data at 17 petabytes per second, the equivalent of moving 100 billion Facebook photos every second.
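
As a quick sanity check, the sketch below multiplies the article’s baseline figures by the stated speedups, and works out what the 17-petabyte-per-second comparison implies about average photo size. The totals are back-of-the-envelope estimates derived here, not announced specifications.

```python
# Back-of-the-envelope check on the performance figures above.
# Baselines and multipliers come from the article; the totals are
# derived estimates, not official DOE specifications.

titan_pflops = 17.58    # Oak Ridge's Titan
sequoia_pflops = 16.32  # Livermore's Sequoia

summit_est = 5 * titan_pflops    # "five times the performance" of Titan
sierra_est = 7 * sequoia_pflops  # "seven times more powerful" than Sequoia

print(f"Summit (estimated): {summit_est:.1f} petaFLOPS")  # ~87.9
print(f"Sierra (estimated): {sierra_est:.1f} petaFLOPS")  # ~114.2

# IBM's 17 PB/s figure set against "100 billion Facebook photos":
# the comparison implies an average photo size of roughly 170 KB.
implied_photo_kb = 17e15 / 100e9 / 1e3
print(f"Implied photo size: {implied_photo_kb:.0f} KB")
```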

“[Friday’s] announcement marks a shift from traditional supercomputing approaches that are no longer viable as data grows at [an] enormous rate,” Tom Rosamilia, senior vice president of IBM Systems and Technology Group, said in a statement. “IBM’s Data Centric approach is a new paradigm in computing, marking the future of open computing platforms and capable of addressing the growing rates of data.”

The department also awarded $100 million under its Fast Forward 2 program, which aims to advance extreme-scale and exascale computing research and development. The program is conducted jointly by the Energy Department’s Office of Science and the National Nuclear Security Administration; portions of the contract went to AMD, supercomputer manufacturer Cray Inc., IBM, Intel Corp. and NVIDIA Corp.

AMD announced in conjunction with the Energy Department that it received $32 million of the Fast Forward 2 funding to collaborate on high-level designs of extreme-scale systems. The company’s work will focus on node architecture, memory technology and ways to make exascale computing more energy efficient.

“The biggest hard rock from a technology perspective is getting the energy efficiency where [the government] wants it,” Alan Lee, AMD’s corporate vice president for research and development, told FedScoop.

Lee said the government would like to have exascale computing (computers performing 1 quintillion floating-point operations per second) in use sometime between 2020 and 2022, but it wants it to fit into what Lee called “a 20 megawatt envelope.” That benchmark is daunting: 20 megawatts is only about two and a half times the power the department’s current supercomputers draw (Titan uses 8.2 megawatts of electricity, Sequoia 7.9), yet an exascale machine would have to deliver more than 50 times their performance within it. For scale, Apple built a 100-acre solar farm to generate the 20 megawatts needed to power its iCloud data center in North Carolina.
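
The gap is easier to see in FLOPS per watt. Below is a minimal sketch using the power and performance figures cited above; the efficiency numbers and the improvement factor are derived here for illustration, not quoted from Lee.

```python
# How far today's machines are from a 20 MW exascale envelope.
# Power and performance inputs come from the article; the
# FLOPS-per-watt figures are derived here for illustration.

def gflops_per_watt(pflops: float, megawatts: float) -> float:
    """Energy efficiency in gigaFLOPS per watt."""
    return (pflops * 1e15) / (megawatts * 1e6) / 1e9

titan = gflops_per_watt(17.58, 8.2)    # ~2.1 GFLOPS/W
sequoia = gflops_per_watt(16.32, 7.9)  # ~2.1 GFLOPS/W
target = gflops_per_watt(1000.0, 20)   # 1 exaFLOPS in 20 MW: 50 GFLOPS/W

print(f"Titan:           {titan:.1f} GFLOPS/W")
print(f"Sequoia:         {sequoia:.1f} GFLOPS/W")
print(f"Exascale target: {target:.1f} GFLOPS/W")
print(f"Needed: roughly {target / titan:.0f}x Titan's efficiency")
```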

AMD believes the power management research done over the course of Fast Forward 2 will “change the fundamental ways that we do large scale servers,” Lee said, and eventually trickle down from supercomputing into desktops and mobile devices.

“High-performance computing is an essential component of the science and technology portfolio required to maintain U.S. competitiveness and ensure our economic and national security,” Moniz said in a release. “DOE and its National Labs have always been at the forefront of HPC and we expect that critical supercomputing investments like CORAL and FastForward 2 will again lead to transformational advancements in basic science, national defense, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data.”

Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.
