Archive for the ‘Petascale Day’ Category

THE ILLINOIS SUPERCOMPUTER NEIGHBORHOOD GROWS

On Thursday, the National Center for Supercomputing Applications at the University of Illinois celebrated the full launch of Blue Waters, its new one-petaflop supercomputer. As part of the ceremony, Governor Pat Quinn declared “Blue Waters Supercomputer Day,” and Senator Dick Durbin saluted the machine and other supercomputers as “the gateway to next-generation research.” The start of 24/7 research was also a proud day for Computation Institute scientists such as Michael Wilde and Daniel Katz, who helped get Blue Waters up and running. Wilde spoke about the supercomputer at the CI’s Petascale Day event last October.

Meanwhile, a couple hundred miles north of Blue Waters, Argonne’s new 10-petaflop supercomputer Mira nears the start of its own full production period later this year. This week, the laboratory released a new time-lapse video of the machine’s construction, which you can watch below. But science isn’t waiting for Mira to reach full strength, as demonstrated by this new project on the combustion and detonation of hydrogen-oxygen mixtures — a potential alternative source of fuel.

THE GRAND MOTHER OF CLOUD

In recent years, cloud computing has crossed over from inside-baseball IT chatter to the general public. As CI fellow Rob Gardner recently charted, web searches for the term began climbing in 2009 and still vastly outpace searches for similar buzzwordy topics such as “big data” and “virtualization.” Now that consumers are comfortable storing files and running programs in the cloud, it’s time for the pioneers of that technology to take their victory laps. One recent round-up of cloud computing mavericks at Forbes tagged CI fellow Kate Keahey as “the grand mother of cloud,” recognizing her early work on infrastructure-as-a-service (IaaS) platforms. Her current project, Nimbus, is dedicated to providing cloud-based infrastructure for scientific laboratories.

OTHER NEWS IN COMPUTATIONAL SCIENCE

A lot of what we know about science may be wrong, but finding those flaws could lead to better discovery in the future. That’s how this article on Txchnologist framed the new Metaknowledge Network led by CI fellow James Evans. “We’re building on decades of this deep work on science and trying to connect it to this computational moment…to get a quantitative understanding of why we have the knowledge we have,” Evans told reporter Rebecca Ruiz.

The open release of data by the city of Chicago hasn’t just improved our understanding of how the city works, but also how we see it. These beautiful visualizations created with the Edifice software (one of the projects at the Open City collaborative) make the neighborhoods of Chicago look like a genomic SNP chip…or an elaborate Lite Brite project.

Many Chicago homes would benefit from energy-efficiency improvements that could shave a large portion off their monthly utility bills. But many residents are unaware of the option or unwilling to bear the up-front expense of retrofitting a home to reduce energy usage. According to WBEZ, two University of Chicago students have founded a new startup called Effortless Energy that uses data-mining techniques to find these opportunities for conservation and savings and help residents act on them.

The “traveling salesman problem” of finding the most efficient route between 20 different cities has long frustrated mathematicians. So English scientists created “programmable goo” to find the shortest route, in much the same fashion as earlier studies that used slime mold as a navigator. You can read the paper, “Computation of the Traveling Salesman Problem by a Shrinking Blob,” on arXiv.
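
For a sense of why even 20 cities is hard, here is a minimal brute-force sketch in Python; the city coordinates are invented purely for illustration. Enumerating every closed tour of n cities means checking (n-1)!/2 routes, which at 20 cities is roughly 6 x 10^16, far too many to try one by one. That combinatorial explosion is why researchers reach for heuristics, and now for shrinking blobs.

# Minimal brute-force traveling salesman solver, for illustration only.
# Feasible for a handful of cities; at 20 cities there are (20-1)!/2,
# roughly 6e16, distinct tours, which is why exact enumeration fails.
from itertools import permutations
from math import dist

# Hypothetical city coordinates, just for the example.
cities = [(0, 0), (2, 5), (5, 2), (6, 6), (8, 3)]

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix the first city and enumerate orderings of the rest.
best = min((list((0,) + p) for p in permutations(range(1, len(cities)))),
           key=tour_length)
print("Shortest tour:", best, "length:", round(tour_length(best), 2))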

Read Full Post »

[Image: simulated exploding supernova, courtesy of the CI-affiliated Flash Center for Computational Science]

Last October, we helped celebrate Petascale Day with a panel on the scientific potential of new supercomputers capable of running more than a thousand trillion floating point operations per second. But the ever-restless high performance computing field is already focused on the next landmark in supercomputing speed, the exascale, more than fifty times faster than the current record holder (Titan, at Oak Ridge National Laboratory). As with the speed of personal computers, supercomputers have been gaining speed and power at a steady rate for decades. But a new article in Science this week suggested that the path to the exascale may not be as smooth as the field has come to expect.
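
A quick back-of-the-envelope calculation shows where “more than fifty times faster” comes from. The sketch below assumes Titan sustains roughly 17.6 petaflops and takes “exascale” to mean one exaflop, or 10^18 floating point operations per second; both figures are approximations rather than numbers from the Science article.

# Back-of-the-envelope comparison of exascale speed with Titan.
# Assumed figures: Titan sustains roughly 17.6 petaflops, and
# "exascale" means one exaflop (10**18 operations per second).
PETAFLOP = 1e15
EXAFLOP = 1e18

titan_flops = 17.6 * PETAFLOP   # approximate speed of the current record holder
speedup = EXAFLOP / titan_flops

print(f"An exascale machine would be ~{speedup:.0f}x faster than Titan")  # ~57x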

The article, illustrated with an image of a simulated exploding supernova (seen above) by the CI-affiliated Flash Center for Computational Science, details the various barriers facing the transition from petascale to exascale in the United States and abroad. Government funding agencies have yet to throw their full weight behind an exascale development program. Private computer companies are turning their attention away from high performance computing in favor of commercial chips and Big Data. And many experts agree that supercomputers must be made far more energy-efficient before leveling up to the exascale — under current technology, an exascale computer would use enough electricity to power half a million homes.
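
The half-a-million-homes figure is easy to sanity-check with a similar sketch. The assumptions below, an efficiency of about 2 gigaflops per watt (roughly Titan’s) and an average household draw of about 1 kilowatt, are ballpark numbers of our own rather than figures from the article.

# Rough power estimate for an exaflop machine built with current technology.
# Assumed figures: ~2 gigaflops per watt of efficiency, comparable to Titan,
# and ~1 kilowatt of average electrical draw per household.
EXAFLOP = 1e18            # floating point operations per second
FLOPS_PER_WATT = 2e9      # ~2 gigaflops per watt
WATTS_PER_HOME = 1000     # rough average household load

machine_watts = EXAFLOP / FLOPS_PER_WATT   # ~500 megawatts
homes = machine_watts / WATTS_PER_HOME

print(f"Estimated draw: {machine_watts / 1e6:.0f} MW,")
print(f"enough electricity for about {homes:,.0f} homes")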


Read Full Post »

Yesterday, we described the awesome power of the new petascale supercomputers, which are capable of performing more than one quadrillion calculations per second. But building these machines is just the beginning — it’s how they’re applied to the great scientific problems of today and the future that will define their legacy. Immense computational power is best used for an immense challenge, such as complex scientific simulations or enormous datasets that would cripple an everyday laptop. Traditionally, astronomy and physics have provided the majority of this kind of work, flush as they are with data collected by telescopes and particle colliders. But as the other three speakers at our Petascale Day event described it, the disciplines of medicine, chemistry, and even business are entering a data-driven phase where they too can take advantage of petascale power.


Read Full Post »