Archive for the ‘Mira’ Category


CHINA’S LATEST SUPERCOMPUTER VICTORY

China’s Milky Way 2 supercomputer was recently declared the fastest supercomputer in the world by industry scorekeeper Top500, the latest move in the increasingly international race for high-performance computing supremacy. Late last month, CI Senior Fellow Rick Stevens appeared on Science Friday, alongside Top500 editor Horst Simon, to talk about why that competition matters, and what the global push for faster computation will do for medicine, engineering and other sciences.

“These top supercomputers are like time machines,” Stevens said. “They give us access to a capability that won’t be broadly available for five to ten years. So whoever has the time machine is able to do experiments, able to see into the future deeper and more clearly than those that don’t have such machines.”

The same time machine metaphor was also picked up by the University of Chicago’s profile of Mira, our local Top500 competitor, which was bumped down to #5 by the Milky Way 2’s top ranking. But there’s no shame in fifth-best, when fifth-best can run 10 quadrillion calculations per second — the equivalent computing power of 58 million iPads. CI Senior Fellow Gregory Voth is quoted on how access to such a world-class resource helps both today’s and tomorrow’s scientists.

“Having access to a computing resource like Mira provides excellent opportunities and experience for educating up-and-coming young scientists as it forces them to think about how to properly utilize such a grand resource very early in their careers,” Voth says. “This gives them a unique perspective on how to solve challenging scientific problems and puts them in an excellent position to utilize computing hardware being imagined now for tomorrow.”

WHY DATA SCIENCE MUST BE OPEN SCIENCE AND MORE FROM DSSG

The Data Science for Social Good fellowship has reached the halfway point, and the website is starting to fill up with interesting content about the projects. Some fellows have already produced tools for the community to use, such as Paul Meinshausen’s interactive tree map of the City of Chicago’s Data Portal. Instead of a cold, no-frills list of the datasets available for public download, Meinshausen’s map uses color and shape to guide users quickly to the data they are seeking and to make rapid comparisons of dataset sizes. The visualization was popular enough that programmers in Boston and San Francisco quickly applied his code to their own cities’ data portals, while another developer built a common map for every city that uses Socrata software to share its data.
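For readers curious how such a view can be put together, below is a minimal sketch of the general idea in Python; it is not Meinshausen’s actual code. It assumes the portal publishes a standard machine-readable data.json catalog (as Socrata-hosted sites such as data.cityofchicago.org generally do) and uses the third-party squarify library, and it sizes tiles by the number of datasets per theme rather than by dataset size, a simplification of the original visualization.

# A minimal treemap sketch of the general idea, not Meinshausen's code.
# Assumes the portal publishes a Project Open Data "data.json" catalog.
import requests
import matplotlib.pyplot as plt
import squarify  # third-party: pip install squarify
from collections import Counter

CATALOG_URL = "https://data.cityofchicago.org/data.json"  # assumed endpoint

catalog = requests.get(CATALOG_URL, timeout=30).json()
datasets = catalog.get("dataset", [])

# Group datasets by their first listed theme (field names vary by portal,
# so fall back gracefully) and size each tile by the dataset count.
counts = Counter()
for d in datasets:
    themes = d.get("theme") or ["Uncategorized"]
    counts[themes[0]] += 1

labels = [f"{name} ({n})" for name, n in counts.most_common()]
sizes = [n for _, n in counts.most_common()]

plt.figure(figsize=(12, 8))
squarify.plot(sizes=sizes, label=labels, alpha=0.8)
plt.axis("off")
plt.title("City of Chicago Data Portal datasets, grouped by theme")
plt.show()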



Even the world’s fastest supercomputers need some time to prep themselves to join society. After eight months of construction and nearly a year of early research projects testing out its capabilities, the 10-petaflop IBM Blue Gene/Q system finally made its official public bow this Monday in a dedication ceremony at the suburban Argonne campus. At the event, Illinois Senator Dick Durbin said that the current fifth-fastest supercomputer in the world will allow Argonne and the United States as a whole to continue pushing the boundaries of science and reaping the benefits of research.

“Mira ensures the lab remains a linchpin of scientific research, enabling researchers to tackle extremely complex challenges ranging from improving combustion efficiency in car engines to modeling the progression of deadly diseases in the human body,” Durbin said. “High-performance computing is crucial to U.S. economic growth and competitiveness, saving time, money and energy, boosting our national security and strengthening our economy. If the United States is to remain a leader in the 21st century, we need to continue investing in the science and innovation that will address our growing energy and environmental demands while building the industries of the future.”

The types of projects that will run on the now fully-active Mira demonstrate how the applications of high-performance computing are broader than ever. Beyond more traditional uses in cosmology and physics — such as a simulation of the universe’s expansion or climate modeling — Mira’s 786,000 processors will also be put to work on models of cellular and viral proteins and testing designs for energy-efficient engineering.

“As supercomputers continue to improve, so do the results. Faster and more sophisticated computers mean better simulations and more accurate predictions,” said CI Senior Fellow Rick Stevens. “Mira will help us tackle increasingly complex problems, achieve faster times to solutions and create more robust models of everything from car engines to the human body.”

For more information about Mira and the dedication ceremony, visit the story from the Argonne Newsroom or watch the video below.


Newer, faster supercomputers have allowed scientists to create detailed models of blood flow that help doctors understand what happens at the molecular level. (Photo from Argonne)

This week, some 25 cities around the world are hosting events online and offline as part of Big Data Week, described by its organizers as a “global community and festival of data.” The Chicago portion of the event features several people from the Computation Institute, including two panels on Thursday: “Data Complexity in the Sciences: The Computation Institute” featuring Ian Foster, Charlie Catlett, Rayid Ghani and Bob George, and “Science Session with the Open Cloud Consortium” featuring Robert Grossman and his collaborators. Both events are free, take place in downtown Chicago, and you can register at the above links.

But the CI’s participation in Big Data Week started with two webcast presentations on Tuesday and Wednesday that demonstrated the broad scope of the topic. The biggest data of all is being produced by simulations on the world’s fastest supercomputers, including Argonne’s Mira, the fourth-fastest machine in the world. Mira boasts the ability to perform 10 quadrillion floating-point operations per second, but how do you make sense of the terabytes of data such powerful computation produces on a daily basis?

In his talk “Big Vis,” Joseph Insley of Argonne and the CI explained how he and his team have developed equally impressive visualization technology to keep pace with Mira’s data firehose. Tukey, a 96-node visualization cluster, is Mira’s sidekick, sharing the same software and file systems with its big sibling so it can more easily take in data and transform it into images. Insley demonstrated how visualization was instrumental in two major simulations conducted on Mira: one studying arterial blood flow and aneurysm rupture in the brain, and another on nothing less than the evolution of the entire universe.


THE ILLINOIS SUPERCOMPUTER NEIGHBORHOOD GROWS

On Thursday, the National Center for Supercomputing Applications at the University of Illinois celebrated the full launch of Blue Waters, their new one-petaflop supercomputer. As part of the ceremony, Governor Pat Quinn declared “Blue Waters Supercomputer Day,” and Senator Dick Durbin saluted the machine and other supercomputers as “the gateway to next-generation research.” The start of 24/7 research was also a proud day for Computation Institute scientists such as Michael Wilde and Daniel Katz, who were involved in getting Blue Waters up and running. Wilde spoke about the supercomputer at the CI’s Petascale Day event last October.

Meanwhile, a couple hundred miles north of Blue Waters, Argonne’s new 10-petaflop supercomputer Mira nears the start of its own full production period later this year. This week, the laboratory released a new timelapse video of the machine’s construction, which you can watch below. But science isn’t waiting for Mira to reach full strength, as demonstrated by this new project on the combustion and detonation of hydrogen-oxygen mixtures — a potential alternative source of fuel.

THE GRAND MOTHER OF CLOUD

In recent years, cloud computing has crossed over from inside-baseball IT chatter to the general public. As CI fellow Rob Gardner recently charted, web searches for the term began climbing in 2009 and still vastly outpace searches for similar buzzwordy topics such as “big data” and “virtualization.” Now that consumers are comfortable with storing files and running programs in the cloud, it’s time for the pioneers of that technology to take their victory laps. One recent round-up of cloud computing mavericks at Forbes tagged CI fellow Kate Keahey as “the grand mother of cloud,” recognizing her early work on infrastructure-as-a-service (IaaS) platforms. Her current project, Nimbus, is dedicated to providing cloud-based infrastructure for scientific laboratories.
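A comparison like Gardner’s is easy to approximate today. The sketch below pulls relative search interest for the three terms with the unofficial pytrends package; the package, the time window and the exact term list are assumptions chosen for illustration, not the tool or data Gardner actually used.

# A rough way to reproduce a search-interest comparison like Gardner's chart.
# pytrends (pip install pytrends) is an unofficial Google Trends client and
# is an assumption here, not necessarily the tool he used.
import matplotlib.pyplot as plt
from pytrends.request import TrendReq

terms = ["cloud computing", "big data", "virtualization"]

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(terms, timeframe="2004-01-01 2013-04-30")
interest = pytrends.interest_over_time()  # weekly interest, scaled 0 to 100

interest[terms].plot(figsize=(10, 5))
plt.ylabel("Relative search interest (Google Trends)")
plt.title("Cloud computing vs. similar buzzwords")
plt.show()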

OTHER NEWS IN COMPUTATIONAL SCIENCE

A lot of what we know about science may be wrong, but finding those flaws could lead to better discovery in the future. That’s how this article on Txchnologist framed the new Metaknowledge Network led by CI fellow James Evans. “We’re building on decades of this deep work on science and trying to connect it to this computational moment…to get a quantitative understanding of why we have the knowledge we have,” Evans told reporter Rebecca Ruiz.

The open release of data by the city of Chicago hasn’t just improved our understanding of how the city works, but also how we see it. These beautiful visualizations created with the Edifice software (one of the projects at the Open City collaborative) make the neighborhoods of Chicago look like a genomic SNP chip…or an elaborate Lite Brite project.

Many Chicago homes would benefit from improvements that boost energy efficiency, saving residents a sizable portion of their monthly utility bills. But many residents are unaware of the option or unwilling to bear the up-front expenses needed to retrofit homes to reduce energy usage. According to WBEZ, two University of Chicago students have founded a new startup called Effortless Energy that uses data-mining techniques to locate these opportunities for conservation and savings and to help homeowners act on them.

The “traveling salesman problem” of finding the most efficient route through 20 different cities has long frustrated mathematicians. So English scientists created “programmable goo” to find the shortest route, in similar fashion to studies that have used slime molds as navigators. You can read the paper, “Computation of the Traveling Salesman Problem by a Shrinking Blob,” at arXiv.
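To see why even 20 cities is daunting, here is a short, textbook-style Python sketch (not the blob-based method from the paper): it brute-forces a small random instance, compares the answer with a simple nearest-neighbor heuristic, and prints how many distinct tours a 20-city instance would have.

# Why 20 cities is hard: the number of tours grows factorially with city count.
# Brute force on a small instance vs. a greedy nearest-neighbor heuristic.
import math
import random
from itertools import permutations

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact answer by brute force: 7! = 5,040 orderings is trivial for 8 cities,
# but a 20-city instance has 19!/2 (~6e16) distinct tours, which is hopeless.
best_rest = min(permutations(range(1, len(cities))),
                key=lambda rest: tour_length((0,) + rest))
print("brute force :", tour_length((0,) + best_rest))

# Greedy nearest-neighbor heuristic: fast, but usually not optimal.
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    unvisited.remove(nxt)
print("nearest nbr :", tour_length(tour))

print("distinct tours for 20 cities:", math.factorial(19) // 2)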



A model of Charles Babbage’s Analytical Engine (via Wikimedia Commons)

The general public tends to think of supercomputers as the big brothers of their home computers: larger, faster and more powerful versions of more familiar everyday devices. But in his talk last week for the Argonne National Laboratory’s OutLoud series, CI senior fellow Peter Beckman urged the crowd to think of supercomputers more imaginatively as the real-life version of a common sci-fi device: the time machine.

A modern laptop is faster than the state-of-the-art supercomputer Beckman used at Los Alamos National Laboratory in 1995, he said. That same year, a supercomputer with the computing speed of today’s iPad would have ranked on the Top500 list of the fastest computers in the world. Beyond raw speed, the programming strategies and hardware architectures developed on the room-sized supercomputers of the last 60 years have gradually trickled down to the consumer, as with the multi-core processors and parallel operations that can be found in new laptops.

As such, “supercomputing is a time machine,” said Beckman, the director of exascale technology and computing at Argonne. “What we build here in the lab, 20 years later ends up in your phone.”



Last week, we announced the newest CI research center, the Urban Center for Computation and Data (UrbanCCD). Led by CI senior fellow Charlie Catlett, the center will bring the latest computational methods to bear on the question of how to intelligently design and manage large and rapidly growing cities around the world. With more cities, including our home of Chicago, releasing open datasets, the UrbanCCD hopes to bring advanced analytics to these new data sources and use them to construct complex models that can simulate the effects of new policies and interventions upon a city’s residents, services and environment.

Since the announcement, news outlets including Crain’s Chicago Business, RedEye and Patch have written articles about UrbanCCD and its mission. The center was also highlighted by UrbanCCD collaborators at the School of the Art Institute of Chicago and Argonne National Laboratory, and endorsed by US Rep. Daniel Lipinski.

For more examples of what cities are doing with open data releases and the applications built upon those data sets, see The Best Open Data Releases of 2012 as decided by Atlantic Cities or WBEZ’s breakdown of the potential for Chicago’s new open data policy.


A panorama of Mira (courtesy Argonne National Laboratory)

In the computational world, where speed is king, fifteen zeros is the current frontier. The new wave of petascale supercomputers coming online around the world in the coming months can each perform at least one quadrillion, or 1,000,000,000,000,000, floating-point calculations per second. In exponential notation, a quadrillion is shortened to 1 x 10^15, so clever computer scientists declared October 15th (get it?) to be Petascale Day, a celebration of this new computational land speed record and its ability to transform science.
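The arithmetic behind the name fits in a few lines of Python; the figures below simply restate the numbers above, with Mira’s roughly 10-petaflop rate used for the last line.

# The arithmetic behind "Petascale Day": one petaflop is 10**15
# floating-point operations per second, and 10/15 echoes 1 x 10^15.
peta = 10 ** 15
print(f"one quadrillion = {peta:,}")                        # 1,000,000,000,000,000
print(f"scientific notation: 1 x 10^{len(str(peta)) - 1}")  # 1 x 10^15

# Mira runs at roughly 10 petaflops, i.e. about 10**16 operations per second:
mira_flops = 10 * peta
seconds_per_day = 24 * 60 * 60
print(f"operations in one day on Mira: ~{mira_flops * seconds_per_day:.2e}")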

Here at the Computation Institute, we observed the day by hosting a lunch event with the University of Chicago Research Computing Center, putting together a roster of six talks about these powerful machines and the new types of research they will enable. The speakers, who hailed from Argonne National Laboratory, the University of Chicago, and the Computation Institute, talked about the exciting potential of the petascale, as well as the technical challenges scientists face to get the most out of the latest supercomputers.
