
Archive for the ‘Argonne’ Category


CHINA’S LATEST SUPERCOMPUTER VICTORY

China’s Milky Way 2 supercomputer was recently declared the fastest supercomputer in the world by industry scorekeeper Top500, the latest move in the increasingly international race for high-performance computing supremacy. Late last month, CI Senior Fellow Rick Stevens appeared on Science Friday, alongside Top500 editor Horst Simon, to talk about why that competition matters, and what the global push for faster computation will do for medicine, engineering and other sciences.

“These top supercomputers are like time machines,” Stevens said. “They give us access to a capability that won’t be broadly available for five to ten years. So whoever has the time machine is able to do experiments, able to see into the future deeper and more clearly than those that don’t have such machines.”

The same time machine metaphor was also picked up by the University of Chicago’s profile of Mira, our local Top500 competitor, which was bumped down to #5 by the Milky Way 2’s top ranking. But there’s no shame in fifth-best, when fifth-best can run 10 quadrillion calculations per second — the equivalent computing power of 58 million iPads. CI Senior Fellow Gregory Voth is quoted about how access to such a world-class resource helps both today’s and tomorrow’s scientists.

“Having access to a computing resource like Mira provides excellent opportunities and experience for educating up-and-coming young scientists as it forces them to think about how to properly utilize such a grand resource very early in their careers,” Voth says. “This gives them a unique perspective on how to solve challenging scientific problems and puts them in an excellent position to utilize computing hardware being imagined now for tomorrow.”

WHY DATA SCIENCE MUST BE OPEN SCIENCE AND MORE FROM DSSG

The Data Science for Social Good fellowship has reached the halfway point, and the website is starting to fill up with interesting content about the projects. Some fellows have already produced tools for the community to use, such as Paul Meinshausen’s interactive tree map of the City of Chicago’s Data Portal. Instead of a cold, no-frills list of the datasets available for download by the public, Meinshausen’s map uses color and shape to guide users quickly to the data they are seeking and make rapid comparisons of dataset sizes. The visualization was popular enough that programmers in Boston and San Francisco quickly applied his code to their own cities’ data portals, while another developer built a common map for every city that uses Socrata software to share its data.



Even the world’s fastest supercomputers need some time to prep themselves to join society. After eight months of construction and nearly a year of early research projects testing out its capabilities, the 10-petaflop IBM Blue Gene/Q system finally made its official public bow this Monday in a dedication ceremony at the suburban Argonne campus. At the event, Illinois Senator Dick Durbin said that the current fifth-fastest supercomputer in the world will allow Argonne and the United States as a whole to continue pushing the boundaries of science and reaping the benefits of research.

“Mira ensures the lab remains a linchpin of scientific research, enabling researchers to tackle extremely complex challenges ranging from improving combustion efficiency in car engines to modeling the progression of deadly diseases in the human body,” Durbin said. “High-performance computing is crucial to U.S. economic growth and competitiveness, saving time, money and energy, boosting our national security and strengthening our economy.  If the United States is to remain a leader in the 21st century, we need to continue investing in the science and innovation that will address our growing energy and environmental demands while building the industries of the future.”

The types of projects that will run on the now fully-active Mira demonstrate how the applications of high-performance computing are broader than ever. Beyond more traditional uses in cosmology and physics — such as a simulation of the universe’s expansion or climate modeling — Mira’s 786,000 processors will also be put to work on models of cellular and viral proteins and testing designs for energy-efficient engineering.

“As supercomputers continue to improve, so do the results. Faster and more sophisticated computers mean better simulations and more accurate predictions,” said CI Senior Fellow Rick Stevens. “Mira will help us tackle increasingly complex problems, achieve faster times to solutions and create more robust models of everything from car engines to the human body.”

For more information about Mira and the dedication ceremony, visit the story from the Argonne Newsroom or watch the video below.


FROM CITY HALL TO HYDE PARK

In its early days, the Urban Center for Computation and Data formed a valuable partnership with the data team installed by Mayor Rahm Emanuel within Chicago’s city government. Leading the city’s efforts to free up data internally and externally was Chief Data and Information Officer Brett Goldstein, an alumnus of the UChicago computer science program and the restaurant reservation startup company OpenTable. Goldstein’s team and UrbanCCD worked together on the SmartData platform proposal that was chosen for a $1 million grant in the Bloomberg Mayors Challenge earlier this year, and Goldstein was the keynote speaker at the first workshop of the Urban Sciences Research Coordination Network in February.

So we are very excited about the news that Goldstein will soon be joining the University of Chicago as the inaugural Fellow in Urban Science at the Harris School of Public Policy. Goldstein will continue to work with UrbanCCD researchers on the SmartData platform and other projects, while also helping with the launch of a master’s degree in computation and public policy and the Urban Technology Innovators’ Conference, a new initiative organized by Chicago Harris and the City of Chicago that seeks to create a peer-learning network for municipal technology innovators.

“Chicago Harris demonstrates a commitment to rigorous interdisciplinary scholarship, with strong partnerships with the Department of Computer Science and the Computation Institute, and a desire to advance the field of data science, especially so it can be leveraged for public service,” Goldstein said. “I am excited about the opportunity to continue working to meld urban science and data analytics and work with this impressive community of faculty and staff.”

You can read more coverage of Goldstein’s move and career so far at Crain’s Chicago Business and Government Technology.


The exascale — one million trillion calculations per second — is the next landmark in the perpetual race for computing power. Although this speed is 500 times faster than the world’s current leading supercomputers and many technical challenges remain, experts predict that the exascale will likely be reached by 2020. But while the United States is used to being the frontrunner in high-performance computing achievement, this leg of the race will feature intense competition from Japan, China and Europe. In order to pass the exascale barrier first and reap the application rewards in energy, medicine and engineering research, government funding is critical.

On Capitol Hill yesterday, CI Senior Fellow Rick Stevens testified to this urgency as part of a Congressional Subcommittee on Energy hearing, “America’s Next Generation Supercomputer: The Exascale Challenge.” The hearing was related to the American High-End Computing Leadership Act [pdf], a bill proposed by Rep. Randy Hultgren of Illinois to improve the HPC research program of the Department of Energy and make a renewed push for exascale research in the United States. You can watch archived video of the hearing here, and Stevens’ prepared opening statement is reproduced in full below.

=====

Thank you Chairman Lummis, Ranking Member Swalwell, and Members of the Subcommittee. I appreciate this opportunity to talk to you about the future of high performance computing research and development, and about the importance of U.S. leadership in the development and deployment of Exascale computing.

I am Rick Stevens, the Associate Laboratory Director responsible for Computing, Environment, and Life Sciences research at Argonne National Laboratory. My laboratory operates one of the two Leadership Class computing systems for DOE’s Office of Science. My own research focuses on finding new ways to increase the impact of computation on science – from the development of new more powerful computer systems to the creation of large-scale applications for computational genomics targeting research in energy, the environment and infectious disease. I also am a Professor at the University of Chicago in the Department of Computer Science, where I hold senior fellow appointments in the University’s Computation Institute and the Institute for Genomics and Systems Biology.

I believe that advancing American leadership in high-performance computing is vital to our national interest. High-performance computing is a critical technology for the nation. It is the underlying foundation for advanced modeling and simulation and big data applications.


Many of us carry a computer in our pocket that’s as powerful as the supercomputers of the late 1980s. Many of us also mostly use that revolutionary device to slingshot cartoon birds at evil pigs. Smartphones have undoubtedly improved and changed our lives in many different ways, yet the potential of these mobile computers to benefit science and humanity has often been overshadowed by their talent for eating up free time with a silly game. But as CI fellow T. Andrew Binkowski said in his (flood-delayed) talk for Faculty Technology Day on May 8th, there are few reasons why the power of smartphone apps can’t also be harnessed for teaching and research in an academic context.

In general, the world of smartphone apps is a cruel and competitive ecosystem. Almost 1 million apps are available in Apple’s App Store, which has seen some 50 billion downloads since its launch in 2008. Due to this scale, Binkowski said he often warns people that no matter how good their app idea is, it’s very likely that somebody else has already created and released something similar. Often, it’s the design, marketing and support of an app that separate it from a crowd of lookalike releases — Angry Birds wasn’t even the first game where a player flings animals at buildings, and yet it is now the most successful franchise in iOS history.

For your typical developer, that makes selling your app “the hardest 99 cents you will ever earn,” Binkowski said. But for academic apps meant for the classroom or laboratory, that fierce competition is irrelevant.

“A lot of educators have goals to better reach out and connect with students, facilitate research, or something as simple as improving communication,” Binkowski said. “This removes a lot of the burdens and constraints of developing an app. If you want an app for your lab and you’re the only one in the world doing this research, you don’t have to worry about this fiercely competitive marketplace. You can build something that just helps you.”


The Alliant FX/8, an early parallel supercomputer.

For the last few decades, parallelism has been the secret weapon of computing. Based on the principle that large problems can be solved faster if they are chopped up into smaller problems performed simultaneously, parallel computing has driven supercomputers to their current petascale power. Recently, the concept has spread to consumer computers as well, as clock speed limitations of single processors led manufacturers to switch to multi-core chips combining 2, 4 or 8 CPUs. But in the early 1980s, when Argonne National Laboratory created its Advanced Computing Research Facility (ACRF), the path of parallelism was not so clear.
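The divide-and-conquer idea behind parallelism can be sketched in a few lines of Python using the standard multiprocessing module. The workload below (summing squares) and the chunking scheme are purely illustrative, not drawn from any Argonne code:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Solve one 'small problem': sum squares of integers in [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum_of_squares(n, workers=4):
    """Chop the range [0, n) into chunks, solve them simultaneously,
    then combine the partial results."""
    step = n // workers
    # The last chunk absorbs any remainder so the chunks cover [0, n) exactly.
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # The parallel result matches the serial one; the difference is that
    # the work is spread across several CPU cores.
    assert parallel_sum_of_squares(1_000_000) == sum(i * i for i in range(1_000_000))
```

The hard part in practice — then as now — is not splitting the work but minimizing the communication and synchronization between the pieces, which is where the architectural questions of the early ACRF machines came in.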

The origin, subsequent impact and future role of this technology were the topics of discussion at the Thirty Years of Parallel Computing at Argonne symposium, held over two days earlier this week. Luminaries of the computer industry and research community — many of them Argonne alumni or collaborators — met on the Argonne campus to share stories of the laboratory’s instrumental role in nurturing parallel computers and the software they use, and how the approach helped to create the computational science of today and tomorrow.

From a modern perspective, it was hard to spot the world-changing potential in Jack Dongarra’s pictures and descriptions of the earliest Argonne parallel computers, which more resembled washer-dryers than today’s sleek, gargantuan supercomputers. The diversity of parallel machines purchased by the ACRF — 13 in its first 8 years, Dongarra said — reflected the excitement and uncertainty about parallel computing in those early days.

“We knew that parallel computing was the way HPC was going to be done in the future,” said Paul Messina, director of science at what is now known as the Argonne Leadership Computing Facility. “But there was no clear winner in terms of parallel architectures.”


Newer, faster supercomputers have allowed scientists to create detailed models of blood flow that help doctors understand what happens at the molecular level. (Photo from Argonne)

This week, some 25 cities around the world are hosting events online and offline as part of Big Data Week, described by its organizers as a “global community and festival of data.” The Chicago portion of the event features several people from the Computation Institute, including two panels on Thursday: “Data Complexity in the Sciences: The Computation Institute” featuring Ian Foster, Charlie Catlett, Rayid Ghani and Bob George, and “Science Session with the Open Cloud Consortium” featuring Robert Grossman and his collaborators. Both events are in downtown Chicago, free, and you can register at the above links.

But the CI’s participation in Big Data Week started with two webcast presentations on Tuesday and Wednesday that demonstrated the broad scope of the topic. The biggest data of all is being produced by simulations on the world’s fastest supercomputers, including Argonne’s Mira, the fourth-fastest machine in the world. Mira can perform 10 quadrillion floating-point operations per second, but how do you make sense of the terabytes of data such powerful computation produces on a daily basis?

In his talk “Big Vis,” Joseph Insley of Argonne and the CI explained how he and his team have developed equally impressive visualization technology to keep pace with Mira’s data firehose. Tukey, a 96-node visualization cluster, is Mira’s sidekick, sharing the same software and file systems with its big sibling to more easily take in data and transform it into images. Insley demonstrated how visualization was instrumental in two major simulations conducted on Mira: one studying arterial blood flow and aneurysm rupture in the brain, and another on nothing less than the evolution of the entire universe.

