Archive for the ‘Science as a Service’ Category



China’s Milky Way 2 supercomputer was recently declared the fastest supercomputer in the world by industry scorekeeper Top500, the latest move in the increasingly international race for high-performance computing supremacy. Late last month, CI Senior Fellow Rick Stevens appeared on Science Friday, alongside Top500 editor Horst Simon, to talk about why that competition matters, and what the global push for faster computation will do for medicine, engineering and other sciences.

“These top supercomputers are like time machines,” Stevens said. “They give us access to a capability that won’t be broadly available for five to ten years. So whoever has the time machine is able to do experiments, able to see into the future deeper and more clearly than those that don’t have such machines.”

The same time machine metaphor was picked up by the University of Chicago’s profile of Mira, our local Top500 competitor, which was bumped down to #5 by Milky Way 2’s top ranking. But there’s no shame in fifth-best when fifth-best can run 10 quadrillion calculations per second, the equivalent computing power of 58 million iPads. CI Senior Fellow Gregory Voth is quoted on how access to such a world-class resource helps both today’s and tomorrow’s scientists.

“Having access to a computing resource like Mira provides excellent opportunities and experience for educating up-and-coming young scientists as it forces them to think about how to properly utilize such a grand resource very early in their careers,” Voth says. “This gives them a unique perspective on how to solve challenging scientific problems and puts them in an excellent position to utilize computing hardware being imagined now for tomorrow.”


The Data Science for Social Good fellowship has reached the halfway point, and the website is starting to fill up with interesting content about the projects. Some fellows have already produced tools for the community to use, such as Paul Meinshausen’s interactive tree map of the City of Chicago’s Data Portal. Instead of a cold, no-frills list of the datasets available for public download, Meinshausen’s map uses color and shape to guide users quickly to the data they are seeking and to make rapid comparisons of dataset sizes. The visualization was popular enough that programmers in Boston and San Francisco quickly applied his code to their own cities’ data portals, while another built a common map for every city that uses Socrata software to share its data.
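The idea behind such a tree map is simple: each dataset gets a rectangle whose area is proportional to its size, so large datasets jump out at a glance. Here is a minimal sketch of that core layout step in Python. This is not Meinshausen’s actual code, and the dataset sizes are hypothetical numbers used purely for illustration:

```python
def slice_layout(sizes, x, y, w, h, vertical=True):
    """Split the rectangle (x, y, w, h) into strips whose areas are
    proportional to sizes -- the basic layout step behind a treemap."""
    total = float(sum(sizes))
    rects, offset = [], 0.0
    for s in sizes:
        frac = s / total
        if vertical:
            # Vertical slicing: each strip keeps the full height,
            # and its width carries the proportion.
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:
            # Horizontal slicing: full width, proportional height.
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Hypothetical row counts for three portal datasets (illustrative only).
sizes = [6_500_000, 400_000, 100_000]
layout = slice_layout(sizes, 0, 0, 100, 100)
```

A full treemap alternates the slicing direction as it recurses into categories, but even this one-level version conveys why the format beats a flat list: relative sizes become visible without reading a single number.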


Read Full Post »

CERN is known as the current world epicenter of particle physics, the home of the Large Hadron Collider and thousands of scientists expanding our knowledge of the universe’s most basic ingredients. For one day earlier this month, the Geneva, Switzerland laboratory was also a meeting place for scientists, philosophers, musicians, animators and even will.i.am to share their grand ideas for the first-ever TEDxCERN event. Among the speakers riffing on the theme of “Multiplying Dimensions” was CI Director Ian Foster, who presented his vision for The Discovery Cloud and accelerating the pace of science by bringing advanced data and computation tools to the smaller laboratories and citizen scientists of the world.

What we need to do is to in a sense create a new set of cloud services which do for science what the myriad of business cloud services do for business. We might call it the discovery cloud. It would be a set of services that take on, automate, and allow people to handle or outsource many of the routine activities that currently dominate research…I believe if we do that right, we can really make a transformative difference in how people do science.

You can watch a full video of Foster’s presentation below:

International Science Grid This Week also covered Foster’s talk and another given a day earlier to the information technology team at CERN. In that speech, Foster delivered a similar message about the need to bring advanced cyberinfrastructure to the “99%” of laboratories who can’t afford to build international data grids akin to what CERN used in its discovery of the Higgs boson.

“We have managed to create exceptional infrastructure for the 1%, but what about the rest?” asks Foster. “We have big science, but small labs. How do we deliver cyber infrastructure to small groups? They need something that is frictionless, affordable and sustainable.”

Read Full Post »

©CERN Photo: Brice Maximilien / Laurent Egli — at CERN.



We were thrilled to spend Friday morning with the folks at TEDxCERN via webcast, enjoying fascinating talks by CI director Ian Foster and several other amazing scientists and educators. Foster’s talk focused on “The Discovery Cloud,” the idea that many complex and time-consuming research tasks can be moved to cloud-based tools, freeing up scientists to accelerate the pace of discovery. We’ll post the video when it’s up, but for now, enjoy this great animation produced for the conference by TED-Ed explaining grid computing, cloud computing and big data.


Speaking of CERN, ISGTW ran a lengthy profile of the computing grid that powers particle physics research on the Large Hadron Collider. In the three years since the LHC started running, it has produced 70 petabytes of data, which is distributed to more than 150 sites around the world for coordinated, parallel analysis. As Wired wrote back in 2004, the LHC grid was built on the Globus Toolkit, created by Ian Foster and Carl Kesselman, “the Lewis and Clark of grid computing.”

Some of the science-as-a-service ideas Foster discussed in his TEDxCERN talk were raised a week earlier by Renee DiResta in O’Reilly Radar. Her piece features companies that provide 3D microscopic scanning, data platforms for computational biology and drug discovery, and even services that connect researchers with freelance scientists.

Computation is eating science, and that’s a good thing…but funding agencies and researchers need to change or be digested, writes Will Schroeder at Kitware.

Read Full Post »

People who work in laboratories take a lot of things for granted. When they come into work in the morning, they expect the equipment to have power, the sink to produce hot and cold water, and the internet and e-mail to be functional. Because these routine services are taken care of “behind the scenes” by facilities and IT staff, scientists can get started right away on their research.

But increasingly, scientists are hitting a new speed bump in their day-to-day activities: the storage, movement and analysis of data. As datasets grow far beyond what can easily be handled on a single desktop computer and long-distance collaborations become increasingly common, frustrated researchers find themselves spending more and more time and money on data management. To get the march of science back up to top speed, new services must be provided that make handling data as simple as switching on the lights.
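To make the pain concrete: even the humble chore of moving a dataset reliably from one machine to another involves steps (integrity checks, retries, logging) that researchers end up scripting by hand. The sketch below shows one such chore, a checksum-verified file copy, in plain Python. It is an illustrative example of the kind of routine task a data-management service automates, not code from Globus or any other real service:

```python
import hashlib
import shutil

def sha256(path, chunk=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks, so even
    multi-gigabyte files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verified_copy(src, dst):
    """Copy src to dst, then confirm the bytes arrived intact."""
    shutil.copyfile(src, dst)
    if sha256(src) != sha256(dst):
        raise IOError("checksum mismatch copying %s -> %s" % (src, dst))
    return dst
```

Multiply this by transfer scheduling, authentication across institutions, and sharing permissions, and it becomes clear why handing the whole job to a service is attractive.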

That mission was the common thread through the second day of the GlobusWorld conference, an annual meeting for the makers and users of the data management service, held this year at Argonne National Laboratory. As Globus software has evolved from the middleware behind the grids that connect computing centers around the world into a cloud-based service for moving and sharing data, its focus has shifted from large “Big Science” collaborations to individual researchers. Easing the headache for smaller laboratories with little to no IT budget can make a big impact on the pace of their science, said Ian Foster, Computation Institute Director and Globus co-founder, in his keynote address.

“We are sometimes described as plumbers,” Foster said. “We are trying to build software and services that automate activities that get in the way of discovery and innovation in research labs, that no one wants to be an expert in, that people find time-consuming and painful to do themselves, and that can be done more effectively when automated. By providing the right services, we believe we can accelerate discovery and reduce costs, which are often two sides of the same coin.”


Read Full Post »

“Software as a Service,” or SaaS, is a concept that has revolutionized the way people use their computers. Every time you check your e-mail on Gmail, stream a movie over Netflix or customize a radio station on Pandora, you’re accessing a SaaS through a browser, which saves you the trouble of installing programs and storing data locally on your own computer. In an essay written for O’Reilly Radar, Renee DiResta argues for swapping out the first S in SaaS for “science,” creating online tools for scientists to outsource time-intensive and expensive processes such as specialized experiments and data sharing, storage and analysis.

Perhaps we can facilitate scientific progress by streamlining the process. Science as a service (SciAAS?) will enable researchers to save time and money without compromising quality. Making specialized resources and institutional expertise available for hire gives researchers more flexibility. Core facilities that own equipment can rent it out during down time, helping to reduce their own costs. The promise of science as a service is a future in which research is more efficient, creative, and collaborative.

In the comments to the article and on his blog, CI Director Ian Foster responded, agreeing that science as a service has the power to free up researchers’ time and budget, and describing one of the CI’s own SciAAS initiatives:

This article echoes several themes that I speak to often. In my conception, every researcher is an entrepreneur, and researchers, like entrepreneurs, should be able to run their (virtual) operations from coffee shops. Science as a service frees researchers to work when and where they want, while also saving them time and money.


Read Full Post »