Archive for the ‘Uncategorized’ Category

When the New York Times ran its investigative report in September on the massive amount of energy used by data centers, it drew widespread criticism from people within the information technology industry. While nobody involved with the operation or engineering of those data centers denied that they use a lot of resources, many experts took offense at the article's suggestion that the industry wasn't interested in finding solutions. "The assertions made in it essentially paint our engineers and operations people as a bunch of idiots who are putting together rows and rows of boxes on data centers and not caring what this costs to their businesses, nay, to the planet," wrote computer scientist Diego Doval. "And nothing could be further from the truth."

That statement was backed up by a talk given last week by Hewlett-Packard Labs Fellow Partha Ranganathan, who told a room of computer science students and researchers about his company's efforts to develop "energy-aware computing." Ranganathan argued for more efficient supercomputers and data centers not only on environmental grounds, but also as an essential hurdle that must be cleared if computing speed is to continue the exponential march charted by Moore's Law. As the field hopes to push through the petascale to the exascale and beyond, Ranganathan said that the "power wall," the energy required to power and cool these machines, was becoming a fundamental limit on capacity. So one of the greatest challenges the IT field currently faces is how to deliver faster and faster performance at both low cost and high sustainability.
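To make the "power wall" concrete, here is a back-of-the-envelope sketch (the 20-megawatt power budget is an illustrative assumption, not a figure from Ranganathan's talk): dividing a facility's power budget by its target performance gives the energy each floating-point operation can afford to consume.

```python
# Back-of-the-envelope "power wall" arithmetic.
# The 20 MW budget is an illustrative assumption, not a figure from the talk.

def joules_per_flop(power_watts: float, flops_per_second: float) -> float:
    """Energy budget available per floating-point operation."""
    return power_watts / flops_per_second

POWER_BUDGET_W = 20e6  # assume a roughly 20 MW facility power budget

for name, rate in [("petascale (1e15 flop/s)", 1e15),
                   ("exascale (1e18 flop/s)", 1e18)]:
    budget_pj = joules_per_flop(POWER_BUDGET_W, rate) * 1e12  # joules -> picojoules
    print(f"{name}: about {budget_pj:,.0f} pJ per operation")
```

Under those assumptions, a petascale machine gets roughly 20,000 picojoules per operation, while an exascale machine on the same power budget gets only about 20. That thousandfold tightening of the energy budget is what makes "energy-aware computing" a prerequisite for further speed.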

(more…)

Read Full Post »

A panorama of Mira (courtesy Argonne National Laboratory)

In the computational world, where speed is king, fifteen zeros is the current frontier. The new wave of petascale supercomputers going online around the world in the coming months is capable of performing at least one quadrillion, or 1,000,000,000,000,000, floating-point calculations per second. In scientific notation, a quadrillion is written as 1 x 10^15, so clever computer scientists declared October 15th (get it?) to be Petascale Day, a celebration of this new computational land speed record and its ability to transform science.
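For a sense of scale, here is a quick sketch; the world-population and laptop-speed figures are rough assumptions for illustration only.

```python
# What does a quadrillion (1e15) floating-point operations per second mean?
# The population and laptop figures below are rough, illustrative assumptions.

PETAFLOPS = 1e15          # operations per second for a petascale machine
WORLD_POPULATION = 7e9    # roughly 7 billion people (circa 2012)
LAPTOP_FLOPS = 10e9       # assume ~10 gigaflop/s for an ordinary laptop

per_person = PETAFLOPS / WORLD_POPULATION
laptop_seconds = PETAFLOPS / LAPTOP_FLOPS

print(f"~{per_person:,.0f} operations per second for every person on Earth")
print(f"An ordinary laptop would need ~{laptop_seconds / 86400:.1f} days "
      f"to match one second of petascale work")
```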

Here at the Computation Institute, we observed the day by hosting a lunch event with the University of Chicago Research Computing Center, putting together a roster of six talks about these powerful machines and the new types of research they will enable. The speakers, who hailed from Argonne National Laboratory, the University of Chicago, and the Computation Institute, talked about the exciting potential of the petascale, as well as the technical challenges scientists face to get the most out of the latest supercomputers.

(more…)

Read Full Post »

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we’ll be there Wednesday through Friday to report on talks about the latest in computational research. We’ll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eScience.

What to Do (and Say) When the Models Aren’t Good Enough (8:30 – 10:00)

The place where most people encounter computational models in their daily lives is the weather forecast. The meteorologist on the morning news or the information in the Weather app is working with numbers generated by computer models that analyze satellite data and project conditions over the next 24 hours or longer, to some degree of probability. As everyone knows, these forecasts aren't always right, despite centuries of science spent studying weather patterns and devising supposedly better ways of predicting whether or not it's going to rain tomorrow.
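One common way such probabilities are produced is to run the same model many times from slightly perturbed starting conditions and report how many of the runs agree. The toy ensemble below sketches that idea; the "model," thresholds, and numbers are stand-ins, not any forecasting center's actual code.

```python
# Toy ensemble forecast: run a stand-in "model" many times with perturbed
# initial conditions and report the fraction of runs that predict rain.
# All values here are invented, purely for illustration.
import random

def toy_model(initial_humidity: float) -> float:
    """Stand-in for a numerical weather model: simulated humidity 24 hours out."""
    return initial_humidity + random.gauss(0.0, 0.08)

OBSERVED_HUMIDITY = 0.72   # hypothetical observation (0..1 scale)
RAIN_THRESHOLD = 0.75      # hypothetical threshold for "rain"
N_MEMBERS = 50             # ensemble size

rainy = sum(
    toy_model(OBSERVED_HUMIDITY + random.gauss(0.0, 0.05)) > RAIN_THRESHOLD
    for _ in range(N_MEMBERS)
)
print(f"Chance of rain tomorrow: {100 * rainy / N_MEMBERS:.0f}%")
```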

(more…)

Read Full Post »

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we’ll be there Wednesday through Friday to report on talks about the latest in computational research. We’ll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eScience.

How to Get to All That Data (and When Do the Robots Take Over) (1:00 – 3:00)

A lot of information was shared this afternoon at the conference about the voting habits of people living in Melbourne, Australia. Two different but related projects from Down Under, the Australian Urban Research Infrastructure Network and esocialscience.org, demonstrated their web-based portals for sharing datasets collected about the country, and both chose to map the distribution of voters for Australia's two major parties, Labor and the Liberals (who are actually conservative, we learned). The presentations, by Gerson Galang of the University of Melbourne and Nigel Ward of the University of Queensland, showed both the mountains of data available to researchers with a few clicks in their browser and the very complicated machinery "under the hood" that makes such voluminous information, along with the analysis and visualization tools those researchers often need, so easily accessible.
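Neither portal's internal code was shown, but the kind of analysis behind both demos boils down to something like the sketch below, run here over a made-up table of votes by electorate rather than AURIN or esocialscience.org data.

```python
# Compute two-party vote share by electorate from a hypothetical CSV.
# Electorate names and vote counts are invented for illustration;
# this is not AURIN or esocialscience.org data or code.
import csv
from io import StringIO

HYPOTHETICAL_CSV = """electorate,labor_votes,liberal_votes
Melbourne,52000,41000
Kooyong,38000,55000
Wills,61000,33000
"""

for row in csv.DictReader(StringIO(HYPOTHETICAL_CSV)):
    labor = int(row["labor_votes"])
    liberal = int(row["liberal_votes"])
    share = labor / (labor + liberal)
    print(f"{row['electorate']}: Labor two-party share {share:.1%}")
```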

(more…)

Read Full Post »

The 2012 IEEE International Conference on eScience is taking place in Chicago this year, and we’ll be there Wednesday through Friday to report on talks about the latest in computational research. We’ll update the blog throughout the conference (subject to wifi and electrical outlet availability), and will tweet from the talks @Comp_Inst using the hashtag #eSci12.

Paving Future Cities with Open Data (Panel 2:00 – 3:00)

As the Earth’s population increases, the world is urbanizing at an accelerating rate. Currently, half of the people on the planet live in cities, but that share is expected to grow to 70 percent in the coming decades. Booming populations in China and India have driven rapid urban development at a rate unprecedented in human history. Simultaneously, existing cities are releasing more data about their infrastructure than ever before, on everything from crime to public transit performance to snow plow geotracking.

So now is the perfect time for computational scientists to get involved with designing and building better cities, and that was the topic of a panel moderated by Computation Institute Senior Fellow Charlie Catlett. With representatives from IBM and Chicago City Hall and a co-founder of EveryBlock, the panel brought together experts who have already started digging into city data to talk about both the potential of that work and the precautions it demands.
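As a minimal illustration of what "digging into city data" can look like, the snippet below counts incidents by neighborhood in a hypothetical open-data extract; the columns and records are invented, not drawn from any city's actual portal.

```python
# Count incidents by neighborhood in a hypothetical open-data crime extract.
# Columns and rows are invented for illustration; real city portals publish
# similar CSV downloads alongside their APIs.
import csv
from collections import Counter
from io import StringIO

HYPOTHETICAL_EXTRACT = """date,neighborhood,type
2012-09-01,Hyde Park,theft
2012-09-01,Loop,burglary
2012-09-02,Hyde Park,theft
2012-09-03,Pilsen,assault
"""

counts = Counter(
    row["neighborhood"] for row in csv.DictReader(StringIO(HYPOTHETICAL_EXTRACT))
)
for neighborhood, n in counts.most_common():
    print(f"{neighborhood}: {n} incidents")
```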

(more…)

Read Full Post »

A vast amount of scientific knowledge is inaccessible to the scientific community because small laboratories lack the computational resources or tools to share and analyze their experimental results. With a new grant from the National Science Foundation, the Computation Institute will collaborate with leading institutions to look for ways that software can bring this data out of hiding, revealing untapped value in the "long tail" of scientific research.

The one-year, $500,000 planning grant enables investigators at the Computation Institute, University of California, Los Angeles, University of Arizona, University of Washington and University of Southern California to lay the groundwork for a proposed Institute for Empowering Long Tail Research as part of the NSF’s Scientific Software Innovation Institutes program. Researchers will engage with scientists from fields such as biodiversity, economics and metagenomics to determine the best solutions for the increasingly challenging data and computational demands on smaller laboratories.

(more…)

Read Full Post »

A large chunk of a government’s budget can be traced back to a small number of frequently used, expensive programs. These include the costs of adult and juvenile incarceration, of foster care for at-risk children, and of safety net services such as mental health and substance abuse treatment for low-income individuals. These programs don't operate in isolation; many individuals or families in one of them will also be in at least one other at some point in their lives. Finding these social service "hotspots" could allow governments to distribute resources more effectively, reducing costs without sacrificing services at a time when budgets are especially tight.

But the data from each of these programs are walled off in different departments, such as the Departments of Corrections or Children and Family Services, with limited to no sharing across bureaucratic lines. In his Sept. 27 talk at the Computation Institute, Robert Goerge, a senior research fellow at the University of Chicago’s Chapin Hall, described how integrating these silos of public sector data can inform more efficient government spending, and how computation can help.
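At the heart of that integration is record linkage: recognizing the same individual across separately held program files. Here is a minimal sketch using invented identifiers and rosters rather than any real agency data; real linkage also has to cope with typos, aliases, and strict privacy rules.

```python
# Minimal record-linkage sketch: find individuals who appear in more than
# one social-service program. Identifiers and rosters are invented.
from collections import defaultdict

# Hypothetical per-department rosters of client IDs.
PROGRAM_ROSTERS = {
    "corrections": {"A102", "B311", "C044"},
    "foster_care": {"B311", "D920", "A102"},
    "substance_abuse_treatment": {"C044", "B311"},
}

programs_by_person = defaultdict(set)
for program, roster in PROGRAM_ROSTERS.items():
    for person_id in roster:
        programs_by_person[person_id].add(program)

# "Hotspots": people enrolled in two or more programs.
for person_id, programs in sorted(programs_by_person.items()):
    if len(programs) >= 2:
        print(f"{person_id}: {', '.join(sorted(programs))}")
```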

(more…)

Read Full Post »

Older Posts »