Archive for the ‘MCS’ Category


Last week, we announced the newest CI research center, the Urban Center for Computation and Data (UrbanCCD). Led by CI senior fellow Charlie Catlett, the center will bring the latest computational methods to bear on the question of how to intelligently design and manage large and rapidly growing cities around the world. With more cities, including our home of Chicago, releasing open datasets, the UrbanCCD hopes to bring advanced analytics to these new data sources and use them to construct complex models that can simulate the effects of new policies and interventions upon a city’s residents, services and environment.

Since the announcement, news outlets including Crain’s Chicago Business, RedEye and Patch have written articles about UrbanCCD and its mission. The center was also highlighted by UrbanCCD collaborators at the School of the Art Institute of Chicago and Argonne National Laboratory, and endorsed by US Rep. Daniel Lipinski.

For more examples of what cities are doing with open data releases and the applications built upon those data sets, see The Best Open Data Releases of 2012 as decided by Atlantic Cities or WBEZ’s breakdown of the potential for Chicago’s new open data policy.



A crystal structure equivalent to β-uranium identified as forming in the LJG system. (From Michael Engel and Hans-Rainer Trebin, Structural complexity in monodisperse systems of isotropic particles, Zeitschrift für Kristallographie, 223:721–725, 2008.)

Most people think that scientists spend all of their time conducting experiments. But the less glamorous side of science comes after the experiments are done, as scientists laboriously comb through the data their work has produced. As new technologies make laboratory procedures faster and more automated, more and more of a scientist’s time is spent on the often tedious task of analyzing data. To accelerate discovery, use resources more efficiently and avoid burning out graduate students, new ways of automating data analysis need to be found.

Carolyn Phillips, a Computation Institute staff member and postdoctoral fellow at Argonne National Laboratory, presented one solution to this data analysis traffic jam in her talk at the CI on December 14th. Phillips works with scientists studying nanoscale self-assembly, the ability of small, simple molecules to form incredibly complex patterns with no external influence. Many researchers in this realm use computer simulations to understand how self-assembly works and to find new ways of harnessing it for designing drugs, materials and cleaner energy sources. But these simulations can produce a flood of data, most of which still needs to be sorted and analyzed manually by slow, distractible humans before the next round of simulations can be run – a problem Phillips sought to fix.

“I wanted to find a way to stop having simulations depend on me all the time,” Phillips said. “How do I automate myself; how do I automate my judgment?”
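Phillips’ actual pipeline is not described in this post, but the idea of “automating your judgment” can be sketched in miniature: a screening loop where a simple programmatic criterion, rather than a researcher inspecting each result by eye, decides which simulation runs deserve follow-up. Everything below is hypothetical and illustrative – the simulation is a random stub, and the function names, parameters and threshold are invented for this sketch, not taken from Phillips’ work.

```python
# Illustrative sketch only: replace a human-in-the-loop review step with an
# automatic classifier that flags which simulation runs to pursue.
# The "simulation" is a random stub; all names here are hypothetical.

import random

def run_simulation(params, seed):
    """Stand-in for a self-assembly simulation: returns a fake 'order
    parameter' measuring how crystalline the final configuration is."""
    random.seed(seed)
    return random.random() * params["interaction_strength"]

def looks_interesting(order_parameter, threshold=0.5):
    """Encodes the judgment a researcher would otherwise apply by eye:
    flag runs whose structures appear ordered enough to examine closely."""
    return order_parameter >= threshold

def screen(param_grid, seeds):
    """Run every (params, seed) combination and keep only the runs the
    automated criterion flags, so simulations never wait on a person."""
    flagged = []
    for params in param_grid:
        for seed in seeds:
            score = run_simulation(params, seed)
            if looks_interesting(score):
                flagged.append((params, seed, score))
    return flagged

grid = [{"interaction_strength": s} for s in (0.2, 0.6, 1.0)]
hits = screen(grid, seeds=range(5))
print(f"{len(hits)} of {3 * 5} runs flagged for closer analysis")
```

In a real workflow, `looks_interesting` would be replaced by a structure-identification routine, and the flagged runs could automatically seed the next round of simulations – removing the human bottleneck between rounds.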

