Archive for the ‘Medicine’ Category


Finding a better way to fight cancer doesn’t always mean discovering a new drug or surgical technique. Sometimes just defining the disease in greater detail can make a big difference. A more specific diagnosis may allow a physician to better tailor a patient’s treatment, using available therapies proven to work better on a specific subtype of disease or avoiding unnecessary complications for less aggressive cases.

“Finding better ways to stratify kids when they present and decide who needs more therapy and who needs less therapy is one of the ways in which we’ve gotten much better at treating pediatric cancer,” said Samuel Volchenboum, Computation Institute Fellow, Assistant Professor of Pediatrics at Comer Children’s Hospital and Director of the UChicago Center for Research Informatics. “For example, kids can be put in one of several different groups for leukemia, and each group has its own treatment course.”

Classically, patients have been sorted into risk or treatment groups based on demographic factors such as age or gender, and relatively simple results from laboratory tests or biopsies. Because cancer is a genetic disease, physicians hope that genetic factors will point the way to even more precise classifications. Yet despite this promise, many of the “genetic signatures” found to correlate with different subtypes of cancer are too complex – involving dozens or hundreds of genes – for clinical use and difficult to validate across patient populations.
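In computational terms, stratification is a supervised classification problem, and the hard part described here is validation. The sketch below is purely illustrative: the cohort, gene counts and parameters are all assumptions standing in for real patient data and a real pipeline. It fits a sparse classifier to synthetic "expression" data and checks it with cross-validation, the kind of test where overly complex signatures tend to fall apart.

```python
# Illustrative sketch only: every dataset, gene count and parameter here
# is an assumption, standing in for a real patient cohort and pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 50                 # hypothetical cohort and gene panel
X = rng.normal(size=(n_patients, n_genes))    # stand-in expression levels
y = rng.integers(0, 2, size=n_patients)       # stand-in subtype labels

# An L1 penalty shrinks most gene weights to zero, one common way to
# pare a signature of hundreds of genes down to a clinically usable panel.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Cross-validation estimates generalization; with pure noise like this
# synthetic data, accuracy lands near chance -- exactly the failure that
# honest validation across patient populations is meant to expose.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```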

(more…)

Read Full Post »

Forest of synthetic pyramidal dendrites grown using Cajal’s laws of neuronal branching (Wikimedia Commons)

Trauma surgeons know how to fix gunshot wounds, lacerations and broken bones. It’s what comes afterwards that really worries them. Even after the initial injury is treated, patients are at risk for secondary issues such as infection, sepsis and organ failure. While the biological pathways involved in these processes have been well studied and characterized, effective interventions to reliably stop the dangerous cascade have yet to be discovered.

“It was very frustrating for me to not have the drugs and tools necessary to fix what I thought was actually going wrong with those patients,” said trauma surgeon and CI senior fellow Gary An, in his University of Chicago Alumni Weekend UnCommon Core talk. “Often we know what will happen, but we have no way to stop it.”

The currently fashionable approach to such intractable problems in medicine and other fields is Big Data, in which answers hiding in massive datasets will be uncovered by advanced analytic methods. But quoting Admiral Ackbar, An warned the audience that this approach alone “is a trap,” generating a multitude of correlations and hypotheses that don’t always translate into real-world applications.

“What it wants to appeal to is magic…if you can get enough data and a big powerful computer, an answer will magically appear,” said An, an associate professor of surgery at University of Chicago Medicine. “That’s fine if you want to diagnose or characterize. But if we want to engineer interventions to be able to manipulate systems, we need to have presumptions of mechanistic causality; we need to be able to test hypotheses.”
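The contrast An draws can be made concrete with a toy model. The sketch below is an invented two-variable caricature of an inflammatory cascade, not An’s actual models, and every parameter in it is made up. The point is that once the causal mechanism is written down explicitly, a proposed intervention becomes a hypothesis the simulation can test directly.

```python
# Toy sketch of the mechanistic-modeling idea, with invented dynamics and
# parameters: damage (D) recruits inflammation (I), which clears damage
# but, past a threshold, feeds back and causes more of it.
def simulate(block_feedback=False, steps=500, dt=0.1):
    D, I = 1.0, 0.0                               # initial injury, no inflammation
    for _ in range(steps):
        feedback = 0.0 if block_feedback else 0.4 * I * (I > 1.0)
        dD = -0.3 * I * D + feedback              # clearance vs. runaway loop
        dI = 0.8 * D - 0.2 * I                    # recruitment and decay
        D, I = max(D + dt * dD, 0.0), max(I + dt * dI, 0.0)
    return D, I

# Because causality is explicit, an intervention is a testable hypothesis:
# what happens if a hypothetical drug blocks the feedback loop?
print("untreated:       D, I =", simulate())
print("feedback blocked: D, I =", simulate(block_feedback=True))
```

Run as written, the untreated system settles into a persistently inflamed state, while blocking the (invented) feedback loop lets both variables decay toward zero: a causal claim one can test, rather than a correlation one can only report.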

(more…)

Read Full Post »

Watch or listen to the news in any city and you’ll be fed a stream of numbers: traffic times, weather forecasts, sports scores and financial reports. All this data gives a quick, surface snapshot of the city on any given day — what happened last night, what’s happening right now, what will happen over the next 24 hours. But a city’s health is harder to put a figure on, either because of the complexity of data, the scarcity of data or the hiding of data behind locked doors. At the University of Chicago last week, a panel of researchers in medicine and the social sciences discussed how the health numbers of Chicago and other cities can be both collected and applied, enabling research on unprecedented scales and empowering citizens to improve their own wellbeing.

The panel, “Methods, Data, Infrastructure for the Study of Health in Cities,” was part of the broader Health in Cities event, one of four Urban Forums held by the University of Chicago Urban Network encompassing the impressive breadth of city research on campus. Among the participants were several scientists who are currently collaborating with CI researchers on how to use computation to better collect, analyze and share data. Kate Cagney, an associate professor of sociology and health studies, is working with the Urban Center for Computation and Data on its efforts to help plan and study the massive new Lakeside development taking shape on Chicago’s South Side. Her team will conduct interviews of residents in the neighborhoods surrounding Lakeside both before and after construction to assess how many aspects of their lives — including health — are affected by this enormous addition to the city’s landscape.

“We have an opportunity to study the impact of building a neighborhood from the ashes,” Cagney said. “New computational and data-intensive science techniques now exist to organize and analyze many disparate data sets, and these will allow for the study of Lakeside in unprecedented detail and produce insights due to real-time data.”

(more…)

Read Full Post »

Graphic by Ana Marija Sokovic

When Charles Darwin took his historic voyage aboard the HMS Beagle from 1831 to 1836, “big data” was measured in pages. On his travels, the young naturalist produced at least 20 field notebooks, zoological and geological diaries, a catalogue of the thousands of specimens he brought back and a personal journal that would later be turned into The Voyage of the Beagle. But it took Darwin more than two decades to distill all of that information into his theory of natural selection and the publication of On the Origin of Species.

While biological data may have since transitioned from analog pages to digital bits, extracting knowledge from data has only become more difficult as datasets have grown larger and larger. To wedge open this bottleneck, the University of Chicago Biological Sciences Division and the Computation Institute launched their very own Beagle — a 150-teraflop Cray XE6 supercomputer that ranks among the most powerful machines dedicated to biomedical research. Since the Beagle’s debut in 2010, over 300 researchers from across the University have run more than 80 projects on the system, yielding over 30 publications.

“We haven’t had to beat the bushes for users; we went up to 100 percent usage on day one, and have held pretty steady since that time,” said CI director Ian Foster in his opening remarks. “Supercomputers have a reputation as being hard to use, but because of the Beagle team’s efforts, because the machine is well engineered, and because the community was ready for it, we’ve really seen rapid uptake of the computer.”

A sampler of those projects was on display last week as part of the first Day of the Beagle symposium, an exploration of scientific discovery on the supercomputer. The research ranged from the very big — networks of genes, regulators and diseases built by UIC’s Yves Lussier — to the very small — atomic models of molecular motion in immunological factors, cell structures and cancer drugs. Beagle’s flexibility in handling projects from across the landscape of biology and medicine ably demonstrated how computation has solidified into a key branch of research in these disciplines alongside traditional theory and experimentation.

(more…)

Read Full Post »

Newer, faster supercomputers have allowed scientists to create detailed models of blood flow that help doctors understand what happens at the molecular level. (Photo from Argonne)

This week, some 25 cities around the world are hosting events online and offline as part of Big Data Week, described by its organizers as a “global community and festival of data.” The Chicago portion of the event features several people from the Computation Institute, including two panels on Thursday: “Data Complexity in the Sciences: The Computation Institute” featuring Ian Foster, Charlie Catlett, Rayid Ghani and Bob George, and “Science Session with the Open Cloud Consortium” featuring Robert Grossman and his collaborators. Both events are in downtown Chicago, free, and you can register at the above links.

But the CI’s participation in Big Data Week started with two webcast presentations on Tuesday and Wednesday that demonstrated the broad scope of the topic. The biggest data of all is being produced by simulations on the world’s fastest supercomputers, including Argonne’s Mira, the fourth-fastest machine in the world. Mira can perform 10 quadrillion floating-point operations per second (10 petaflops), but how do you make sense of the terabytes of data such powerful computation produces on a daily basis?
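Some back-of-envelope arithmetic suggests why. The figures below are illustrative assumptions, not Mira’s measured output, but they show how quickly even sparse snapshots of a large simulation add up:

```python
# Back-of-envelope arithmetic with assumed numbers, for illustration only:
# saving even a sliver of a petascale simulation's state adds up fast.
grid_points = 1e9          # hypothetical simulation grid size
bytes_per_value = 8        # one double-precision number per grid point
snapshots_per_day = 200    # hypothetical output frequency

daily_output = grid_points * bytes_per_value * snapshots_per_day
print(f"{daily_output / 1e12:.1f} TB/day")   # -> 1.6 TB/day
```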

In his talk “Big Vis,” Joseph Insley of Argonne and the CI explained how he and his team have developed equally impressive visualization technology to keep pace with Mira’s data firehose. Tukey, a 96-node visualization cluster, is Mira’s sidekick, sharing the same software and file systems with its big sibling to more easily take in data and transform it into images. Insley demonstrated how visualization was instrumental in two major simulations conducted on Mira: one studying arterial blood flow and aneurysm rupture in the brain, and another on nothing less than the evolution of the entire universe.

(more…)

Read Full Post »

A CITY PROJECT BATTLE ROYALE

As the keynote speaker at the Urban Sciences Research Coordination Network kickoff last Friday, the City of Chicago’s Brett Goldstein presented a blizzard of exciting city projects in various stages of development. One slightly-under-wraps project Goldstein touched upon was the SmartData platform, an ambitious plan to craft a new tool for decision-making and city services out of the abundant raw material of city data. In collaboration with the Computation Institute and the Urban Center for Computation and Data, the city’s Innovation and Technology team hopes to create a tool that will analyze the city’s many large datasets in real time to help the city respond to challenges more quickly and efficiently, while providing frequently updated, useful information to its citizens.

Wednesday, that exciting new effort was announced as a finalist in the Bloomberg Philanthropies Mayors Challenge, a competition among ideas proposed by cities across the United States. As part of the judging, the public is invited to vote for their favorite project among the 20 finalists at the Huffington Post. We’re biased of course, but to help make the case for Chicago’s project, you can read more about the SmartData platform here, or watch a video about the concept featuring Mayor Rahm Emanuel below.

(more…)

Read Full Post »

Humans have a visual bias, even hundreds of thousands of years after our pattern-recognition skills evolved for prehistoric hunting and predator avoidance. In a newspaper or a scientific article, a well-designed graphic or picture can often convey information more quickly and efficiently than raw data or a lengthy chunk of text. And as the era of data science dawns, the interpretive role of visualization is more important than ever. It’s hard to even imagine the size of a petabyte of data, much less the complex analysis necessary to extract knowledge from the flood of information within.

Fortunately, scientists and engineers were studying this need for visualization long before Big Data became a buzzword. The Electronic Visualization Laboratory, housed at the University of Illinois at Chicago, has been active in this field long enough to have done special effects work on the original Star Wars. EVL researchers have pioneered methods in computer animation, virtual reality and touchscreen displays, and adapted those technologies for use by scientists in academia and industry. But in EVL director Jason Leigh’s talk at the University of Chicago Medical Center on January 29th, the killer app he focused on most was almost as old as those hunter-gatherer ancestral humans: collaboration.

(more…)

Read Full Post »
