
Archive for the ‘Biology’ Category


Finding a better way to fight cancer doesn’t always mean discovering a new drug or surgical technique. Sometimes just defining the disease in greater detail can make a big difference. A more specific diagnosis may allow a physician to better tailor a patient’s treatment, using available therapies proven to work better on a specific subtype of disease or avoiding unnecessary complications for less aggressive cases.

“Finding better ways to stratify kids when they present and decide who needs more therapy and who needs less therapy is one of the ways in which we’ve gotten much better at treating pediatric cancer,” said Samuel Volchenboum, Computation Institute Fellow, Assistant Professor of Pediatrics at Comer Children’s Hospital and Director of the UChicago Center for Research Informatics. “For example, kids can be put in one of several different groups for leukemia, and each group has its own treatment course.”

Classically, patients have been sorted into risk or treatment groups based on demographic factors such as age or gender, along with relatively simple results from laboratory tests or biopsies. Because cancer is a genetic disease, physicians hope that genetic factors will point the way to even more precise classifications. Yet despite this promise, many of the “genetic signatures” found to correlate with different subtypes of cancer are too complex for clinical use – involving dozens or hundreds of genes – and difficult to validate across patient populations.
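The basic idea of a signature-based classifier can be illustrated with a toy example. The sketch below is plain Python with made-up gene names and expression values — none of it is drawn from the research described here — and it simply assigns a patient profile to the subtype whose mean expression profile is nearest. Real signatures span dozens or hundreds of genes, which is part of what makes them hard to deploy and validate.

```python
# Minimal nearest-centroid sketch of a gene-signature classifier.
# All expression values and subtype labels here are hypothetical.

def centroid(samples):
    """Mean expression profile of a list of samples (lists of floats)."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(sample, centroids):
    """Assign a sample to the subtype whose centroid is nearest (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical training profiles over three genes for two subtypes.
subtype_a = [[2.1, 0.3, 1.0], [1.9, 0.4, 1.2]]
subtype_b = [[0.2, 2.5, 0.9], [0.4, 2.2, 1.1]]
centroids = {"low-risk": centroid(subtype_a), "high-risk": centroid(subtype_b)}

print(classify([2.0, 0.5, 1.1], centroids))  # → low-risk (nearest centroid)
```

With three genes the rule is easy to inspect; with hundreds, checking that the same boundaries hold across new patient populations becomes the hard part.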


Read Full Post »


By Kevin Jiang, University of Chicago Medicine

Just 12 molecules of water cause the long post-activation recovery period required by potassium ion channels before they can function again. Using molecular simulations that modeled a potassium channel and its immediate cellular environment, atom for atom, University of Chicago scientists have revealed this new mechanism in the function of a nearly universal biological structure, with implications ranging from fundamental biology to the design of pharmaceuticals. Their findings were published online July 28 in Nature.

“Our research clarifies the nature of this previously mysterious inactivation state. This gives us better understanding of fundamental biology and should improve the rational design of drugs, which often target the inactivated state of channels,” said Benoît Roux, PhD, professor of biochemistry and molecular biology at the University of Chicago and senior fellow at the Computation Institute.

Potassium channels, present in the cells of virtually all living organisms, are core components in bioelectricity generation and cellular communication. Required for functions such as neural firing and muscle contraction, they serve as common targets in pharmaceutical development.


Read Full Post »

Forest of synthetic pyramidal dendrites grown using Cajal’s laws of neuronal branching (Wikimedia Commons)

Trauma surgeons know how to fix gunshot wounds, lacerations and broken bones. It’s what comes afterwards that really worries them. Even after the initial injury is treated, patients are at risk for secondary issues such as infection, sepsis and organ failure. While the biological pathways involved in these processes have been well studied and characterized, effective interventions to reliably stop the dangerous cascade have yet to be discovered.

“It was very frustrating for me to not have the drugs and tools necessary to fix what I thought was actually going wrong with those patients,” said trauma surgeon and CI senior fellow Gary An, in his University of Chicago Alumni Weekend UnCommon Core talk. “Often we know what will happen, but we have no way to stop it.”

The current fashionable approach to such intractable problems in medicine and other fields is Big Data, where answers hiding in massive datasets will be uncovered by advanced analytic methods. But quoting Admiral Ackbar, An warned the audience that this approach alone “is a trap,” generating a multitude of correlations and hypotheses that don’t always translate into real world applications.

“What it wants to appeal to is magic…if you can get enough data and a big powerful computer, an answer will magically appear,” said An, an associate professor of surgery at University of Chicago Medicine. “That’s fine if you want to diagnose or characterize. But if we want to engineer interventions to be able to manipulate systems, we need to have presumptions of mechanistic causality; we need to be able to test hypotheses.”


Read Full Post »

Graphic by Ana Marija Sokovic

When Charles Darwin took his historic voyage aboard the HMS Beagle from 1831 to 1836, “big data” was measured in pages. On his travels, the young naturalist produced at least 20 field notebooks, zoological and geological diaries, a catalogue of the thousands of specimens he brought back and a personal journal that would later be turned into The Voyage of the Beagle. But it took more than two decades for Darwin to process all of that information into his theory of natural selection and the publication of On the Origin of Species.

While biological data may have since transitioned from analog pages to digital bits, extracting knowledge from data has only become more difficult as datasets have grown larger and larger. To wedge open this bottleneck, the University of Chicago Biological Sciences Division and the Computation Institute launched their very own Beagle — a 150-teraflop Cray XE6 supercomputer that ranks among the most powerful machines dedicated to biomedical research. Since the Beagle’s debut in 2010, over 300 researchers from across the University have run more than 80 projects on the system, yielding over 30 publications.

“We haven’t had to beat the bushes for users; we went up to 100 percent usage on day one, and have held pretty steady since that time,” said CI director Ian Foster in his opening remarks. “Supercomputers have a reputation as being hard to use, but because of the Beagle team’s efforts, because the machine is well engineered, and because the community was ready for it, we’ve really seen rapid uptake of the computer.”

A sampler of those projects was on display last week as part of the first Day of the Beagle symposium, an exploration of scientific discovery on the supercomputer. The projects ranged from the very big — networks of genes, regulators and diseases built by UIC’s Yves Lussier — to the very small — atomic models of molecular motion in immunological factors, cell structures and cancer drugs. Beagle’s flexibility in handling projects from across the landscape of biology and medicine ably demonstrated how computation has solidified into a key branch of research in these disciplines alongside traditional theory and experimentation.


Read Full Post »

CGI for Science

An image from a model of how endophilin sculpts membrane vesicles into a network of tubules. (Mijo Simunovic/CMTS)

Computer graphics have greatly expanded the possibilities of cinema. Special effects using CGI (computer-generated imagery) today enable directors to shoot scenes that were once considered impossible or impractical, from interstellar combat to apocalyptic action sequences to fantastical digital characters that realistically interact with human actors.

In science, computer graphics are also creating sights that have never been seen before. But where movie special effects artists are realizing the vision of a screenwriter and director, scientific computer models are inspiring new discoveries by revealing a restless molecular world we cannot yet see with the naked eye.

Using computers to peer into this hidden universe was the theme of CI faculty and senior fellow Gregory Voth‘s Chicago Council on Science and Technology talk last week, titled Molecular Modeling: A Window to the Biochemical World. Scientists at Voth’s Center for Multiscale Theory and Simulation use computers to recreate real-world physics and produce awe-inspiring, intricate images, pushing the frontiers of discovery one femtosecond and nanometer at a time.

[Some of those images, including the one above by Mijo Simunovic, were on display as a “Science as Art” gallery, which you can view in a slideshow here.]

“The computer simulation allows us to make a movie, if you will, but it’s a movie describing what the laws of physics tell us,” Voth said. “It’s not a movie where we tell the computer we want this figure to run and shoot this figure. We don’t know what’s going to happen. We know the equations, we feed them in [to a supercomputer], and we solve those equations…and we can reach scales we never dreamed of reaching before.”
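The workflow Voth describes — feed in the equations, step them forward, watch what unfolds — can be sketched in miniature. The toy integrator below is not the group’s simulation code: it tracks a single particle on a spring rather than a solvated protein. But it uses the velocity-Verlet scheme common in molecular dynamics, advancing Newton’s equations of motion one small time step at a time.

```python
# A toy version of what a molecular dynamics engine does: integrate the
# equations of motion in tiny time steps. Real channel simulations track
# hundreds of thousands of atoms with full force fields; here the "system"
# is one particle with force = -k * x (a spring).

def velocity_verlet(x, v, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Advance position x and velocity v through `steps` time steps."""
    a = -k * x / m                        # acceleration from the force law
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # update position
        a_new = -k * x / m                # recompute force at new position
        v += 0.5 * (a + a_new) * dt       # update velocity with averaged force
        a = a_new
    return x, v

# Total energy (0.5*k*x^2 + 0.5*m*v^2) should stay nearly constant over the
# run -- the hallmark of a stable integrator, and the reason schemes like
# this one are trusted for very long simulations.
x, v = velocity_verlet(1.0, 0.0)
print(0.5 * x * x + 0.5 * v * v)  # stays close to the initial energy of 0.5
```

Scaling this loop from one particle to an entire potassium channel and its surrounding water and lipids is precisely what requires a supercomputer.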


Read Full Post »

“We know more about the movement of celestial bodies than about the soil underfoot.”

Leonardo da Vinci never gave a TED talk, but if he did, that quote from around the beginning of the 16th century might have been a good tweetable soundbite. Five centuries later, da Vinci’s statement still holds true, and it was there for CI Senior Fellow Rick Stevens to pluck as the epigraph for his talk in November 2012 at the TEDxNaperville conference. Stevens used his 18 minutes on the TED stage to talk about the Earth Microbiome Project, an international effort “to systematically study the smallest life forms on earth to build a comprehensive database to capture everything we can learn about these organisms.”

Stevens talks about how little we know about the estimated 1 billion species of microbes on Earth (“In one kilogram of soil there are more microbes than there are stars in our galaxy,” he says), and how citizen science, high-throughput genomics and supercomputing are coming together to finally reveal this vast ecosystem — a process he likens to reconstructing the front page of the newspaper using only the firehose of Twitter and Facebook posts. In 5-10 years, Stevens says, microbiology will finally exceed astronomy in the scale of its data, with enormous implications for our understanding of the world around us.

You can watch video of Stevens’ talk below:

Read Full Post »


When more dimensions are added, food webs quickly grow more complex. (From Eklöf et al, 2013)

Ecosystems are a chaotic battle royale, with predators and prey, plants and animals, competitors and allies all fighting it out to eat or be eaten. But the food webs scientists typically put together are deceptively tidy diagrams, with simple arrows connecting diners to their natural food options. Ecologists readily admit that a true representation of an ecosystem’s network would be multi-dimensional, simultaneously taking into account multiple traits for each species involved. But just how many dimensions would such a model need to accurately depict the complexity of a large ecosystem? 10? 100? 1000?

In a new paper published this week in Ecology Letters, a team led by scientists at the Computation Institute and University of Chicago calculates that number – and finds that it is surprisingly low. Using data collected by their co-authors on 200 different food webs, ranging from the Caribbean reef to New Zealand grasslands to an Arctic Ocean inlet, Anna Eklöf, Stefano Allesina and colleagues looked for the minimum number of dimensions and traits needed to accurately describe a food network. The findings may save ecologists time and effort in revealing the structure underlying an ecosystem, and also help scientists build computational models that can make predictions about an ecosystem’s future.
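One way to make the dimensionality question concrete — a simplified sketch, not the authors’ actual method or data — is the one-dimensional case: can the species be ordered along a single trait axis so that every predator’s diet forms one contiguous interval? The tiny hypothetical web below can be; webs where no such ordering exists need more dimensions.

```python
# Toy check of 1-D "intervality" for a food web: brute-force every ordering
# of species and test whether each predator's prey occupy consecutive
# positions. The five-species web here is invented for illustration.
from itertools import permutations

def is_interval(order, diets):
    """True if, under this species ordering, every diet is contiguous."""
    pos = {sp: i for i, sp in enumerate(order)}
    for prey in diets.values():
        idx = sorted(pos[p] for p in prey)
        if idx and idx[-1] - idx[0] != len(idx) - 1:
            return False
    return True

def one_dimensional(species, diets):
    """Does any 1-D ordering of the species make all diets intervals?"""
    return any(is_interval(order, diets) for order in permutations(species))

# Hypothetical web: each predator maps to the set of prey it eats.
species = ["alga", "zooplankton", "shrimp", "minnow", "bass"]
diets = {"zooplankton": {"alga"},
         "shrimp": {"alga", "zooplankton"},
         "minnow": {"zooplankton", "shrimp"},
         "bass": {"shrimp", "minnow"}}
print(one_dimensional(species, diets))  # → True: the listed order works
```

Brute force only works for toy webs — permutations explode factorially — which hints at why measuring the effective dimensionality of 200 real food webs is a serious computational problem.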

“To collect this kind of data takes ages to do,” said Eklöf. “If we can find some common rules about these networks, then we can apply them to larger networks. We can also learn about the function of networks, and what happens to networks when we disturb them in different ways.”

Read Full Post »
