
Archive for the ‘Imaging-Based Science’ Category

John Lafferty in the Mansueto Library. (Photo by Jason Smith)

Learning a subject well means moving beyond the recitation of facts to a deeper knowledge that can be applied to new problems. Designing computers that can transcend rote calculation and reach a more nuanced understanding has challenged scientists for years. Only in the past decade have researchers’ flexible, evolving algorithms—known as machine learning—matured from theory to everyday practice, underlying search and language-translation websites and the automated trading strategies used by Wall Street firms.

These applications only hint at machine learning’s potential to affect daily life, according to John Lafferty, the Louis Block Professor in Statistics and Computer Science. With his two appointments, Lafferty bridges these disciplines to develop theories and methods that expand the horizon of machine learning to make predictions and extract meaning from data.

“Computer science is becoming more focused on data rather than computation, and modern statistics requires more computational sophistication to work with large data sets,” Lafferty says. “Machine learning draws on and pushes forward both of these disciplines.”

The robotic system in action (from Vision Systems Design)

Among scientific disciplines, botany might be considered one of the least tech-minded branches, concerned as it is with the natural world of plant life. But like the rest of biology, botany is quickly moving into the kinds of large-scale experiments that require more sophisticated techniques. In many botany labs, high-throughput sequencers generate genomic data at unprecedented rates for many different plant species. This genetic bounty creates a new bottleneck, however, as the complementary studies examining how those genes control plant traits still proceed at a speed closer to that of old-fashioned fieldwork. The old cliché “watching the grass grow” is not compatible with fast-paced science.

To help bring phenotype closer to the pace of genotype, Nicola Ferrier has equipped botanists with powerful new lab assistants: robots. Ferrier, now an engineer at Argonne National Laboratory, worked with University of Wisconsin botanists to design better equipment for monitoring the growth of the plant species Arabidopsis thaliana. Arabidopsis is a small flowering plant popular as a laboratory model species, in part because of its relatively small genome of roughly 27,000 genes. Eventually, scientists would like to determine the role each of those genes plays in aspects of the plant’s phenotype, such as root gravitropism: how roots grow in response to gravity.

But the popular method – a computer-controlled camera monitoring the growth of one Arabidopsis seedling at a time – was far too slow to screen tens of thousands of mutants. So Ferrier helped the laboratory of Edgar Spalding replace its single-camera system with a “robotic machine vision platform” capable of monitoring up to 144 seedlings simultaneously as their orientation relative to gravity is artificially changed (by rotating the dish 90 degrees).
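The post does not spell out the image analysis behind such a platform, but the core measurement – how far a root tip bends toward the new direction of gravity – can be sketched in a few lines. The Python sketch below is illustrative only: it assumes the seedling image has already been segmented into a binary root mask, and the tip-window size is an arbitrary choice, not the Spalding lab’s actual pipeline.

```python
# Minimal, illustrative sketch (not the Spalding lab's pipeline): estimate how
# far a root tip has bent away from vertical after the dish is rotated.
# Assumes a pre-segmented 2D boolean mask where True marks root pixels.

import numpy as np

def root_tip_angle(mask: np.ndarray, tip_window: int = 50) -> float:
    """Return the root tip's angle from vertical, in degrees.

    mask: 2D boolean array with row 0 at the top of the image
    tip_window: number of root pixels nearest the tip used for the line fit
    """
    rows, cols = np.nonzero(mask)
    if rows.size < 2:
        raise ValueError("mask contains too few root pixels")

    # Treat the lowest root pixels (largest row indices) as the tip region.
    order = np.argsort(rows)[-tip_window:]
    tip_rows, tip_cols = rows[order], cols[order]

    # Fit col = a * row + b through the tip region; the slope 'a' measures
    # horizontal drift per unit of downward growth.
    a, _ = np.polyfit(tip_rows, tip_cols, 1)

    # 0 degrees means the tip grows straight down, 90 means horizontal.
    return float(np.degrees(np.arctan(abs(a))))

# Example: a synthetic root that grows straight down, then bends near the tip.
mask = np.zeros((200, 200), dtype=bool)
mask[0:100, 100] = True                     # vertical segment
r = np.arange(100, 150)
mask[r, 100 + (r - 100)] = True             # 45-degree bend near the tip
print(f"tip angle ~ {root_tip_angle(mask):.1f} degrees from vertical")
```

Applied to time-lapse frames from all 144 seedlings, a measurement like this would turn hours of footage into a handful of bending-angle curves per mutant, which is the kind of throughput the single-camera setup could not deliver.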

Belshazzar’s Feast (Rembrandt, circa 1635-1638)

They say a picture is worth a thousand words. But if your camera is good enough, the photos it takes could also be worth billions of data points. As digital cameras grew increasingly popular over the last two decades, their image resolution also increased exponentially. The highest-end cameras today claim 50-gigapixel resolution, meaning they can capture images made up of 50 billion pixels. Many of these cameras are so advanced that they have outpaced the resolution of the displays used to view their images – and the ability of humans to find meaningful information within their borders.
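Some back-of-the-envelope arithmetic makes that gap concrete. The short Python sketch below compares a 50-gigapixel image against a standard 4K monitor and an uncompressed 24-bit color depth; the monitor resolution and bit depth are common but assumed figures, not numbers from the post.

```python
# Rough arithmetic for the display gap: assumed 4K monitor and 24-bit color.

GIGAPIXELS = 50
pixels = GIGAPIXELS * 10**9                # 50 billion pixels

bytes_per_pixel = 3                        # 24-bit RGB, uncompressed
raw_size_gb = pixels * bytes_per_pixel / 1e9
print(f"Uncompressed size: ~{raw_size_gb:.0f} GB")            # ~150 GB

display_pixels = 3840 * 2160               # one 4K monitor
screens = pixels / display_pixels
print(f"4K screens needed to show every pixel at once: ~{screens:,.0f}")
# ~6,000 screens: far more than any viewer or display wall can take in,
# which is why software has to point humans at the interesting regions.
```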

Closing this gap was the focus of Amitabh Varshney’s talk for the Research Computing Center’s Show and Tell: Visualizing the Life of the Mind series in late February. Varshney, a professor of computer science at the University of Maryland, College Park, discussed the visual component of today’s big data challenges and the solutions that scientists are developing to help extract maximum value out of the new wave of ultra-detailed images – a kind of next-level Where’s Waldo? search. The methods he described combine classic psychology about how vision and attention work in humans with advanced computational techniques.

As the centerpiece of the talk, Varshney displayed a 5-gigapixel photo of Mt. Whitney in California. If you knew where to look, the amount of detail was incredible – Varshney could zoom in thousands of times on a given region of the photograph to reveal a group of hikers or a bear walking up the side of a mountain. But when you don’t already know what interesting information such a complex image contains, the search becomes tedious and frustrating as you zoom in and laboriously inspect one region after another.
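The post does not detail Varshney’s own algorithms, but one classic ingredient of computational attention models is a center-surround saliency map: regions whose local appearance differs sharply from their wider surroundings are flagged as places worth zooming into first. The following Python sketch is a minimal illustration of that idea; the blur scales and the toy scene are assumptions for demonstration, not anything from the talk.

```python
# Minimal center-surround saliency sketch, an illustration of a classic
# computational-attention cue rather than Varshney's method. A pixel scores
# highly when its neighborhood differs strongly from the wider surround.

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(gray: np.ndarray,
                             center_sigma: float = 2.0,
                             surround_sigma: float = 16.0) -> np.ndarray:
    """Return a saliency map in [0, 1] for a 2D grayscale image."""
    center = gaussian_filter(gray.astype(float), center_sigma)
    surround = gaussian_filter(gray.astype(float), surround_sigma)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-12)   # normalize to [0, 1]

# Example: a flat "landscape" with one small bright patch (a group of hikers,
# say) should light up around the patch.
scene = np.zeros((256, 256))
scene[120:126, 200:206] = 1.0
sal = center_surround_saliency(scene)
row, col = np.unravel_index(np.argmax(sal), sal.shape)
print(f"most salient location: row={row}, col={col}")
```

On a genuine gigapixel image, a map like this would be computed tile by tile and used to rank candidate regions for a human to inspect, instead of asking the viewer to scan the entire frame by hand.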
