The Power of Parallels

Published: Jun 21, 2009


UMBC’s Multicore Computational Center (MC2) unleashes new energies in the race to make computers faster.

By Joab Jackson ’90

Here’s a Silicon Valley secret: Computer microprocessors aren’t getting any faster. The limits of clock speed have been reached. If chips ran any speedier, they’d melt their cases. And all those snazzy new desktop and laptop computers on sale at the local consumer electronics store? They seem sprightlier not because they have faster chips, but because they have more of them. New computers usually come with either two or four processors, or “cores.”

And even then the difference in speed may not be all that impressive. Computer makers are learning what restaurant owners already know: Hiring a second cook doesn’t mean pasta will boil more quickly.

The exciting news is that UMBC is on the cutting edge of a discipline, called parallel programming, that may show Silicon Valley – and computer programmers worldwide – how to speed things up.

Parallel processing is a way for computers to spread their applications across multiple processors so that they will run faster. Exploring how to do it – or parallel programming – is the mission of UMBC’s Multicore Computational Center, or MC2.

In July 2007, IBM gave UMBC computer science professors Milton Halem and Yelena Yesha a grant to launch the center with cash and equipment that have totaled more than $1 million over the past three years. Supporting funding from NASA also helped the effort.

“Not only are we ahead of the curve,” says Charles Nicholas, chair of the department of computer science and electrical engineering, “but we hope to stay ahead of the curve…. The partnerships with IBM will let us keep the technologies up to date.”

Halem says that government and private enterprise are in dire need of “trained graduate students who know how to apply the new methods of parallel programming to the problems they face.” “We’re one of the few schools in the nation that is teaching these courses,” he adds.

Researchers who are using the MC2 are also excited by the chance to map out this hitherto unexplored territory in processing.

“We don’t have a complete picture of how it will work, but it is definitely the trend we’re trying to catch,” says Yesha. “And UMBC is definitely on the frontier of this development.”

Blades and Building

For all the potential power that it offers, the MC2 is not meant to dazzle the naked eye. The center is buried deep in the Information Technology/Engineering (ITE) building on UMBC’s campus in a windowless room. Inside that space are stacked clusters of about 50 IBM ultra-thin computers, called blade servers.

IBM donated a number of these high-powered computers, which contain an innovative chip called the Cell Broadband Engine. The Cell chip, which also powers the PlayStation 3, is actually a collection of nine processors in a single package: one general-purpose core plus eight specialized number-crunching cores. It also serves as a good introduction for students to parallel processing.

The center was the brainchild of Yesha and Halem, who envisioned a powerful center to spur research and teaching. Its establishment, says Yesha, is a tangible sign of how far UMBC has come in the past 20 years – since the days when students logged on to the school’s VAX mainframe system by going into the basement of the library and grabbing a seat behind a monochrome terminal.

Yesha arrived at UMBC in 1989 as an assistant professor after getting her Ph.D. in computer science at the Ohio State University. Once she settled in, Yesha honed her skills in the sub-discipline of distributed systems, or computing systems tied together from multiple, and sometimes geographically dispersed, components. In 1994, she started to lend a hand to NASA’s Goddard Space Flight Center, located a few miles south of the UMBC campus, as director of the Center of Excellence in Space Data and Information Sciences (CESDIS) unit.

Her mission? Solving large problems with parallel programming. “I appreciated the power of big supercomputers, and worked with supercomputers for a number of years,” she says.

It was at CESDIS that Yesha met Halem, who was the assistant chief information research scientist and chief information officer at Goddard. Halem also saw the potential power of parallel processing, having overseen the construction of one of the first supercomputers built entirely from thousands of processors: the Massively Parallel Processor, manufactured by Goodyear Aerospace. (It is now in the Smithsonian.)

When Halem retired from NASA in 2002, he signed on at UMBC to teach and continue his research. Of like minds, Halem and Yesha won grants and government awards to expand research into parallel programming, with MC2 as the culmination of this work.

All Together Now

Computer industry companies such as Intel and Microsoft have also begun to fund research into parallel programming, but computer science research faculty member John Dorband, who is MC2’s chief computational scientist, bluntly says that “the results are rather mediocre.”

As the former head of system software research for the NASA Goddard Space Flight Center, Dorband is entitled to talk some smack. He also knows a thing or two about how to get computers to pull together as a single entity.

In the early 1990s, while Dorband was working at CESDIS, he and two other colleagues refined a way to lash low-cost computers together so that they work in harmony as a single machine. Called “Beowulf Clustering,” the approach can be used to build machines as powerful as the dedicated supercomputers used for weather forecasting and other humongous jobs, but at a fraction of a supercomputer’s multi-million-dollar price tag.

It is hard to overestimate the influence Beowulf has had in supercomputing. Today, over 80 percent of the world’s 500 most powerful supercomputers are clusters. And the center is hoping to apply the lessons that Dorband and others have learned in supercomputing to making more common applications, such as your spreadsheet or e-mail reader, run faster.

Using multiple processors at once can be a challenge for several reasons, Dorband explains. For one, you don’t want to break up the job in such a way that any gains in speed are offset by the additional work needed to manage the job across all the processors. Also, how do you get two different processors to share the results of their computations with one another?
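The trade-off Dorband describes can be sketched in a few lines of Python. This is an illustration of the general pattern, not MC2 code: a job is split into chunks, each chunk is handed to a worker process, and the partial results are communicated back and combined.

```python
# A minimal sketch of splitting a job across processor cores.
# If the chunks are too small, the overhead of managing the workers
# outweighs the speedup -- exactly the trap Dorband warns about.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes its piece of the job independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # The pool sends each chunk to a process and collects the
        # partial results -- this is the communication step.
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(100_000))))
```

The answer is the same as a single-processor loop would produce; whether it arrives faster depends on whether each chunk carries enough work to justify the cost of farming it out.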

These are the types of tricky problems that students – and their professors – tackle at the MC2.

From Master to Student

The Multicore Computational Center is already having a big impact on teaching at UMBC. The university offers a number of parallel programming classes with hands-on experience – including an elective class for undergraduates.

“Basically we are creating people who are able to take advantage of this thing. There are very few experts in this field,” Yesha says.

Nicholas concurs, observing that MC2’s presence on campus is “giving students access to hardware and to interesting problems that otherwise they wouldn’t have come across.”

The MC2 has also come in handy in research. For David Chapman, a UMBC graduate student, the center has been an invaluable resource for learning how to work on very large datasets. Chapman’s latest research project will show how using the Cell processors could help the search engine giant Google – and other search engines – index the Web more quickly.

“The architecture is very different. It forces the programmer to design the problem around the hardware,” Chapman says of the Cell processor. “If you write your code one way, it runs slower than a regular processor, but if you rewrite it for the machine, it runs 100 times faster. It’s a tricky thing.”

UMBC researchers also use the MC2’s cluster for numerous scientific and mathematical projects. The muscle of many processors has been especially useful for those projects that involve summarizing complex relations within sprawling data sets.

For instance, Halem led an effort to develop a system that would analyze large sets of infrared earth imagery to show how the climate changes in a given region over a period of time. Such work, done in conjunction with NASA Goddard, has already helped better characterize recent fluctuations in global temperature. Another UMBC professor, Tim Finin, plans to use the cluster to analyze how people interact with each other across the hundreds of thousands of Web logs (or blogs) on the Internet. Who takes the lead in spurring online communication?

In particular, Finin is looking to develop ways of having a computer automatically identify who the most influential individuals are across many different communities, from knitting aficionados to wine lovers. When people post and comment and point to each other’s blogs, they leave behind links. These links allow Finin and his team to identify leading members of these groups by looking to where the links point.

This is easy to do with a few blogs, but for thousands, the work can expand rapidly. To handle it, the group uses matrix multiplication, a tedious process of multiplying one large group of numbers by another large group of numbers.

Fortunately, it is a problem that can be broken into smaller subsets – one for each computer core. Now Finin’s research group wants to expand the research to hundreds of thousands of blogs. For this, the multicore computers would be essential.
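The reason matrix multiplication parallelizes so naturally is that each row of the result can be computed without knowing anything about the other rows. A toy sketch (not the research group’s actual code) of splitting the rows into bands, one band per core:

```python
# Sketch: parallel matrix multiplication by splitting the rows of
# matrix a into bands and computing each band's rows of the product
# on a separate core. Matrices are plain lists of lists.
from multiprocessing import Pool

def multiply_rows(args):
    rows, b = args
    # Each worker multiplies its band of rows by the full matrix b.
    return [[sum(r[k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for r in rows]

def parallel_matmul(a, b, workers=2):
    band = max(1, len(a) // workers)
    bands = [a[i:i + band] for i in range(0, len(a), band)]
    with Pool(workers) as pool:
        results = pool.map(multiply_rows, [(rows, b) for rows in bands])
    # Stitch the bands back together into the full product.
    return [row for band_result in results for row in band_result]
```

Because no worker ever needs another worker’s rows, the only communication is shipping the bands out and the finished rows back, which is why the technique scales to hundreds of thousands of blogs.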

Another project Finin has embarked on with the MC2 involves helping computers reason more deeply about human writing and speech. Can a computer tell that a newspaper article is about basketball star Michael Jordan – and not about Michael Jordan, the English soccer goalkeeper?

Finin is developing a way for computers to use a large body of descriptive text, such as an online encyclopedia, to build up vocabulary and make meaningful relations between different sets of words, such as “Michael Jordan” and “basketball.”
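The underlying idea can be illustrated in miniature. The snippet below is a toy, not Finin’s system, and the reference texts are invented stand-ins for encyclopedia entries: each sense of an ambiguous name is scored by how many of its descriptive words appear alongside the name.

```python
# Toy word-sense disambiguation: pick the sense of "Michael Jordan"
# whose descriptive vocabulary best overlaps the sentence at hand.
# The SENSES texts are hypothetical stand-ins for encyclopedia entries.
SENSES = {
    "basketball player": "jordan bulls nba basketball chicago guard",
    "soccer goalkeeper": "jordan chelsea goalkeeper soccer football england",
}

def disambiguate(sentence):
    words = set(sentence.lower().split())
    # Score each sense by counting its descriptive words in the sentence.
    scores = {sense: len(words & set(text.split()))
              for sense, text in SENSES.items()}
    return max(scores, key=scores.get)
```

With real encyclopedia-scale vocabularies and millions of articles, scoring every sense against every mention becomes exactly the kind of bulk computation a multicore cluster absorbs well.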

“These uses were not possible before without having access to this computational power,” Yesha says. Knowing a thing or two about working in tandem, the center’s managers are now banding with the Georgia Institute of Technology, the University of California San Diego, and the University of Minnesota to start a multi-institution “National Center of Excellence” for parallel programming. The group has applied to the National Science Foundation for funding of this project. If this partnership moves forward, it will no doubt push the frontiers of parallel processing even further.

“The uniqueness of the UMBC facility is not just the iron and the configuration, but also the intellect and the brain power we have in terms of our staff,” Yesha says. “They know how to not only configure and operate it, but also take advantage of it in a number of different dimensions.”
