Hot Chips: Managing Moore’s Law

By Marc Airhart, College of Natural Sciences
Published: April 15, 2015, on UT Know

This month marks the 50th anniversary of Moore’s Law, the observation that every couple of years, chip manufacturers manage to squeeze twice as many transistors onto a computer chip. Moore’s Law embodies the exponential increase in raw computing power that unleashed a blizzard of tech innovations.
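For a sense of scale, here is a back-of-the-envelope sketch in Python of what that doubling implies. The starting figure of 2,300 transistors (roughly the scale of the earliest microprocessors, circa 1971) and the 44-year horizon are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope projection of Moore's Law:
# transistor counts double roughly every two years.

def transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Illustrative assumption: a 2,300-transistor chip, projected 44 years forward.
print(f"{transistors(2_300, 44):,.0f}")  # ~9.6 billion transistors
```

Twenty-two doublings multiply the count by roughly four million, which is the exponential growth at the heart of the law.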

From the Internet to electronic prosthetic legs to smart phones, many wonderful things became possible because, for decades, each time the number of transistors — the tiny on-off switches that perform calculations and temporarily store information — increased, the resulting chip was faster.

But that all changed about a decade ago.

As the chips became denser, they used more electric power and generated more heat. An Intel executive predicted in 2001 that, unless something changed, chips would run as hot as nuclear reactors by 2010. But long before they reach those temperatures, chips cease to function properly. To work around that problem, chip designers throttled back the voltages driving the chips.

The net result: Even though chips keep getting denser, the parts that do the calculations — the processors — aren’t getting any faster. Chip designers have eked out modest improvements in processing power, but heat remains a central challenge.

Scientists and engineers at The University of Texas at Austin are exploring some clever ways to beat the heat and help make the next great leap in processor speeds. They’re attempting to create low-power transistors, smarter software and chips that can be reconfigured for specific applications.

Less is Moore

Transistors get hot because they use electrons, subatomic particles that carry an electric charge, to perform calculations and transfer information. As electrons flow, they bump into atoms and — just like rubbing two hands together — give off heat.

Allan MacDonald, a professor in the Department of Physics, is hunting for a material in which electrons could essentially slide together, as a group, along a slippery, friction-free path.

“I try to invent new states of matter,” he says, “ones that haven’t been studied before.”

According to theoretical predictions, these states of matter, called exciton condensates, should be possible to cook up. But so far, no one has found the right recipe. MacDonald is a member of the UT Austin-based SouthWest Academy of Nanoelectronics (SWAN), an industry-funded consortium focused on developing new materials for low-power transistors to replace the traditional ones made of silicon. He uses the tools of quantum physics to predict which materials might have the right properties for a better transistor. Other members of the SWAN team try to create these states of matter and see how well their properties match the predictions.

“At the end of the day, you hope to make an impact,” says Sanjay Banerjee, professor in the Department of Electrical and Computer Engineering and director of SWAN. “If I could be part of a team that invented the next transistor, that would be extremely gratifying.”

Another strategy they’re trying exploits the fact that electrons don’t just carry an electric charge, they also have a spin. Just as a spinning top can go clockwise or counterclockwise, an electron in a magnetic field can have one of two different spins. So instead of the zeros and ones of computer logic being represented by electron charges, they could be represented by electron spins. Because the electrons wouldn’t have to flow through an electric circuit, they wouldn’t encounter the same frictional forces as they would in traditional transistors and wouldn’t generate as much heat.

This new class of circuits made using electron spins is known as spintronics. Physicists have found materials with these properties, but turning them into functioning devices has proven devilishly hard.

Close Enough

Another way to keep computer chips cooler, while getting more work done, is through smarter software. Keshav Pingali, a professor in the Department of Computer Science and the Institute for Computational Engineering and Sciences, says that for many applications, such as rendering an image on a cell phone, computers don’t have to be perfectly precise.

“You could render that image precisely, but maybe you could produce an image that looks just as good to the human eye with half the energy,” says Pingali. “If your eye can’t tell the difference, why bother? It’s just a waste of power and energy.”

Other ideas include streamlining Internet search engines so that they produce three or four pages of highly ranked and useful results, plenty for everyday use, without attempting to generate thousands of increasingly less useful results.

Pingali and graduate student Xin Sui are evaluating a dozen computer programs that do a range of tasks, from machine learning to rendering images, in order to find the places where “close enough” still gives a good result and significantly cuts energy consumption and heat.
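One widely studied way to act on “close enough” is loop perforation: deliberately sampling only a fraction of a computation’s data when an exact answer isn’t needed. The Python sketch below is a generic illustration of that idea on assumed, synthetic data; it is not code from Pingali and Sui’s evaluation:

```python
# A generic "close enough" technique: loop perforation.
# Skip a fraction of the data to trade a little accuracy for less work.
# Hypothetical example; not code from Pingali's research.

def mean_brightness(pixels: list[float], skip: int = 1) -> float:
    """Estimate average pixel brightness, sampling every `skip`-th pixel.

    skip=1 examines every pixel (exact); skip=4 does ~1/4 of the work.
    """
    sample = pixels[::skip]
    return sum(sample) / len(sample)

pixels = [((i * 37) % 256) / 255 for i in range(100_000)]  # synthetic image data
exact = mean_brightness(pixels, skip=1)
approx = mean_brightness(pixels, skip=4)  # examines 75% fewer pixels
print(f"exact={exact:.4f}  approx={approx:.4f}")
```

On this synthetic data, the two estimates agree to within a couple of percent while the perforated version touches only a quarter of the pixels, which is exactly the kind of accuracy-for-energy trade the approach looks for.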

Shorter Commutes

To hear Derek Chiou tell it, even the simplest instruction performed on a computer — say adding two numbers together — is a big complicated mess of breaking that instruction down into subtasks, figuring out how to time the subtasks so that they finish in the right order, storing and retrieving little chunks of information scattered across many physical locations, and more.

All this running around is a terrible waste of time and energy. It’s the price we pay for having general-purpose computers, the kind that can run pretty much any kind of software you want.

Chiou, an associate professor in the Department of Electrical and Computer Engineering, says if you were only running one application and could wire together the appropriate functional units in the order in which they are needed, you could cut out a lot of waste. You could design a little widget that implements that application in the most efficient way. You could do things perhaps a thousand times more efficiently than executing the same application on a general-purpose computer. But a specialized chip is expensive to make and it would only run the application it was designed to run.
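As a toy illustration of that gap (purely hypothetical, and not Chiou’s design), compare a tiny general-purpose interpreter in Python with a specialized “widget” that wires the same two operations directly together:

```python
# Toy contrast between general-purpose and specialized computation.
# Purely illustrative; not based on Chiou's actual designs.

# General-purpose: a tiny interpreter that decodes each instruction and
# shuttles operands through a register file.
def run_program(program, registers):
    for op, dst, a, b in program:  # fetch and decode each instruction
        if op == "add":
            registers[dst] = registers[a] + registers[b]
        elif op == "mul":
            registers[dst] = registers[a] * registers[b]
    return registers

# Specialized "widget": the same computation with the functional units
# wired directly together; no decoding, no register traffic.
def widget(x, y):
    return (x + y) * x

regs = {"x": 3, "y": 4, "t": 0, "r": 0}
program = [("add", "t", "x", "y"), ("mul", "r", "t", "x")]
print(run_program(program, regs)["r"])  # 21, via the interpreter
print(widget(3, 4))                     # 21, wired directly
```

The interpreter spends most of its effort on bookkeeping rather than arithmetic; hardware specialization eliminates the analogous overheads, which is where the large efficiency gains come from.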

So Chiou and his colleagues are developing computer systems that are a compromise between these two extremes of completely general-purpose and completely specialized computers.

One way to do that is to implement applications using chips that contain something called reconfigurable logic. You can think of a general-purpose computer chip as a microscopic city with houses, stores, hospitals and fire stations all connected by a series of roads. With a standard computer chip, those roads and buildings are permanent. Thus, if your house and your work are far apart, you are in for a long commute every day.

With reconfigurable logic, you can put different buildings and roads on any piece of land so that cars (or data) flow more efficiently. You could rearrange where your home is relative to your work to minimize your commute.

Chiou, currently on leave from the university, is working at Microsoft using reconfigurable logic to improve the speed and energy efficiency of the company’s data centers.

Fast Enough?

You might well ask, “Why should I care if computers keep getting faster? They’re fast enough to type documents, play video games and watch streaming videos.”

“Sure, for these conventional applications, today’s computers are fast enough,” acknowledges Banerjee. “But what people don’t realize is that, as you go up the performance curve, new applications become possible.”

He points to efforts to emulate an entire human brain inside a computer, develop automated image-recognition systems, improve weather forecasts and create new medical diagnostics. He also speaks of the “Internet of Things,” a vision for the future in which many of the everyday objects around us have the ability to sense the environment, communicate with each other and work together for the benefit of society.

Banerjee says these applications — and many more we can’t even begin to dream of — will rely on even more compact and speedy computers.

Stretched Thin

Last February, UT Austin researchers announced the creation of the first transistors made of silicene, an atom-thin form of silicon. Electrons can cruise through silicene without encountering as many obstacles as in thicker blocks of silicon, which could lead to dramatically faster and more energy-efficient computer chips. But challenges remain, according to team leader Deji Akinwande. Silicene is notoriously difficult to work with because of its complexity and instability when exposed to air.

Lab to Market

In 2012, the university established a center to develop new ways to manufacture high-tech products such as energy-efficient computer chips, implantable medical devices, wearable sensors or flexible computers and batteries. The National Science Foundation awarded the university an $18.5 million grant over five years to create and lead the Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT) center. Roger Bonnecaze and S.V. Sreenivasan, professors in the Cockrell School of Engineering, lead the center. The overarching goal is to create high-speed, low-cost nanomanufacturing systems and take these advances from the lab to the marketplace.