Friday, February 25, 2005

Tech trends will topple tradition

By Ron Wilson
EE Times

January 10, 2005 (9:00 AM EST)


OUTLOOK 2005
Where to look for earthshaking technology developments? Probably the best place to start is with the roadblocks that appear to stand in the way of traditional progress. There seem to be three of them, which the industry is approaching at considerable velocity. One is the diminishing progress in making CPUs faster. Another is the inability of manufacturing to keep up with the exponential growth in the complexity of systems. And the third is the seemingly insurmountable barrier between microelectronic and living systems.

For several years, there has been a grassroots movement to tackle supercomputing problems on multiprocessing systems in which "multiple" means thousands, or even millions, of machines. Welcome to the world of peer computing.

The concept is disarmingly simple. There are millions of PCs, workstations and servers in the world, most of which sit unconscionably idle most of the time. If pieces of an enormous computing task could be dispatched over the Internet to some of these machines — say, a few tens of thousands — and if the pieces ran in the background, so that the users weren't inconvenienced, a lot of computing work could be done essentially for free.
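To make the model concrete, here is a toy sketch in Python of the coordinator-and-volunteer pattern described above. Everything in it (the chunking scheme, the stand-in analysis, the single simulated volunteer) is invented for illustration; real grid systems add scheduling, authentication, redundancy and result verification on top of this skeleton.

    import queue

    def make_work_units(dataset, chunk_size):
        """Split a large job into independent work units."""
        return [dataset[i:i + chunk_size]
                for i in range(0, len(dataset), chunk_size)]

    def volunteer(work_queue, results):
        """One idle peer: pull a unit, process it in the background, report back."""
        while True:
            try:
                unit_id, chunk = work_queue.get_nowait()
            except queue.Empty:
                return
            results[unit_id] = sum(chunk)  # stand-in for the real computation

    work_queue = queue.Queue()
    for unit_id, chunk in enumerate(make_work_units(list(range(1000)), 100)):
        work_queue.put((unit_id, chunk))

    results = {}
    volunteer(work_queue, results)  # in practice, thousands of peers run this loop
    print(sum(results.values()))    # 499500, reassembled from the pieces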

This is exactly the way the Search for Extraterrestrial Intelligence at Home (SETI@home) project works. Most of the people who run the SETI@home client are volunteers. But there are also commercial uses of grid networks, as such Internet-linked communities of computers are known. United Devices (Austin, Texas), which provided the supervisory software for SETI@home, is a commercial enterprise that sells grid-computing systems to corporate clients.

Of course, there is fine print in the tale, too. One obvious issue is that massive networks of loosely coupled computers are useful only if the application lends itself to massive parallelism.

These are the applications that Gordon Bell, senior researcher at Microsoft Corp.'s Bay Area Research Center, calls "embarrassingly parallel." In the SETI program, for instance, essentially the same relatively simple calculations are being performed on enormous numbers of relatively small data sets. The only communication necessary between the peer computer and the supervisor, once the data is delivered to the peer, is a simple "Yes, this one is interesting" or "Nope." The application is ideal for a loosely coupled network of peers.
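A minimal sketch of such an "embarrassingly parallel" job, assuming an invented signal test and threshold: the same cheap calculation runs on many independent data sets, and each answer is a bare yes-or-no, so the peers never need to talk to one another.

    from multiprocessing import Pool

    THRESHOLD = 0.9  # hypothetical cutoff for "interesting"

    def analyze(work_unit):
        """Run the same simple test on one small, independent data set."""
        unit_id, samples = work_unit
        score = max(samples)  # stand-in for a real signal search
        return unit_id, score > THRESHOLD

    if __name__ == "__main__":
        units = [(i, [((i * 37 + j) % 100) / 100.0 for j in range(64)])
                 for i in range(10000)]
        with Pool() as pool:
            flagged = [uid for uid, hit in pool.map(analyze, units) if hit]
        print(f"{len(flagged)} of {len(units)} units were interesting")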

Stray from that ideal situation, though, and things start to get complicated. Bell pointed out that bandwidth is so limited in wide-area networks, and latency so large and unpredictable, that any need for tight coupling between the peers renders the approach impractical. And of course, each task has to be small enough to run unobtrusively in the background on its peer system.

Is it possible to work around these limitations? Bell was guardedly pessimistic. "After two decades of building multicomputers — aka clusters that have relatively long latency among the nodes — the programming problem appears to be as difficult as ever," Bell wrote in an e-mail interview. The only progress, he said, has been to standardize on Beowulf — which specifies the minimum hardware and software requirements for Linux-based computer clusters — and MPI, a standard message-passing interface for them, "as a way to write portable programs that help get applications going, and help to establish a platform for ISVs [independent software vendors]."
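For readers who have not met MPI, a minimal scatter-and-reduce sketch using the mpi4py bindings shows the flavor of the portable programs Bell describes; the data and the sum-of-squares computation are placeholders, and the script name is assumed. Launched as, say, "mpiexec -n 4 python sketch.py", each rank works on its own slice, and only the small partial results cross the network.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        data = list(range(1000))
        chunks = [data[i::size] for i in range(size)]  # one slice per rank
    else:
        chunks = None

    chunk = comm.scatter(chunks, root=0)   # each rank receives its piece
    partial = sum(x * x for x in chunk)    # purely local work, no coupling
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"sum of squares = {total}")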

Will we find ways to make a wider class of problems highly parallel? "I'm not optimistic about a silver bullet here," Bell replied. "To steal a phrase, it's hard work — really hard work."

But Bell does point to a few areas of interest. One is the observation that peer networks can work as pipelined systems just as well as parallel systems, providing that the traffic through the pipeline is not too high in bandwidth and the pipeline is tolerant of the WAN's latencies.
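A toy sketch of that pipelined arrangement, with invented stages: each peer reads an item from the stage upstream, transforms it, and passes it downstream, so only a thin stream of intermediate results crosses the network, and a slow hop stalls nothing but its own stage. Here separate processes stand in for geographically scattered peers.

    from multiprocessing import Process, Queue

    def double(x):
        return x * 2

    def increment(x):
        return x + 1

    def stage(inbox, outbox, transform):
        """One peer in the pipeline: read, transform, pass along."""
        while True:
            item = inbox.get()
            if item is None:            # sentinel: shut down, tell the next stage
                outbox.put(None)
                return
            outbox.put(transform(item))

    if __name__ == "__main__":
        q1, q2, q3 = Queue(), Queue(), Queue()
        peers = [Process(target=stage, args=(q1, q2, double)),
                 Process(target=stage, args=(q2, q3, increment))]
        for p in peers:
            p.start()
        for item in [0, 1, 2, 3, 4, None]:
            q1.put(item)
        out = []
        while (item := q3.get()) is not None:
            out.append(item)
        print(out)  # [1, 3, 5, 7, 9]
        for p in peers:
            p.join()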

Will peer networks replace supercomputers? In the general case, Bell believes not. John Mashey, technology consultant and longtime computer-architecture guru, agrees. "Anybody who's ever done serious high-performance computing knows that getting enough bandwidth to the data is an issue for lots of real problems," Mashey wrote. In some cases, creating a private network may be the only way to get the bandwidth and latency necessary to keep the computation under control. But that, of course, limits the number of peers that can be added to the system. And there are also issues of trust, security and organization to be faced.

But even within these limitations, it seems likely that peer computing on a massive scale will play an increasing role in the attack on certain types of problems. It may well be that our understanding of proteins, our models of stars and galaxies, and even the synthesis of human thought will all depend on peer networks to go where no individual computer or server farm can take us.

Some systems are too complex to be organized by an outside agent. Others — nanosystems — may be too small to be built by external devices. These problems lie within the realm of the second technology guess we are offering, the technology of self-assembling systems. Like peer-computing networks, self-assembling systems exist in specific instances today, although much more in the laboratory than on the Web. And like peer networks, self-assembling systems promise to break through significant barriers — at least in some instances — either of enormous complexity or of infinitesimal size.

One way of looking at self-assembling systems is through a series of criteria. As a gross generalization, a self-assembling system is made up of individual components that can either move themselves or alter their functions, that can connect to each other and that can sense where they are in the system that is assembling itself. The components must do those things without outside intervention.
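One classic local rule from this research area, the hop-count gradient, shows how the third criterion (components sensing where they are) can emerge without any outside agent. In this toy Python sketch, components on a line repeatedly consult only their immediate neighbors, and a global coordinate system assembles itself from purely local exchanges; the line-of-cells setup is invented for illustration.

    def hop_count_gradient(n):
        """Each component sets itself to min(neighbors) + 1, using only
        locally sensed information; seed components sit at the two ends."""
        INF = float("inf")
        value = [0 if i in (0, n - 1) else INF for i in range(n)]
        changed = True
        while changed:                   # no external agent drives the loop
            changed = False
            for i in range(1, n - 1):
                best = min(value[i - 1], value[i + 1]) + 1
                if best < value[i]:
                    value[i] = best      # component alters its own state
                    changed = True
        return value

    print(hop_count_gradient(9))  # [0, 1, 2, 3, 4, 3, 2, 1, 0]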

The guiding example for much of the work in this area is that ultimate self-assembling system, the biological organism. Not by coincidence, much of the existing work in self-assembling systems is not in electronics or robotics but in a new field called synthetic biology.

In essence, synthetic biology has tried to create (or discover) a set of standardized DNA sequences that can act as building blocks for assembling genetic circuits with specific, predictable functions: DNA that will produce specific proteins when inserted into a living cell.

But according to Radhika Nagpal, assistant professor in computer science at Harvard University, the biological work is spilling over into electronics as well. Researchers are working on getting biomolecules to assemble themselves into predictable patterns while carrying along electronic components. Thus, the underlying pattern of molecules would be reflected in the organization of the electronics. Working in another direction, Harvard researcher George Whitesides has been experimenting with two-dimensional electronic circuits that can assemble themselves into three-dimensional circuits.

Much work is also being done on a larger scale, said Nagpal. Self-organizing robotic systems comprising from tens to perhaps a hundred modules have been built. While all of these projects are very much in the research arena, the people working on them use actual hardware (if we can lump DNA into that category), not simply simulation models.

Nor is the work part of some futuristic scenario. "Some of it is nearer than you might think," Nagpal said.

[Photo: Rat-brain neurons interacting with an FET array at the Max Planck Institute.]

The nanotechnology area, though, remains longer-term. Few if any physical examples of self-assembling nanodevices exist today. But many of the principles being developed both in the synthetic-biology arena and in the work on selective-affinity self-assembly for electronic circuits may eventually prove applicable to nanoscale problems.

The final barrier for a breakthrough technology, and the one that is quite possibly the furthest away, is the barrier that separates electronic from living systems. One can envision electronic devices that directly recognize or act upon living cells, or perhaps even individual proteins. Such technology would make possible entirely new applications in medical analysis (identifying a marker protein or a virus in a blood sample, for instance) and in therapy. But the ability to interface electronics directly to cells would also make possible a long-held dream of science-fiction writers: electronic systems that communicate directly with the human central nervous system, standing in for missing limbs or inadequate sense organs.

In this area too, there is science where there used to be science fiction. For some time it has been possible to fabricate ICs that sense gross properties of chemical solutions, such as pH, the measure of acidity. But more to the point, researchers at the Interuniversity Microelectronics Center (IMEC; Leuven, Belgium) have been working on ICs that can steer individual protein molecules across the surface of the die, moving them to a site where their presence can be detected and recorded. To start the process, researchers first attach a magnetic nanobead to the protein. They then manipulate a magnetic field to move the molecule, and a spin-valve sensor performs the detection.
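Back-of-the-envelope numbers hint at why such a scheme could be workable. In this rough Python sketch (every value is illustrative, not from IMEC), the bead feels a force equal to its magnetic moment times the field gradient, and at these dimensions viscous drag dominates, so the bead moves at roughly its Stokes terminal velocity:

    import math

    ETA = 1.0e-3       # viscosity of water, Pa*s
    RADIUS = 1.5e-7    # 300-nm bead, m (illustrative)
    MOMENT = 1.0e-16   # bead magnetic moment, A*m^2 (illustrative)
    GRADIENT = 1.0e3   # field gradient toward the sensor, T/m (illustrative)

    force = MOMENT * GRADIENT          # N, pulling the bead toward the sensor
    drag = 6 * math.pi * ETA * RADIUS  # Stokes drag coefficient, N*s/m
    velocity = force / drag            # terminal velocity, m/s
    print(f"about {velocity * 1e6:.0f} micrometers per second")

With these made-up but plausible numbers, the bead crosses a millimeter-scale die in well under a minute.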

Even more exciting work has been reported by IMEC and — at the forthcoming International Solid-State Circuits Conference — will be reported by the Max Planck Institute for Biochemistry (Munich, Germany). Both organizations have reportedly succeeded in fabricating ICs of their respective designs that comprise an array of sensors and transistors. The sensors can detect the electrical "action potentials" generated by neurons and the transistors can stimulate the neurons directly. Living neuron cells have been placed on the surface of the chip, stimulated and sensed. The Max Planck Institute claims to have grown neurons on the surface of a chip as well.
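To make the sensing half of that loop concrete, here is a toy spike detector in Python. The synthetic trace, threshold and spike positions are all invented; a real chip of this kind digitizes the extracellular voltage at each sensor site and flags the brief negative deflections that mark an action potential.

    import math

    THRESHOLD = -0.5  # normalized units, illustrative

    def detect_spikes(trace):
        """Return sample indices where the trace crosses the threshold downward."""
        return [i for i in range(1, len(trace))
                if trace[i - 1] >= THRESHOLD > trace[i]]

    # Synthetic trace: low-level noise plus two brief negative deflections.
    trace = [0.05 * math.sin(i / 3) for i in range(200)]
    for spike_at in (60, 140):
        for k in range(5):
            trace[spike_at + k] -= 1.0

    print(detect_spikes(trace))  # [60, 140]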

This is a technology of obvious potential, but with a long way to go. For one thing, the physical interface between electronic circuits and biochemical solutions — let alone living cells — is always problematic, according to Luke Lee, assistant professor of bioengineering and director of the Biomolecular Nanotechnology Center at the University of California, Berkeley. After the mechanisms have been understood and the sensors designed, there is still the problem of keeping the chemicals from destroying the chip. So even simple sensors are not a slam dunk.

Moving more delicate creations, such as neuron/chip interfaces, into production is even more problematic. One obvious issue is that the neurons you want to interface to aren't the ones you can extract and put on a chip — they are individuals among millions in a bundle of nerve fibers in a living body. But Lee pointed out that there are repeatability issues even with the in vitro work that is being reported now. It is still at the level of elegant demonstrations, not widely reproducible experiments with consistent results. "I am concerned that many people overpromise nanobiotechnology without really knowing the limitations of nano- and microfabrication," said Lee.

