Wednesday, March 23, 2005

Computer Reuse Through Linux

One day I found a laptop dumped in a recycling bin at my office. It was a Toshiba Tecra 8100 with a 450-MHz Pentium III, 64 MB of RAM and a 12-GB hard disk, and it had Windows NT on it. I took it home and reformatted it with my SuSE Linux 9.2. Well, the memory did not seem to be enough, so I went to ebay.com and found somebody selling a 128-MB PC100 memory module for around 20 bucks. Not bad, I thought. So I bought it and installed it in the laptop.

With 192 MB I could now run a GUI (I use KDE, but I have also added many GNOME libraries so I can run GNOME-based applications). I then downloaded the latest kernel available at the time (2.6.10). I also bought a Linksys PCMCIA WLAN card (WPC54 SpeedBooster). Unfortunately, Linksys has not provided a Linux driver for it yet, but luckily there is ndiswrapper, downloadable somewhere, which lets Linux load the Windows driver. So I copied the Windows driver over to my Linux box, ran ndiswrapper and... voila, the wireless card worked. Well, I still had a problem: apparently there was a conflict between ndiswrapper and the sound drivers. I rebuilt the kernel with the sound drivers disabled, but the connection was still sporadic (sometimes the WLAN dropped). For your info, I rebuilt everything with processor-specific compiler flags enabled, such as -mcpu=pentium3, -msse and -mfpmath=sse.
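
For anyone repeating the trick, the ndiswrapper procedure is short. A minimal sketch, assuming the Windows driver's .inf file has already been copied onto the disk (the driver path, interface name and SSID below are placeholders, not the exact ones I used):

ndiswrapper -i /tmp/windriver/wpc54.inf   # wrap the Windows NDIS driver
ndiswrapper -l                            # confirm the driver was picked up
modprobe ndiswrapper                      # load the kernel module
iwconfig wlan0 essid "myhome"             # associate with the access point
dhcpcd wlan0                              # get an IP address via DHCP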

A few days ago, I gave kernel 2.6.11.5 a try. I even rebuilt Trolltech's Qt and many other libraries. After many days of recompiling, I successfully got the wireless working with the sound drivers enabled. It was one of the happiest days of my life, getting that old laptop back into use. I have been using it for many of my daily activities, including browsing, reading email and even running a lightweight server. Yes, it is a server. Imagine if I had used Windows for this purpose; I might have burned the laptop in hell for its slowness.
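
The rebuild itself is just the standard 2.6 kernel procedure; roughly, assuming the source tree is unpacked under /usr/src/linux-2.6.11.5:

cd /usr/src/linux-2.6.11.5
make menuconfig                               # choose drivers; sound stayed enabled this time
make && make modules_install && make install  # build and install kernel plus modules
# for the Qt and KDE rebuilds, the same processor-specific flags can be exported first:
export CFLAGS="-mcpu=pentium3 -msse -mfpmath=sse" CXXFLAGS="$CFLAGS"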

The laptop has an SSH server, FTP, Telnet and many other services. I have even connected my external USB hard drive, which gives me an additional 12 GB of storage. Not bad at all.
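
Switching the services on under SuSE is a one-liner each. A sketch, assuming the stock init scripts (the FTP daemon name and the USB device node are assumptions and may differ on another setup):

chkconfig sshd on && rcsshd start        # SSH server, started now and on every boot
chkconfig vsftpd on && rcvsftpd start    # FTP server (vsftpd is one common choice)
mount /dev/sda1 /mnt/usbdisk             # the external USB disk (device node may vary)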

At work, I have also partitioned my other laptop (an IBM T30 with 512 MB of RAM and 40 GB of total disk space): 6 GB for Linux and the rest for Windows 2000. You know what? I have ended up using Linux for my work activities almost every day. Linux is really cool, and I have learned a lot about many things thanks to the open-source applications and tools available on the internet.
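
Checking the split is the usual fdisk exercise; a sketch (the device name is a placeholder, and the actual resizing is best left to the installers):

fdisk -l /dev/hda    # inspect the layout: the 6-GB Linux partitions plus the Windows one
df -h                # after installation, confirm what each side actually got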

I really thank the people out there who have developed such great operating systems, applications and tools.

Thursday, March 3, 2005

Got an Answer from One of the 'Hackers'

A few weeks ago I edited the article about SHA-1 on www.wikipedia.org, adding a link to a new page with a brief biography of one of the SHA-1 hackers, a Chinese researcher named Xiaoyun Wang. After a few minutes, somebody removed the link, and even the new page I had added, citing infringement of copyrighted material.

I was surprised, but I then emailed the researcher asking whether she objected to my write-up. I got a reply a few days later saying that her team and her university (Shandong University, China) are going to create a new website dedicated to this security work. Well, she did not really answer my question, but at least I got a response from an expert with a Ph.D. in security.

Let's wait and see what their website and papers will look like.

Rebuilding KDE Made Easy!

After a few months of not checking the KDE website (www.kde.org), I revisited it two days ago and found an interesting tool for KDE 3.3.2: Konstruct. The tool is easy to use and handles the whole build: it checks for components, downloads the missing ones, configures them, and compiles and links all the libraries and components.

The only command I needed to execute was:

cd konstruct/meta/kde; make install

So easy to build now. As I recall, recompiling my KDE (it was 3.3) on my IBM T30 laptop used to give me a hard time: I had to download all the *.bz2 files (plus the qt-x11 libraries), then extract, configure and compile them one by one.
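
For comparison, the old routine went roughly like this for every single package (a sketch; the version number and --prefix are placeholders, and dependencies such as Qt had to be built first):

tar xjf kdelibs-3.3.0.tar.bz2
cd kdelibs-3.3.0
./configure --prefix=/opt/kde3    # point --prefix wherever KDE should live
make && make install
cd ..                             # ...then the same again for the next package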

I am still having a problem, though, when compiling it on my 'free' Toshiba Tecra 8100 laptop. Somehow, one component (Kppp) complains about 'regfree' and some other procedures, although I have double-checked that the Kpp*.cpp files have "#include <regex.h>" (the POSIX header where regfree is declared). Does anybody know how to resolve this?

Friday, February 25, 2005

Crack in SHA-1 code 'stuns' Security Gurus

Three Chinese researchers announced on February 14, 2005 that they had compromised the SHA-1 hashing algorithm, which sits at the core of many of today's mainstream security products.

In the wake of the news, some cryptographers called for an accelerated transition to more robust algorithms and a fundamental rethinking of the underlying hashing techniques.
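
Mechanically, at least, switching to a stronger hash is trivial. A quick sketch with the GNU coreutils digest tools (assuming sha256sum is installed):

echo -n "hello world" | sha1sum      # 160-bit SHA-1 digest, the weakened one
echo -n "hello world" | sha256sum    # 256-bit digest from the newer SHA-2 family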

"We've lost our safety margin, and we are on the edge," said William Burr, who manages the security technology group at the National Institute of Standards and Technology (NIST).

"This will create big waves, in my opinion," said the celebrated cryptographer and co-inventor of SHA-1 (Shamir hashing Alg.), Adi Shamir. "This break of SHA-1 is stunning," concurred Ronald Rivers, a professor at MIT who co-developed the RSA with Shamir.

RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman [RSA78]. Details of the algorithm can be found in various places. In this signature suite, RSA is combined with the SHA-1 hash function to sign a message. It must be infeasible for anyone either to find a message that hashes to a given value or to find two messages that hash to the same value; if either were feasible, an intruder could attach a false message to Alice's signature. The SHA-1 hash function was designed specifically to make finding such a match infeasible, and it is therefore considered suitable for use in this role.
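
The whole suite can be exercised from the command line with OpenSSL; a minimal sketch (the key and file names here are placeholders):

openssl genrsa -out alice.pem 1024                               # Alice's RSA key pair
openssl rsa -in alice.pem -pubout -out alice.pub                 # the shareable public half
openssl dgst -sha1 -sign alice.pem -out msg.sig msg.txt          # sign the SHA-1 digest of the message
openssl dgst -sha1 -verify alice.pub -signature msg.sig msg.txt  # anyone with the public key can verify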

One or more certificates may accompany a digital signature. A certificate is a signed document that binds the public key to the identity of a party. Its purpose is to prevent someone from impersonating someone else. If a certificate is present, the recipient (or a third party) can check that the public key belongs to a named party, assuming the certifier's public key is itself trusted. These certificates can be held in the Attribution Information section of the DSig 1.0 Signature Block Extension and thus passed along with the signature to aid in validating it. (See the Attribution Information section of the DSig 1.0 Specification.)

The signature section of the DSig 1.0 Signature Block Extension is defined in the DSig 1.0 Specification. For the RSA-SHA1 signature suite, the signature section has the following required and optional fields.

Who are these three Chinese researchers? One of the members, Yiqun Lisa Yin, was a Ph.D. student who studied under Ronald Rivest (co-inventor of RSA) at MIT. Another, Xiaoyun Wang, was responsible for cracking the earlier MD5 hashing algorithm (also developed by Rivest, in 1991) in August 2004.

To learn more about MD5, please visit http://en.wikipedia.org/wiki/MD5; for RSA, http://en.wikipedia.org/wiki/RSA; and for SHA-1, http://en.wikipedia.org/wiki/SHA-1.

An open-source implementation of the algorithm can be found at http://www.cr0.net:8040/code/crypto/sha1/. Rivest, Shamir and Adleman published the original RSA paper in the Communications of the ACM: R.L. Rivest, A. Shamir, L.M. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," Communications of the ACM, v. 21, n. 2, Feb. 1978, pp. 120-126.

Electronics, biology: twins under the skin

By Chappell Brown
EE Times
February 09, 2004 (10:33 AM EST)

Like the twin strands of a double helix, electronics and biotechnology are joining forces in a technological explosion that experts say will dwarf what is possible for either one of them alone.

Hints of this pairing can be seen in the economic recovery that's now taking hold. One peculiarity that hasn't grabbed many headlines is biotech's role in pulling Silicon Valley out of its three-year slump. A report last month from the nonprofit organization Joint Venture: Silicon Valley Network points up this fact, showing that venture funding in biotech startups rose from 7 percent in 2000 to 24 percent last year while investment in information technology startups fell from 10 percent to 4 percent over the same period. The immediate question is whether this is a temporary anomaly or the emergence of a major trend.

Certainly computers, biochips, robotics and data sharing over the Internet have been important tools in accelerating biological and medical research, and it should be no surprise that new application areas and markets would grow around them. The view from inside the engineering cubicle might be something like, "Yes, we have created a revolutionary technology that creates new markets — biomedicine is simply one area that benefits from advances in VLSI."

But a long-term perspective suggests a tighter linkage between electronics technology and molecular biology. Indeed, it could be argued that the second half of the 20th century forged not one but two digital revolutions, fueled by two fundamental breakthroughs: transistorized digital computers and the cracking of the genetic code. The latter advance showed that the genome was transmitted through the generations by means of digital storage in the DNA molecule.

In the following decades, both developments matured at an increasingly rapid pace. Digital circuits were inspired by crude models of the nervous system (see story, below). Although the models turned out to be wrong in many respects, technologists discovered that digital representation brings the advantages of simplicity, stability and an ability to control errors. Those same properties have made DNA the viable and stable core of living systems for billions of years.

But the nervous system is only one component of the body that is encoded in DNA, which somehow not only represents the information for building the basic components of cells, but also encodes the entire process of assembling highly complex multicellular machines. The growth process is an amazing feat of bootstrapping from the genetic code to functioning organisms. Essentially, an organism is a molecular digital computer that constructs itself as part of the execution of its code.

Leroy Hood, director of the Institute for Systems Biology (Seattle), believes that science aided by computers and VLSI technology will achieve major breakthroughs in reverse-engineering the cell's assembly processes. The fallout will be new circuit and computational paradigms along with nanoscale mechanisms for building highly compact molecular computing machines.

"There will be a convergence between information technology and biotechnology that will go both ways," said Hood. "We can use new computational tools to understand the biological computational complexities of cells, and when we understand the enormous integrative powers of gene regulatory networks we will have insights into fundamentally new approaches to digital computing and IT."

But cell machinery can also be enlisted in the kind of nanostructure work that is currently done manually with tools such as the atomic-force microscope. "The convergence of materials science and biotech is going to be great, and we will be able to learn from living organisms how they construct proteins that do marvelous things and self-assemble," Hood said. "There will be lessons about how to design living computer chips that can self-assemble and have enormous capacity."

Hood is credited with inventing the automated DNA-sequencing systems that were the first step in accelerating the decoding of the human genome. Accomplished two years ahead of schedule thanks to many enhancements to the process, including MEMS-based microfluidic chips, the achievement has stimulated efforts to take on the far more complex task of decoding protein functions.

Hood's institute, which was founded in 2000, is one example of a wave of similar organizations springing up across the United States. The idea is to engage a diverse group of specialists — mechanical and electronic engineers, computer scientists, chemists, molecular biologists — in the effort to decode the cellular-growth process. Stanford University's BIO-X Biosciences Initiative, for example, is dedicated to linking life sciences, physical sciences, medicine and engineering. The Department of Energy's Pacific Northwest National Laboratory has its Biomolecular Systems Initiative, Princeton University its Lewis-Sigler Institute for Integrative Genomics. Harvard Medical School now has a systems-biology department, and MIT has set up its Computational and Systems Biology Initiative (CSBi).

Proteins have remarkable chemical versatility and go far beyond other molecules as chemical catalysts, said Peter Sorger, director of MIT's CSBi. But applications of their properties will have to contend with a difficult cost differential between medical and industrial products.

"Using proteins as catalysts was the absolute beginning of the biotech industry. We know that proteins are the most extraordinary catalysts ever developed. The problem is that most of the chemical industry is a low-margin business and biology has historically been very expensive," Sorger explained.

While organic catalysts derived from oil are not as efficient, the low cost of producing them has kept proteins out of the field. "Most of the applications of proteins to new kinds of polymers, new plastics, biodegradable materials, etc. have all been limited by the fundamental economic problem that oil is so darn cheap," he said. "As a result, bioengineered materials are only used in very high-end, specialized applications."

However, Sorger believes that such bioengineered products will arrive, probably first in biomedical applications, which will then spawn low-end mass-market products. He used the example of Velcro, which was devised as an aid to heart surgery and later became a common material in a wide range of commercial goods. Sorger is looking forward to nanotechnology applications, the assembly of materials and circuits using biological processes, as the first direct applications of protein engineering outside of the biomedical field.

Sorger cited the work of MIT researcher Angela Belcher as an example of the technological spin-offs that will come from attempts to understand cellular processes. Working in the cross-disciplinary areas of inorganic chemistry, biochemistry, molecular biology and electrical engineering, Belcher has found ways to enlist cellular processes to assemble structures from inorganic molecules. By understanding how cells direct the assembly of their structural components, Belcher is finding ways to assemble artificial materials from inorganic nanoclusters that can function as displays, sensors or memory arrays. Another interdisciplinary group at MIT is putting together a library of biological components that engineers could use to build artificial organisms able to accomplish specific nanoscale tasks.

Underlying the excitement surrounding the merger of digital electronics systems and molecular digital organisms are the dramatic capabilities of lab-on-a-chip chemical-analysis systems, automated data extraction and supercomputer data processing. These technologies are part of what made it possible to sequence the entire human genome. A benchmark for the rapid progress promised by those tools may be the announcement by three biotech companies late last year of single chips containing the human DNA molecule in addressable format — the human genome on a chip. That might compare to the advent of the CPU-on-a-chip, which catalyzed the VLSI revolution in the mid-1970s.

The barrier to moving this capability forward lies in the physical differences between DNA and the proteins it codes. Proteins are built from DNA sequences as linear sequences of amino acids that then spontaneously fold into complicated 3-D shapes. And the process becomes more complex as proteins begin to interact with one another. For example, there is a feedback loop in which proteins regulate the further expression of proteins by DNA. As a result, there are no parallel fluidic-array techniques to accelerate the analysis of protein families. "These technologies have a long way to go. I don't see any fundamental breakthroughs [in protein analysis] in the next few years, but in 10 years, who knows?" said Steven Wiley, director of the Biomolecular Systems Initiative at Pacific Northwest National Laboratory. "There are a lot of smart people out there working on this."

The fundamental challenge is the dynamic aspect of protein function. "DNA is static; once you sequence it, you have it," Wiley said. But proteins "are constantly interacting, so you have to run multiple experiments to observe all their functions and you end up with multiple terabytes of information. So, how are you going to manage and analyze all this information?"

But the excitement generated by recent successes with the genome is contagious. Plans are afoot to decode the "language" of proteins, making their functions widely available to engineers; anyone with a personal computer and a modem can access the human genome over the Internet; lab-on-a-chip technology continues to reduce the cost of bioexperimentation while ramping up throughput. And there is venture capital funding out there.

Tech trends will topple tradition

By Ron Wilson
EE Times

January 10, 2005 (9:00 AM EST)


OUTLOOK 2005
Where to look for earthshaking technology developments? Probably the best place to start is with the roadblocks that appear to stand in the way of traditional progress. There seem to be three of them, which the industry is approaching at considerable velocity. One is the diminishing progress in making CPUs faster. Another is the inability of manufacturing to keep up with the exponential growth in the complexity of systems. And the third is the seemingly insurmountable barrier between microelectronic and living systems.

For several years, there has been a grassroots movement to run supercomputing problems on multiprocessing systems in which "multiple" means thousands, or even millions. Welcome to the world of peer computing.

The concept is disarmingly simple. There are millions of PCs, workstations and servers in the world, most of which sit unconscionably idle most of the time. If pieces of an enormous computing task could be dispatched over the Internet to some of these machines — say, a few tens of thousands — and if the pieces ran in the background, so that the users weren't inconvenienced, a lot of computing work could be done essentially for free.

This is exactly the way the Search for Extraterrestrial Intelligence (SETI) at Home project works. Most of the people who run SETI are volunteers. But there are also commercial uses of grid networks, as such Internet-linked communities of computers are known. United Devices (Austin, Texas), which provided the supervisory software for SETI, is a commercial enterprise that sells grid-computing systems to commercial clients.

Of course, there is fine print in the tale, too. One obvious issue is that massive networks of loosely coupled computers are useful only if the application lends itself to massive parallelism.

These are the applications that Gordon Bell, senior researcher at Microsoft Corp.'s Bay Area Research Center, calls "embarrassingly parallel." In the SETI program, for instance, essentially the same relatively simple calculations are being performed on enormous numbers of relatively small data sets. The only communication necessary between the peer computer and the supervisor, once the data is delivered to the peer, is a simple "Yes, this one is interesting" or "Nope." The application is ideal for a loosely coupled network of peers.
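
The shape of such a job is easy to sketch with nothing more than the shell; here the same trivial test runs over many independent data files, eight at a time (the directory layout and the "SIGNAL" test are placeholders for whatever the real analysis would be):

ls chunks/*.dat | xargs -P 8 -I{} sh -c 'grep -q SIGNAL "{}" && echo "{}: interesting"'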

Stray from that ideal situation, though, and things start to get complicated. Bell pointed out that bandwidth is so limited in wide-area networks, and latency so large and unpredictable, that any need for tight coupling between the peers renders the approach impractical. And of course, the individual task size has to fit in the background on the individual peer systems.

Is it possible to work around these limitations? Bell was guardedly pessimistic. "After two decades of building multicomputers — aka clusters that have relatively long latency among the nodes — the programming problem appears to be as difficult as ever," Bell wrote in an e-mail interview. The only progress, he said, has been to standardize on Beowulf — which specifies the minimum hardware and software requirements for Linux-based computer clusters — and MPI, a standard message-passing interface for them, "as a way to write portable programs that help get applications going, and help to establish a platform for ISVs [independent software vendors]."
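
On a Beowulf-type cluster, the MPI side of that stack is driven with commands along these lines (a sketch; the machine file and the program are placeholders, and the exact flags vary between MPI implementations):

mpicc -o my_sim my_sim.c                    # compile and link against the MPI library
mpirun -np 16 -machinefile nodes ./my_sim   # spread 16 ranks across the listed nodes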

Will we find ways to make a wider class of problems highly parallel? "I'm not optimistic about a silver bullet here," Bell replied. "To steal a phrase, it's hard work — really hard work."

But Bell does point to a few areas of interest. One is the observation that peer networks can work as pipelined systems just as well as parallel systems, providing that the traffic through the pipeline is not too high in bandwidth and the pipeline is tolerant of the WAN's latencies.

Will peer networks replace supercomputers? In the general case, Bell believes not. Technology consultant and architecture guru of long standing John Mashey agrees. "Anybody who's ever done serious high-performance computing knows that getting enough bandwidth to the data is an issue for lots of real problems," Mashey wrote. In some cases, creating a private network may be the only way to get the bandwidth and latency necessary to keep the computation under control. But that of course limits the number of peers that can be added to the system. And there are also issues of trust, security and organization to be faced.

But even within these limitations, it seems likely that peer computing on a massive scale will play an increasing role in the attack on certain types of problems. It may well be that our understanding of proteins, modeling of stars and galaxies, and synthesis of human thought may all depend on the use of peer networks to go where no individual computer or server farm can take us.

Some systems are too complex to be organized by an outside agent. Others — nanosystems — may be too small to be built by external devices. These problems lie within the realm of the second technology guess we are offering, the technology of self-assembling systems. Like peer-computing networks, self-assembling systems exist in specific instances today, although much more in the laboratory than on the Web. And like peer networks, self-assembling systems promise to break through significant barriers — at least in some instances — either of enormous complexity or of infinitesimal size.

One way of looking at self-assembling systems is through a series of criteria. As a gross generalization, a self-assembling system is made up of individual components that can either move themselves or alter their functions, that can connect to each other and that can sense where they are in the system that is assembling itself. The components must do those things without outside intervention.

The guiding example for much of the work in this area is that ultimate self-assembling system, the biological organism. Not by coincidence, much of the existing work in self-assembling systems is not in electronics or robotics but in a new field called synthetic biology.

In essence, synthetic biology has tried to create (or discover) a set of standard building blocks for assembling DNA sequences with specific, predictable functions — DNA that will produce specific proteins when inserted into a living cell.

But according to Radhika Nagpal, assistant professor in computer science at Harvard University, the biological work is spilling over into electronics as well. Researchers are working on getting biomolecules to assemble themselves into predictable patterns while carrying along electronic components. Thus, the underlying pattern of molecules would be reflected in the organization of the electronics. Working in another direction, Harvard researcher George Whitesides has been experimenting with two-dimensional electronic circuits that can assemble themselves into three-dimensional circuits.

Much work is also being done on a larger scale, said Nagpal. Self-organizing robotic systems comprising from tens to perhaps a hundred modules have been built. While all of these projects are very much in the research arena, the individuals manning them work with actual hardware — if we can lump DNA into that category — not simply simulation models.

Nor is the work part of some futuristic scenario. "Some of it is nearer than you might think," Nagpal said.

(Photo caption: researchers make rat-brain neurons interact with an FET array at the Max Planck Institute.)

The nanotechnology area, though, remains longer-term. Few if any physical examples of self-assembling nanodevices exist today. But many of the principles being developed both in the synthetic-biology arena and in the work on selective-affinity self-assembly for electronic circuits may eventually prove applicable to nanoscale problems.

The final barrier for a breakthrough technology, and the one that is quite possibly the furthest away, is the barrier that separates electronic from living systems. One can envision electronic devices that can directly recognize or act upon living cells or perhaps even individual proteins. Such technology would make possible entirely new applications in medical analysis — identifying a marker protein or a virus in a blood sample, for instance — and in therapy. But the ability to directly interface electronics to cells would also make possible a long-held dream of science-fiction writers: electronic systems that communicate directly with the central nervous systems of humans, bypassing missing limbs or inadequate sense organs.

In this area too, there is science where there used to be science fiction. ICs have been fabricated for some time that are capable of sensing gross properties of chemical solutions, such as pH, the measure of acidity. But more to the point, researchers at the Interuniversity Microelectronics Center (IMEC; Leuven, Belgium) have been working on ICs that can steer individual protein molecules about on the surface of the die, moving them to a detection site where their presence can be recorded. To start the process, researchers first attach a magnetic nanobead to the protein. Then they manipulate a magnetic field to move the molecule. The detection is done by a spin-valve sensor.

Even more exciting work has been reported by IMEC and — at the forthcoming International Solid-State Circuits Conference — will be reported by the Max Planck Institute for Biochemistry (Munich, Germany). Both organizations have reportedly succeeded in fabricating ICs of their respective designs that comprise an array of sensors and transistors. The sensors can detect the electrical "action potentials" generated by neurons and the transistors can stimulate the neurons directly. Living neuron cells have been placed on the surface of the chip, stimulated and sensed. The Max Planck Institute claims to have grown neurons on the surface of a chip as well.

This is a technology of obvious potential, but with a long way to go. For one thing, the physical interface between electronic circuits and biochemical solutions — let alone living cells — is always problematic, according to Luke Lee, assistant professor of bioengineering and director of the Biomolecular Nanotechnology Center at the University of California, Berkeley. After the mechanisms have been understood and the sensors designed, there is still the problem of keeping the chemicals from destroying the chip. So even simple sensors are not a slam dunk.

Moving more delicate creations, such as neuron/chip interfaces, into production is even more problematic. One obvious issue is that the neurons you want to interface to aren't the ones you can extract and put on a chip — they are individuals among millions in a bundle of nerve fibers in a living body. But Lee pointed out that there are repeatability issues even with the in vitro work that is being reported now. It is still at the level of elegant demonstrations, not widely reproducible experiments with consistent results. "I am concerned that many people overpromise nanobiotechnology without really knowing the limitations of nano- and microfabrication," said Lee.