Friday, February 25, 2005

Crack in SHA-1 code 'stuns' Security Gurus

Three Chinese researchers said on February 14, 2005, that they had compromised the SHA-1 hashing algorithm at the core of many of today's mainstream security products.

In the wake of the news, some cryptographers called for an accelerated transition to more robust algorithms and a fundamental rethinking of the underlying hashing techniques.

"We've lost our safety margin, and we are on the edge," said William Burr, who manages the security technology group at the National Institute of Standards and Technology (NIST).

"This will create big waves, in my opinion," said the celebrated cryptographer and co-inventor of SHA-1 (Shamir hashing Alg.), Adi Shamir. "This break of SHA-1 is stunning," concurred Ronald Rivers, a professor at MIT who co-developed the RSA with Shamir.

RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman [RSA78]. Details on the algorithm can be found in various places. In this signature suite, RSA is combined with the SHA-1 hash function to sign a message. It must be infeasible for anyone either to find a message that hashes to a given value or to find two messages that hash to the same value. If either were feasible, an intruder could attach a false message to Alice's signature. The SHA-1 hash function was designed specifically so that finding such a match is infeasible, and it is therefore considered suitable for use in this role.
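As a loose illustration of the sign-and-verify flow just described, here is a minimal sketch in Python using the third-party `cryptography` package; the library choice, key size, and message are my own assumptions, not anything specified by the DSig suite, and SHA-1 appears only because it is the subject of the article (it is deprecated for new signatures today).

```python
# Minimal RSA-SHA1 sign/verify sketch (illustrative only; SHA-1 is deprecated).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Example message signed by Alice"

# Sign: hash the message with SHA-1, then sign the digest with the RSA
# private key using PKCS#1 v1.5 padding.
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

# Verify: anyone holding the public key can check that the signature matches
# the message; any alteration of the message makes verification fail.
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA1())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```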

One or more certificates may accompany a digital signature. A certificate is a signed document that binds the public key to the identity of a party. Its purpose is to prevent someone from impersonating someone else. If a certificate is present, the recipient (or a third party) can check that the public key belongs to a named party, assuming the certifier's public key is itself trusted. These certificates can be held in the Attribution Information section of the DSig 1.0 Signature Block Extension and thus passed along with the signature to aid in validating it. (See the Attribution Information section of the DSig 1.0 Specification.)
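In the same spirit, here is a hedged sketch of the certificate check itself, again using the third-party `cryptography` package (an assumption on my part; DSig does not mandate any library). It builds a self-signed certificate and then verifies the certificate's signature with the issuer's public key; the name and validity period are invented for the example.

```python
# Build a toy self-signed certificate and verify its signature (sketch only).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Certifier")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                       # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(issuer_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
    .sign(issuer_key, hashes.SHA256())        # a certificate is a signed document
)

# The recipient checks the certificate with the certifier's (trusted) public key.
issuer_key.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    cert.signature_hash_algorithm,
)
print("certificate signature checks out for", cert.subject.rfc4514_string())
```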

The signature section of the DSig 1.0 Signature Block Extension is defined in the DSig 1.0 Specification. For the RSA-SHA1 signature suite, the signature section has the following required and optional fields.

Who are these three Chinese researchers? One of the members, Lisa Yin, was a Ph.D. student who studied under Ronald Rivest (co-inventor of RSA) at MIT. Another was responsible for the break of the earlier MD5 hashing algorithm (also developed by Rivest, in 1991), which happened in August 2004.

To learn more about MD5, please visit http://en.wikipedia.org/wiki/MD5. For RSA: http://en.wikipedia.org/wiki/RSA, and for SHA-1: http://en.wikipedia.org/wiki/SHA-1

An open-source implementation of the algorithm can be found at http://www.cr0.net:8040/code/crypto/sha1/. The original RSA paper was published in Communications of the ACM: R.L. Rivest, A. Shamir, and L.M. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems," Communications of the ACM, vol. 21, no. 2, Feb. 1978, pp. 120-126.
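For readers who want to experiment, here is a small Python sketch (standard library only) of the hashing step itself; the messages are invented, and the point is simply what a collision would mean, not how to find one.

```python
# SHA-1 turns a message of any length into a fixed 160-bit digest.
import hashlib

m1 = b"Pay Alice $10"
m2 = b"Pay Alice $10,000"

print(hashlib.sha1(m1).hexdigest())
print(hashlib.sha1(m2).hexdigest())
# A collision attack lets an adversary craft two *different* messages whose
# digests are identical, so a signature over one would also validate the other.
```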

Electronics, biology: twins under the skin

By Chappell Brown
EE Times
February 09, 2004 (10:33 AM EST)

Like the twin strands of a double helix, electronics and biotechnology are joining forces in a technological explosion that experts say will dwarf what is possible for either one of them alone.

Hints of this pairing can be seen in the economic recovery that's now taking hold. One peculiarity that hasn't grabbed many headlines is biotech's role in pulling Silicon Valley out of its three-year slump. A report last month from the nonprofit organization Joint Venture: Silicon Valley Network points up this fact, showing that venture funding in biotech startups rose from 7 percent in 2000 to 24 percent last year while investment in information technology startups fell from 10 percent to 4 percent over the same period. The immediate question is whether this is a temporary anomaly or the emergence of a major trend.

Certainly computers, biochips, robotics and data sharing over the Internet have been important tools in accelerating biological and medical research, and it should be no surprise that new application areas and markets would grow around them. The view from inside the engineering cubicle might be something like, "Yes, we have created a revolutionary technology that creates new markets-biomedicine is simply one area that benefits from advances in VLSI."

But a long-term perspective suggests a tighter linkage between electronics technology and molecular biology. Indeed, it could be argued that the second half of the 20th century forged not one but two digital revolutions, fueled by two fundamental breakthroughs: transistorized digital computers and the cracking of the genetic code. The latter advance showed that the genome was transmitted through the generations by means of digital storage in the DNA molecule.

In the following decades, both developments matured at an increasingly rapid pace. Digital circuits were inspired by crude models of the nervous system (see story, below). Although the models turned out to be wrong in many respects, technologists discovered that digital representation brings the advantages of simplicity, stability and an ability to control errors. Those same properties have made DNA the viable and stable core of living systems for billions of years.

But the nervous system is only one component of the body that is encoded in DNA, which somehow not only represents the information for building the basic components of cells, but also encodes the entire process of assembling highly complex multicellular machines. The growth process is an amazing feat of bootstrapping from the genetic code to functioning organisms. Essentially, an organism is a molecular digital computer that constructs itself as part of the execution of its code.

Leroy Hood, director of the Institute for Systems Biology (Seattle), believes that science aided by computers and VLSI technology will achieve major breakthroughs in reverse-engineering the cell's assembly processes. The fallout will be new circuit and computational paradigms along with nanoscale mechanisms for building highly compact molecular computing machines.

"There will be a convergence between information technology and biotechnology that will go both ways," said Hood. "We can use new computational tools to understand the biological computational complexities of cells, and when we understand the enormous integrative powers of gene regulatory networks we will have insights into fundamentally new approaches to digital computing and IT."

But cell machinery can also be enlisted in the kind of nanostructure work that is currently done manually with tools such as the atomic-force microscope. "The convergence of materials science and biotech is going to be great, and we will be able to learn from living organisms how they construct proteins that do marvelous things and self-assemble," Hood said. "There will be lessons about how to design living computer chips that can self-assemble and have enormous capacity."

Hood is credited with inventing the automated DNA-sequencing systems that were the first step in accelerating the decoding of the human genome. Accomplished two years ahead of schedule thanks to many enhancements to the process, including MEMS-based microfluidic chips, the achievement has stimulated efforts to take on the far more complex task of decoding protein functions.

Hood's institute, which was founded in 2000, is one example of a wave of similar organizations springing up across the United States. The idea is to engage a diverse group of specialists-mechanical and electronic engineers, computer scientists, chemists, molecular biologists-in the effort to decode the cellular-growth process. Stanford University's BIO-X Biosciences Initiative, for example, is dedicated to linking life sciences, physical sciences, medicine and engineering. The Department of Energy's Pacific Northwest National Laboratory has its Biomolecular Systems Initiative, Princeton University its Lewis-Sigler Institute for Integrative Genomics. Harvard Medical School now has a systems-biology department, and MIT has set up its Computational and Systems Biology Initiative (CSBi).

Proteins have remarkable chemical versatility and go far beyond other molecules as chemical catalysts, said Peter Sorger, director of MIT's CSBi. But applications of their properties will have to contend with a difficult cost differential between medical and industrial products.

"Using proteins as catalysts was the absolute beginning of the biotech industry. We know that proteins are the most extraordinary catalysts ever developed. The problem is that most of the chemical industry is a low-margin business and biology has historically been very expensive," Sorger explained.

While organic catalysts derived from oil are not as efficient, the low cost of producing them has kept proteins out of the field. "Most of the applications of proteins to new kinds of polymers, new plastics, biodegradable materials, etc. have all been limited by the fundamental economic problem that oil is so darn cheap," he said. "As a result, bioengineered materials are only used in very high-end, specialized applications."

However, Sorger believes that such bioengineered products will arrive, probably first in biomedical applications, which will then spawn low-end mass-market products. He used the example of Velcro, which was devised as an aid to heart surgery and later became a common material in a wide range of commercial goods. Sorger is looking forward to nanotechnology applications, the assembly of materials and circuits using biological processes, as the first direct applications of protein engineering outside of the biomedical field.

Sorger cited the work of MIT researcher Angela Belcher as an example of the technological spin-offs that will come from attempts to understand cellular processes. Working in the cross-disciplinary areas of inorganic chemistry, biochemistry, molecular biology and electrical engineering, Belcher has found ways to enlist cellular processes to assemble structures from inorganic molecules. By understanding how cells direct the assembly of their structural components, Belcher is finding ways to assemble artificial materials from inorganic nanoclusters that can function as displays, sensors or memory arrays. Another interdisciplinary group at MIT is putting together a library of biological components that engineers could use to build artificial organisms able to accomplish specific nanoscale tasks.

Underlying the excitement surrounding the merger of digital electronics systems and molecular digital organisms are the dramatic capabilities of lab-on-a-chip chemical-analysis systems, automated data extraction and supercomputer data processing. These technologies are part of what made it possible to sequence the entire human genome. A benchmark for the rapid progress promised by those tools may be the announcement by three biotech companies late last year of single chips containing the human DNA molecule in addressable format-the human genome on a chip. That might compare to the advent of the CPU-on-a-chip, which catalyzed the VLSI revolution in the mid-1970s.

The barrier to moving this capability forward lies in the physical differences between DNA and the proteins it codes. Proteins are built from DNA sequences as linear sequences of amino acids that then spontaneously fold into complicated 3-D shapes. And the process becomes more complex as proteins begin to interact with one another. For example, there is a feedback loop in which proteins regulate the further expression of proteins by DNA. As a result, there are no parallel fluidic-array techniques to accelerate the analysis of protein families. "These technologies have a long way to go. I don't see any fundamental breakthroughs [in protein analysis] in the next few years, but in 10 years, who knows?" said Steven Wiley, director of the Biomolecular Systems Initiative at Pacific Northwest National Laboratory. "There are a lot of smart people out there working on this."

The fundamental challenge is the dynamic aspect of protein function. "DNA is static; once you sequence it, you have it," Wiley said. But proteins "are constantly interacting, so you have to run multiple experiments to observe all their functions and you end up with multiple terabytes of information. So, how are you going to manage and analyze all this information?"

But the excitement generated by recent successes with the genome is contagious. Plans are afoot to decode the "language" of proteins, making their functions widely available to engineers; anyone with a personal computer and a modem can access the human genome over the Internet; lab-on-a-chip technology continues to reduce the cost of bioexperimentation while ramping up throughput. And there is venture capital funding out there.

Tech trends will topple tradition

By Ron Wilson
EE Times

January 10, 2005 (9:00 AM EST)


OUTLOOK 2005
Where to look for earthshaking technology developments? Probably the best place to start is with the roadblocks that appear to stand in the way of traditional progress. There seem to be three of them, which the industry is approaching at considerable velocity. One is the diminishing progress in making CPUs faster. Another is the inability of manufacturing to keep up with the exponential growth in the complexity of systems. And the third is the seemingly insurmountable barrier between microelectronic and living systems.

For several years, there has been a grassroots movement to perform supercomputing problems on multiprocessing systems in which "multiple" means thousands, or even millions. Welcome to the world of peer computing.

The concept is disarmingly simple. There are millions of PCs, workstations and servers in the world, most of which sit unconscionably idle most of the time. If pieces of an enormous computing task could be dispatched over the Internet to some of these machines — say, a few tens of thousands — and if the pieces ran in the background, so that the users weren't inconvenienced, a lot of computing work could be done essentially for free.

This is exactly the way the Search for Extraterrestrial Intelligence (SETI) at Home project works. Most of the people who run SETI are volunteers. But there are also commercial uses of grid networks, as such Internet-linked communities of computers are known. United Devices (Austin, Texas), which provided the supervisory software for SETI, is a commercial enterprise that sells grid-computing systems to commercial clients.

Of course, there is fine print in the tale, too. One obvious issue is that massive networks of loosely coupled computers are useful only if the application lends itself to massive parallelism.

These are the applications that Gordon Bell, senior researcher at Microsoft Corp.'s Bay Area Research Center, calls "embarrassingly parallel." In the SETI program, for instance, essentially the same relatively simple calculations are being performed on enormous numbers of relatively small data sets. The only communication necessary between the peer computer and the supervisor, once the data is delivered to the peer, is a simple "Yes, this one is interesting" or "Nope." The application is ideal for a loosely coupled network of peers.
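As a rough sketch of the pattern Bell describes, the following Python example (standard library only; the data and the "interesting" test are invented for illustration) applies the same simple check independently to many small chunks, with each worker reporting back only a yes/no verdict.

```python
# A minimal "embarrassingly parallel" workload in the SETI@home style: the
# same simple test runs independently on many small data chunks, and each
# worker reports back only whether its chunk looked interesting.
from multiprocessing import Pool
import random

def is_interesting(chunk):
    # Stand-in for the real signal test: flag chunks whose mean value
    # exceeds an arbitrary threshold.
    return sum(chunk) / len(chunk) > 0.75

if __name__ == "__main__":
    random.seed(0)
    chunks = [[random.random() for _ in range(1000)] for _ in range(200)]
    with Pool() as pool:                      # one process per CPU core
        verdicts = pool.map(is_interesting, chunks)
    print(sum(verdicts), "of", len(chunks), "chunks flagged as interesting")
```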

Stray from that ideal situation, though, and things start to get complicated. Bell pointed out that bandwidth is so limited in wide-area networks, and latency so large and unpredictable, that any need for tight coupling between the peers renders the approach impractical. And of course, the individual task size has to fit in the background on the individual peer systems.

Is it possible to work around these limitations? Bell was guardedly pessimistic. "After two decades of building multicomputers — aka clusters that have relatively long latency among the nodes — the programming problem appears to be as difficult as ever," Bell wrote in an e-mail interview. The only progress, he said, has been to standardize on Beowulf — which specifies the minimum hardware and software requirements for Linux-based computer clusters — and MPI, a standard message-passing interface for them, "as a way to write portable programs that help get applications going, and help to establish a platform for ISVs [independent software vendors]."
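For readers unfamiliar with MPI, here is a minimal message-passing sketch using the mpi4py binding (my choice of binding; Bell refers to MPI generically). Rank 0 scatters chunks of work to the other ranks and gathers the partial results back; it would be launched with something like `mpiexec -n 4 python sum_mpi.py`.

```python
# Minimal MPI sketch: scatter work from rank 0, compute locally, gather results.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = list(range(1000))
    # split the data into `size` roughly equal chunks
    chunks = [data[i::size] for i in range(size)]
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)        # each rank receives one chunk
partial = sum(chunk)                        # local computation
totals = comm.gather(partial, root=0)       # collect partial sums at rank 0

if rank == 0:
    print("total =", sum(totals))
```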

Will we find ways to make a wider class of problems highly parallel? "I'm not optimistic about a silver bullet here," Bell replied. "To steal a phrase, it's hard work — really hard work."

But Bell does point to a few areas of interest. One is the observation that peer networks can work as pipelined systems just as well as parallel systems, providing that the traffic through the pipeline is not too high in bandwidth and the pipeline is tolerant of the WAN's latencies.

Will peer networks replace supercomputers? In the general case, Bell believes not. Technology consultant and architecture guru of long standing John Mashey agrees. "Anybody who's ever done serious high-performance computing knows that getting enough bandwidth to the data is an issue for lots of real problems," Mashey wrote. In some cases, creating a private network may be the only way to get the bandwidth and latency necessary to keep the computation under control. But that of course limits the number of peers that can be added to the system. And there are also issues of trust, security and organization to be faced.

But even within these limitations, it seems likely that peer computing on a massive scale will play an increasing role in the attack on certain types of problems. It may well be that our understanding of proteins, modeling of stars and galaxies, and synthesis of human thought will all depend on the use of peer networks to go where no individual computer or server farm can take us.

Some systems are too complex to be organized by an outside agent. Others — nanosystems — may be too small to be built by external devices. These problems lie within the realm of the second technology guess we are offering, the technology of self-assembling systems. Like peer-computing networks, self-assembling systems exist in specific instances today, although much more in the laboratory than on the Web. And like peer networks, self-assembling systems promise to break through significant barriers — at least in some instances — either of enormous complexity or of infinitesimal size.

One way of looking at self-assembling systems is through a series of criteria. As a gross generalization, a self-assembling system is made up of individual components that can either move themselves or alter their functions, that can connect to each other and that can sense where they are in the system that is assembling itself. The components must do those things without outside intervention.

The guiding example for much of the work in this area is that ultimate self-assembling system, the biological organism. Not by coincidence, much of the existing work in self-assembling systems is not in electronics or robotics but in a new field called synthetic biology.

In essence, synthetic biology has tried to create (or discover) a set of standard molecular building blocks that can be used to assemble DNA sequences with specific, predictable functions — DNA that will produce specific proteins when inserted into a living cell.

But according to Radhika Nagpal, assistant professor in computer science at Harvard University, the biological work is spilling over into electronics as well. Researchers are working on getting biomolecules to assemble themselves into predictable patterns while carrying along electronic components. Thus, the underlying pattern of molecules would be reflected in the organization of the electronics. Working in another direction, Harvard researcher George Whitesides has been experimenting with two-dimensional electronic circuits that can assemble themselves into three-dimensional circuits.

Much work is also being done on a larger scale, said Nagpal. Self-organizing robotic systems comprising from tens to perhaps a hundred modules have been built. While all of these projects are very much in the research arena, the individuals manning them work with actual hardware — if we can lump DNA into that category — not simply simulation models.

Nor is the work part of some futuristic scenario. "Some of it is nearer than you might think," Nagpal said.

(Photo: Researchers make rat brain neurons interact with an FET array at the Max Planck Institute.)

The nanotechnology area, though, remains longer-term. Few if any physical examples of self-assembling nanodevices exist today. But many of the principles being developed both in the synthetic-biology arena and in the work on selective-affinity self-assembly for electronic circuits may eventually prove applicable to nanoscale problems.

The final barrier for a breakthrough technology, and the one that is quite possibly the furthest away, is the barrier that separates electronic from living systems. One can envision electronic devices that can directly recognize or act upon living cells or perhaps even individual proteins. Such technology would make possible entirely new applications in medical analysis — identifying a marker protein or a virus in a blood sample, for instance — and in therapy. But the ability to directly interface electronics to cells would also make possible a long-held dream of science-fiction writers: electronic systems that communicate directly with the central nervous systems of humans, bypassing missing limbs or inadequate sense organs.

In this area too, there is science where there used to be science fiction. ICs have been fabricated for some time that are capable of sensing gross properties of chemical solutions, such as pH, the measure of acidity. But more to the point, researchers at the Interuniversity Microelectronics Center (IMEC; Leuven, Belgium) have been working on ICs that can steer individual protein molecules about on the surface of the die, moving them to a detection site where their presence can be recorded. To start the process, researchers first attach a magnetic nanobead to the protein. Then they manipulate a magnetic field to move the molecule. The detection is done by a spin-valve sensor.

Even more exciting work has been reported by IMEC and — at the forthcoming International Solid-State Circuits Conference — will be reported by the Max Planck Institute for Biochemistry (Munich, Germany). Both organizations have reportedly succeeded in fabricating ICs of their respective designs that comprise an array of sensors and transistors. The sensors can detect the electrical "action potentials" generated by neurons and the transistors can stimulate the neurons directly. Living neuron cells have been placed on the surface of the chip, stimulated and sensed. The Max Planck Institute claims to have grown neurons on the surface of a chip as well.

This is a technology of obvious potential, but with a long way to go. For one thing, the physical interface between electronic circuits and biochemical solutions — let alone living cells — is always problematic, according to Luke Lee, assistant professor of bioengineering and director of the Biomolecular Nanotechnology Center at the University of California, Berkeley. After the mechanisms have been understood and the sensors designed, there is still the problem of keeping the chemicals from destroying the chip. So even simple sensors are not a slam dunk.

Moving more delicate creations, such as neuron/chip interfaces, into production is even more problematic. One obvious issue is that the neurons you want to interface to aren't the ones you can extract and put on a chip — they are individuals among millions in a bundle of nerve fibers in a living body. But Lee pointed out that there are repeatability issues even with the in vitro work that is being reported now. It is still at the level of elegant demonstrations, not widely reproducible experiments with consistent results. "I am concerned that many people overpromise nanobiotechnology without really knowing the limitations of nano- and microfabrication," said Lee.


Alcatel & Microsoft develop IP Television

Alcatel and Microsoft Corp. announced Tuesday (Feb. 22) a global collaboration agreement to accelerate the availability of Internet Protocol Television (IPTV) services for broadband operators world-wide.

Under the agreement, the companies will team to develop an integrated IPTV delivery solution leveraging Alcatel's leadership in broadband, IP networking, development, and integration of end-to-end multimedia and video solutions, and Microsoft's expertise in TV software solutions and connected-entertainment experiences across consumer devices.

The companies believe the integrated solution can help service providers reduce deployment costs and shorten time-to-market for IPTV services as they transition to mass-market deployments of IPTV. On-demand video streaming applications, interactive TV, video and voice communications, photo, music and home video sharing, and online gaming are some services that consumers could receive through their multimedia-connected home networks.

Joint initiatives being pursued by both companies include developing custom applications to meet the unique needs of different cultures and markets globally; enhancing application and network resilience for better reliability in large-scale deployment; and integrating content, security and digital rights management to ensure secure delivery of high-quality content throughout the home.

Saturday, January 8, 2005

Lies in Scientific Research by the West

Inventions by Muslims

Roger Bacon is credited with drawing a flying apparatus as is Leonardo da Vinci. Actually Ibn Firnas of Islamic Spain invented, constructed, and tested a flying machine in the 800's A.D. Roger Bacon learned of flying machines from Arabic references to Ibn Firnas' machine. The latter's invention antedates Bacon by 500 years and Da Vinci by some 700 years.


It is taught that glass mirrors were first produced in Venice in the year 1291. Glass mirrors were actually in use in Islamic Spain as early as the 11th century. The Venetians learned the art of fine glass production from Syrian artisans during the 9th and 10th centuries.


It is taught that before the 14th century, the only type of clock available was the water clock. In 1335, a large mechanical clock was erected in Milan, Italy and it was claimed as the first mechanical clock. However, Spanish Muslim engineers produced both large and small mechanical clocks that were weight-driven. This knowledge was transmitted to Europe via Latin translations of Islamic books on mechanics, which contained designs and illustrations of epi-cyclic and segmental gears. One such clock included a mercury escapement. Europeans directly copied the latter type during the 15th century. In addition, during the 9th century, Ibn Firnas of Islamic Spain, according to Will Durant, invented a watch-like device which kept accurate time. The Muslims also constructed a variety of highly accurate astronomical clocks for use in their observatories.


In the 17th century, the pendulum was said to have been developed by Galileo during his teenage years. He noticed a chandelier swaying as it was being blown by the wind. As a result, he went home and invented the pendulum. However, the pendulum was actually discovered by Ibn Yunus al-Masri during the 10th century. He was the first to study and document a pendulum's oscillatory motion. Muslim physicists introduced its use in clocks in the 15th century.


It is taught that Johannes Gutenberg of Germany invented movable type and the printing press during the 15th century. In 1454, Gutenberg did develop the most sophisticated printing press of the Middle Ages. But movable brass type was in use in Islamic Spain 100 years prior, which is where the West's first printing devices were made.


It is taught that Isaac Newton's 17th century study of lenses, light, and prisms forms the foundation of the modern science of optics. Actually, al-Haytham in the 11th century determined virtually everything that Newton advanced regarding optics centuries prior and is regarded by numerous authorities as the "founder of optics." There is little doubt that Newton was influenced by him. Al-Haytham was the most quoted physicist of the Middle Ages. His works were utilized and quoted by a greater number of European scholars during the 16th and 17th centuries than those of Newton and Galileo combined.


The English scholar Roger Bacon (d. 1292) first mentioned glass lenses for improving vision. At nearly the same time, eyeglasses could be found in use both in China and Europe. Ibn Firnas of Islamic Spain invented eyeglasses during the 9th century, and they were manufactured and sold throughout Spain for over two centuries. Any mention of eyeglasses by Roger Bacon was simply a regurgitation of the work of al-Haytham (d. 1039), whose research Bacon frequently referred to.


Isaac Newton is said to have discovered during the 17th century that white light consists of various rays of colored light. This discovery was made in its entirety by al-Haytham (11th century) and Kamal ad-Din (14th century). Newton did make original discoveries, but this was not one of them.


The concept of the finite nature of matter was first introduced by Antoine Lavoisier during the 18th century. He discovered that, although matter may change its form or shape, its mass always remains the same. Thus, for instance, if water is heated to steam, if salt is dissolved in water, or if a piece of wood is burned to ashes, the total mass remains unchanged. The principles of this discovery were elaborated centuries before by Islamic Persia's great scholar, al-Biruni (d. 1050). Lavoisier was a disciple of the Muslim chemists and physicists and referred to their books frequently.


It is taught that the Greeks were the developers of trigonometry. Trigonometry remained largely a theoretical science among the Greeks. Muslim scholars developed it to a level of modern perfection, although the weight of the credit must be given to al-Battani. The words describing the basic functions of this science, sine, cosine and tangent, are all derived from Arabic terms. Thus, original contributions by the Greeks in trigonometry were minimal.


It is taught that a Dutchman, Simon Stevin, first developed the use of decimal fractions in mathematics in 1589. He helped advance the mathematical sciences by replacing the cumbersome fractions, for instance, 1/2, with decimal fractions, for example, 0.5. Muslim mathematicians were the first to utilize decimals instead of fractions on a large scale. Al-Kashi's book, Key to Arithmetic, was written at the beginning of the 15th century and was the stimulus for the systematic application of decimals to whole numbers and fractions thereof. It is highly probable that Stevin imported the idea to Europe from al-Kashi's work.


The first man to utilize algebraic symbols was the French mathematician, Francois Vieta. In 1591, he wrote an algebra book describing equations with letters such as the now familiar x and y's. Asimov says that this discovery had an impact similar to the progression from Roman numerals to Arabic numbers. Muslim mathematicians, the inventors of algebra, introduced the concept of using letters for unknown variables in equations as early as the 9th century A.D. Through this system, they solved a variety of complex equations, including quadratic and cubic equations. They used symbols to develop and perfect the binomial theorem.


It is taught that the difficult cubic equations (x to the third power) remained unsolved until the 16th century when Niccolo Tartaglia, an Italian mathematician, solved them. Muslim mathematicians solved cubic equations as well as numerous equations of even higher degrees with ease as early as the 10th century.


The concept that numbers could be less than zero, that is negative numbers, was unknown until 1545 when Geronimo Cardano introduced the idea. Muslim mathematicians introduced negative numbers for use in a variety of arithmetic functions at least 400 years prior to Cardano.


In 1614, John Napier invented logarithms and logarithmic tables. Muslim mathematicians invented logarithms and produced logarithmic tables several centuries prior. Such tables were common in the Muslim world as early as the 13th century.


During the 17th century Rene Descartes made the discovery that algebra could be used to solve geometrical problems. By this, he greatly advanced the science of geometry. Mathematicians of the Islamic Empire accomplished precisely this as early as the 9th century A.D. Thabit bin Qurrah was the first to do so, and he was followed by Abu'l Wafa, whose 10th century book utilized algebra to advance geometry into an exact and simplified science.


Isaac Newton, during the 17th century, developed the binomial theorem, which is a crucial component for the study of algebra. Hundreds of Muslim mathematicians utilized and perfected the binomial theorem. They initiated its use for the systematic solution of algebraic problems during the 10th century (or prior).


No improvement had been made in the astronomy of the ancients during the Middle Ages regarding the motion of planets until the 13th century. Then Alphonso the Wise of Castile (Middle Spain) invented the Alphonsine Tables, which were more accurate than Ptolemy's. Muslim astronomers made numerous improvements upon Ptolemy's findings as early as the 9th century. They were the first astronomers to dispute his archaic ideas. In their critique of the Greeks, they synthesized proof that the sun is the center of the solar system and that the orbits of the earth and other planets might be elliptical. They produced hundreds of highly accurate astronomical tables and star charts. Many of their calculations are so precise that they are regarded as contemporary. The Alphonsine Tables are little more than copies of works on astronomy transmitted to Europe via Islamic Spain, i.e. the Toledo Tables.


Gunpowder was developed in the Western world as a result of Roger Bacon's work in 1242. The first usage of gunpowder in weapons was when the Chinese fired it from bamboo shoots in an attempt to frighten Mongol conquerors. They produced it by adding sulfur and charcoal to saltpeter. The Chinese developed saltpeter for use in fireworks and knew of no tactical military use for gunpowder, nor did they invent its formula. Research by Reinaud and Fave has clearly shown that gunpowder was formulated initially by Muslim chemists. Further, these historians claim that the Muslims developed the first firearms. Notably, Muslim armies used grenades and other weapons in their defense of Algeciras against the Franks during the 14th century. Jean Mathes indicates that the Muslim rulers had stockpiles of grenades, rifles, crude cannons, incendiary devices, sulfur bombs and pistols decades before such devices were used in Europe. The first mention of a cannon was in an Arabic text around 1300 A.D. Roger Bacon learned of the formula for gunpowder from Latin translations of Arabic books. He brought forth nothing original in this regard.


It is taught that the Chinese, who may have been the first to use it for navigational purposes sometime between 1000 and 1100 A.D., invented the compass. The earliest reference to its use in navigation was by the Englishman Alexander Neckam (1157-1217). Muslim geographers and navigators learned of the magnetic needle, possibly from the Chinese, and were the first to use magnetic needles in navigation. They invented the compass and passed the knowledge of its use in navigation to the West. European navigators relied on Muslim pilots and their instruments when exploring unknown territories. Gustav Le Bon claims that the magnetic needle and compass were entirely invented by the Muslims and that the Chinese had little to do with it. Neckam, as well as the Chinese, probably learned of it from Muslim traders. It is noteworthy that the Chinese improved their navigational expertise after they began interacting with the Muslims during the 8th century.


The first man to classify the races was the German Johann F. Blumenbach, who divided mankind into white, yellow, brown, black and red peoples. Muslim scholars of the 9th through 14th centuries invented the science of ethnography. A number of Muslim geographers classified the races, writing detailed explanations of their unique cultural habits and physical appearances. They wrote thousands of pages on this subject. Blumenbach's works were insignificant in comparison.


The science of geography was revived during the 15th, 16th and 17th centuries when the ancient works of Ptolemy were discovered. The Crusades and the Portuguese/Spanish expeditions also contributed to this reawakening. The first scientifically based treatises on geography were produced during this period by Europe's scholars. Muslim geographers produced untold volumes of books on the geography of Africa, Asia, India, China and the Indies during the 8th through 15th centuries. These writings included the world's first geographical encyclopedias, almanacs and road maps. Ibn Battutah's 14th century masterpieces provide a detailed view of the geography of the ancient world. The Muslim geographers of the 10th through 15th centuries far exceeded the output by Europeans regarding the geography of these regions well into the 18th century. The Crusades led to the destruction of educational institutions, their scholars and books. They brought nothing substantive regarding geography to the Western world.


It is taught that Robert Boyle in the 17th century originated the science of chemistry. A variety of Muslim chemists, including ar-Razi, al-Jabr, al-Biruni and al-Kindi, performed scientific experiments in chemistry some 700 years prior to Boyle. Durant writes that the Muslims introduced the experimental method to this science. Humboldt regards the Muslims as the founders of chemistry.


It is taught that Leonardo da Vinci (16th century) fathered the science of geology when he noted that fossils found on mountains indicated a watery origin of the earth. Al-Biruni (11th century) made precisely this observation and added much to it, including a huge book on geology, hundreds of years before Da Vinci was born. Ibn Sina noted this as well (see pages 100-101). It is probable that Da Vinci first learned of this concept from Latin translations of Islamic books. He added nothing original to their findings.


The first mention of the geological formation of valleys was in 1756, when Nicolas Desmarest proposed that they were formed over long periods of time by streams. Ibn Sina and al-Biruni made precisely this discovery during the 11th century (see pages 102 and 103), fully 700 years prior to Desmarest.


It is taught that Galileo (17th century) was the world's first great experimenter. In fact, al-Biruni (d. 1050) was the world's first great experimenter. He wrote over 200 books, many of which discuss his precise experiments. His literary output in the sciences amounts to some 13,000 pages, far exceeding that written by Galileo or, for that matter, Galileo and Newton combined.


The Italian Giovanni Morgagni is regarded as the father of pathology because he was the first to correctly describe the nature of disease. Islam's surgeons were the first pathologists. They fully realized the nature of disease and described a variety of diseases to modern detail. Ibn Zuhr correctly described the nature of pleurisy, tuberculosis and pericarditis. Az-Zahrawi accurately documented the pathology of hydrocephalus (water on the brain) and other congenital diseases. Ibn al-Quff and Ibn an-Nafs gave perfect descriptions of the diseases of circulation. Other Muslim surgeons gave the first accurate descriptions of certain malignancies, including cancer of the stomach, bowel, and esophagus. These surgeons were the originators of pathology, not Giovanni Morgagni.


It is taught that Paul Ehrlich (19th century) is the originator of drug chemotherapy, that is the use of specific drugs to kill microbes. Muslim physicians used a variety of specific substances to destroy microbes. They applied sulfur topically specifically to kill the scabies mite. Ar-Razi (10th century) used mercurial compounds as topical antiseptics.


Purified alcohol, made through distillation, was first produced by Arnau de Villanova, a Spanish alchemist in 1300 A.D. Numerous Muslim chemists produced medicinal-grade alcohol through distillation as early as the 10th century and manufactured on a large scale the first distillation devices for use in chemistry. They used alcohol as a solvent and antiseptic.


C.W. Long, an American, conducted the first surgery performed under inhalation anesthesia in 1845. Six hundred years prior to Long, Islamic Spain's Az-Zahrawi and Ibn Zuhr, among other Muslim surgeons, performed hundreds of surgeries under inhalation anesthesia with the use of narcotic-soaked sponges which were placed over the face.


During the 16th century Paracelsus invented the use of opium extracts for anesthesia. Muslim physicians introduced the anesthetic value of opium derivatives during the Middle Ages. The Greeks originally used opium as an anesthetic agent. Paracelsus was a student of Ibn Sina's works, from which it is almost assured that he derived this idea.


Humphrey Davy and Horace Wells invented modern anesthesia in the 19th century. Modern anesthesia was discovered, mastered, and perfected by Muslim anesthetists 900 years before the advent of Davy and Wells. They utilized oral as well as inhalant anesthetics.


The concept of quarantine was first developed in 1403. In Venice, a law was passed preventing strangers from entering the city until a certain waiting period had passed. If by then no sign of illness could be found, they were allowed in. The concept of quarantine was first introduced in the 7th century A.D. by the prophet Muhammad (P.B.U.H.), who wisely warned against entering or leaving a region suffering from plague. As early as the 10th century, Muslim physicians innovated the use of isolation wards for individuals suffering with communicable diseases.


The scientific use of antiseptics in surgery was discovered by the British surgeon Joseph Lister in 1865. As early as the 10th century, Muslim physicians and surgeons were applying purified alcohol to wounds as an antiseptic agent. Surgeons in Islamic Spain utilized special methods for maintaining antisepsis prior to and during surgery. They also originated specific protocols for maintaining hygiene during the post-operative period. Their success rate was so high that dignitaries throughout Europe came to Cordova, Spain, to be treated at what was comparably the "Mayo Clinic" of the Middle Ages.


It is taught that in 1545 the scientific use of surgery was advanced by the French surgeon Ambroise Pare. Prior to him, surgeons attempted to stop bleeding through the gruesome procedure of searing the wound with boiling oil. Pare stopped the use of boiling oils and began ligating arteries. He is considered the "father of rational surgery." Pare was also one of the first Europeans to condemn such grotesque "surgical" procedures as trepanning (see reference #6, pg. 110). Islamic Spain's illustrious surgeon, az-Zahrawi (d. 1013), began ligating arteries with fine sutures over 500 years prior to Pare. He perfected the use of catgut, that is, suture made from animal intestines. Additionally, he instituted the use of cotton plus wax to plug bleeding wounds. The full details of his works were made available to Europeans through Latin translations. Despite this, barbers and herdsmen continued to be the primary individuals practicing the "art" of surgery for nearly six centuries after az-Zahrawi's death. Pare himself was a barber, albeit more skilled and conscientious than the average ones. Included in az-Zahrawi's legacy are dozens of books. His most famous work is a 30-volume treatise on medicine and surgery. His books contain sections on preventive medicine, nutrition, cosmetics, drug therapy, surgical technique, anesthesia, pre- and post-operative care, as well as drawings of some 200 surgical devices, many of which he invented. The refined and scholarly az-Zahrawi must be regarded as the father and founder of rational surgery, not the uneducated Pare.


William Harvey, during the early 17th century, discovered that blood circulates. He was the first to correctly describe the function of the heart, arteries, and veins. Rome's Galen had presented erroneous ideas regarding the circulatory system, and Harvey was the first to determine that blood is pumped throughout the body via the action of the heart and the venous valves. Therefore, he is regarded as the founder of human physiology. In the 10th century, Islam's ar-Razi wrote an in-depth treatise on the venous system, accurately describing the function of the veins and their valves. Ibn an-Nafs and Ibn al-Quff (13th century) provided full documentation that the blood circulates and correctly described the physiology of the heart and the function of its valves 300 years before Harvey. William Harvey was a graduate of Italy's famous Padua University at a time when the majority of its curriculum was based upon Ibn Sina's and ar-Razi's textbooks.


The first pharmacopoeia (book of medicines) was published by a German scholar in 1542. According to World Book Encyclopedia, the science of pharmacology was begun in the 1900's as an offshoot of chemistry due to the analysis of crude plant materials. Chemists, after isolating the active ingredients from plants, realized their medicinal value. According to the eminent scholar of Arab history, Phillip Hitti, the Muslims, not the Greeks or Europeans, wrote the first "modern" pharmacopoeia. The science of pharmacology was originated by Muslim physicians during the 9th century. They developed it into a highly refined and exact science. Muslim chemists, pharmacists, and physicians produced thousands of drugs and/or crude herbal extracts one thousand years prior to the supposed birth of pharmacology. During the 14th century Ibn Baytar wrote a monumental pharmacopoeia listing some 1400 different drugs. Hundreds of other pharmacopoeias were published during the Islamic Era. It is likely that the German work is an offshoot of that by Ibn Baytar, which was widely circulated in Europe.


It is taught that the discovery of the scientific use of drugs in the treatment of specific diseases was made by Paracelsus, the Swiss-born physician, during the 16th century. He is also credited with being the first to use practical experience as a determining factor in the treatment of patients rather than relying exclusively on the works of the ancients. Ar-Razi, Ibn Sina, al-Kindi, Ibn Rushd, az-Zahrawi, Ibn Zuhr, Ibn Baytar, Ibn al-Jazzar, Ibn Juljul, Ibn al-Quff, Ibn an-Nafs, al-Biruni, Ibn Sahl and hundreds of other Muslim physicians mastered the science of drug therapy for the treatment of specific symptoms and diseases. In fact, this concept was entirely their invention. The word "drug" is derived from Arabic. Their use of practical experience and careful observation was extensive. Muslim physicians were the first to criticize ancient medical theories and practices. Ar-Razi devoted an entire book as a critique of Galen's anatomy. The works of Paracelsus are insignificant compared to the vast volumes of medical writings and original findings accomplished by the medical giants of Islam.


The first sound approach to the treatment of disease was made by a German, Johann Weger, in the 1500's. Harvard's George Sarton says that modern medicine is entirely an Islamic development, while emphasizing that Muslim physicians of the 9th through 12th centuries were precise, scientific, rational, and sound in their approach. Johann Weger was among thousands of European physicians during the 15th through 17th centuries who were taught the medicine of ar-Razi and Ibn Sina. He contributed nothing original.


Medical treatment for the insane was modernized by Philippe Pinel when in 1793 he operated France's first insane asylum. As early as the 11th century, Islamic hospitals maintained special wards for the insane. They treated them kindly and presumed their disease was real at a time when the insane were routinely burned alive in Europe as witches and sorcerers. A curative approach was taken for mental illness and for the first time in history, the mentally ill were treated with supportive care, drugs, and psychotherapy. Every major Islamic city maintained an insane asylum where patients were treated at no charge. In fact, the Islamic system for the treatment of the insane excels in comparison to the current model, as it was more humane and was highly effective as well.


It is taught that kerosene was first produced by an Englishman, Abraham Gesner in 1853. He distilled it from asphalt. Muslim chemists produced kerosene as a distillate from petroleum products over 1,000 years prior to Gesner (see Encyclopaedia Britannica under the heading, Petroleum).

Friday, January 7, 2005

Virtual Realism

With current advances in graphics, whether sophisticated algorithms, superfast graphics processing units (GPUs), or the other advanced technologies that support them, I predict that we will soon have what I call "Virtual Realism". The concept is that "movie" makers create graphical scenes the way game developers and animators do. The scenes themselves are constructed only of triangles/vertices, color codes, ray information, and so on. This data is then streamed over the Internet. People who want to watch the movie just click a link, and the server starts streaming the information.

The received data is then decoded by a computer attached to the viewer's TV, which reconstructs the information into a full animation. Because graphics technology will be so advanced, the scenes will look truly realistic. The benefit of this mechanism is that viewers may be able to change the scenario or story, the actors and actresses, the voices, and so on. It opens up a lot of possibilities and will surely make movies more interactive and fun.

What about the streaming bandwidth? Since the transmission involves only 'codes', not rendered images of the scenes, far less information needs to be sent, so the remaining bandwidth can be used for other data (such as audio, subtitles, and other information). Quality? The quality will be high, perhaps even higher than today's HD, because the system regenerates the scenes locally (noiselessly) instead of displaying images from the original source.
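To make the bandwidth argument concrete, here is a rough back-of-the-envelope sketch in Python; the triangle count, frame size, and frame rate are assumptions picked only for illustration, not measurements of any real system.

```python
# Rough comparison: streaming scene geometry once vs. streaming raw video
# frames continuously. All numbers below are illustrative assumptions.
BYTES_PER_VERTEX = 3 * 4                 # x, y, z stored as 32-bit floats
TRIANGLES_PER_SCENE = 50_000             # assumed complexity of one scene
scene_bytes = TRIANGLES_PER_SCENE * 3 * BYTES_PER_VERTEX   # sent once per scene

FRAME_BYTES = 1920 * 1080 * 3            # one uncompressed 1080p RGB frame
FPS = 30
video_bytes_per_second = FRAME_BYTES * FPS

print(f"one-time scene description : {scene_bytes / 1e6:6.1f} MB")
print(f"raw video, per second      : {video_bytes_per_second / 1e6:6.1f} MB")
```

The scene description is sent roughly once per scene and reused across many frames, whereas raw video must be delivered every second, which is where the bandwidth saving comes from.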