Friday, February 25, 2005

Tech trends will topple tradition

By Ron Wilson
EE Times

January 10, 2005 (9:00 AM EST)


OUTLOOK 2005
Where to look for earthshaking technology developments? Probably the best place to start is with the roadblocks that appear to stand in the way of traditional progress. There seem to be three of them, which the industry is approaching at considerable velocity. One is the diminishing progress in making CPUs faster. Another is the inability of manufacturing to keep up with the exponential growth in the complexity of systems. And the third is the seemingly insurmountable barrier between microelectronic and living systems.

For several years, there has been a grassroots movement to perform supercomputing problems on multiprocessing systems in which "multiple" means thousands, or even millions. Welcome to the world of peer computing.

The concept is disarmingly simple. There are millions of PCs, workstations and servers in the world, most of which sit unconscionably idle most of the time. If pieces of an enormous computing task could be dispatched over the Internet to some of these machines — say, a few tens of thousands — and if the pieces ran in the background, so that the users weren't inconvenienced, a lot of computing work could be done essentially for free.

This is exactly the way the Search for Extraterrestrial Intelligence (SETI) at Home project works. Most of the people who run SETI are volunteers. But there are also commercial uses of grid networks, as such Internet-linked communities of computers are known. United Devices (Austin, Texas), which provided the supervisory software for SETI, is a commercial enterprise that sells grid-computing systems to commercial clients.

Of course, there is fine print in the tale, too. One obvious issue is that massive networks of loosely coupled computers are useful only if the application lends itself to massive parallelism.

These are the applications that Gordon Bell, senior researcher at Microsoft Corp.'s Bay Area Research Center, calls "embarrassingly parallel." In the SETI program, for instance, essentially the same relatively simple calculations are being performed on enormous numbers of relatively small data sets. The only communication necessary between the peer computer and the supervisor, once the data is delivered to the peer, is a simple "Yes, this one is interesting" or "Nope." The application is ideal for a loosely coupled network of peers.
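
To make the shape of such a workload concrete, here is a minimal sketch in Python (my own toy, not the SETI software; the data, the threshold and the function names are invented). Each chunk is analyzed independently and the only result that comes back is a yes-or-no verdict:

    from multiprocessing import Pool
    import random

    def analyze(chunk):
        # Stand-in for the real signal test: flag a chunk whose peak exceeds a threshold.
        return max(chunk) > 0.9999

    if __name__ == "__main__":
        # Many small, independent data sets: the "embarrassingly parallel" case.
        chunks = [[random.random() for _ in range(10_000)] for _ in range(200)]
        with Pool() as pool:
            # Workers never talk to each other; each returns only "interesting or not".
            verdicts = pool.map(analyze, chunks)
        print(sum(verdicts), "of", len(chunks), "chunks flagged as interesting")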

Stray from that ideal situation, though, and things start to get complicated. Bell pointed out that bandwidth is so limited in wide-area networks, and latency so large and unpredictable, that any need for tight coupling between the peers renders the approach impractical. And of course, the individual task size has to fit in the background on the individual peer systems.

Is it possible to work around these limitations? Bell was guardedly pessimistic. "After two decades of building multicomputers — aka clusters that have relatively long latency among the nodes — the programming problem appears to be as difficult as ever," Bell wrote in an e-mail interview. The only progress, he said, has been to standardize on Beowulf — which specifies the minimum hardware and software requirements for Linux-based computer clusters — and MPI, a standard message-passing interface for them, "as a way to write portable programs that help get applications going, and help to establish a platform for ISVs [independent software vendors]."
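
As a concrete illustration of that message-passing style, the sketch below uses the mpi4py binding to MPI; the binding and the toy "analysis" are my choices for the example, since Bell names only MPI itself. A supervisor rank hands each worker a chunk and collects a one-word verdict, exactly the loose coupling that suits a wide-area network:

    # Run under an MPI launcher, e.g.:  mpiexec -n 4 python thisscript.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Supervisor: send one chunk of work to every other rank...
        for worker in range(1, comm.Get_size()):
            comm.send(list(range(worker * 1000, (worker + 1) * 1000)), dest=worker, tag=1)
        # ...and collect a simple yes/no result from each.
        for worker in range(1, comm.Get_size()):
            print("worker", worker, "interesting:", comm.recv(source=worker, tag=2))
    else:
        chunk = comm.recv(source=0, tag=1)
        interesting = sum(chunk) % 7 == 0        # stand-in for the real analysis
        comm.send(interesting, dest=0, tag=2)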

Will we find ways to make a wider class of problems highly parallel? "I'm not optimistic about a silver bullet here," Bell replied. "To steal a phrase, it's hard work — really hard work."

But Bell does point to a few areas of interest. One is the observation that peer networks can work as pipelined systems just as well as parallel systems, providing that the traffic through the pipeline is not too high in bandwidth and the pipeline is tolerant of the WAN's latencies.
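
That pipelined arrangement can be caricatured with threads and queues standing in for WAN-separated peers (the stages and their work below are invented for illustration). Each stage only needs its input queue to be fed often enough, so long and uneven latencies between stages are tolerable as long as the data rate stays modest:

    import queue, threading

    def stage(work, inbox, outbox):
        # Each "peer" pulls from upstream, works, and pushes downstream.
        while True:
            item = inbox.get()
            if item is None:          # shutdown sentinel: pass it along and stop
                outbox.put(None)
                break
            outbox.put(work(item))

    q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=stage, args=(lambda x: x * 2, q_in, q_mid)),
        threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out)),
    ]
    for t in threads:
        t.start()
    for item in range(5):
        q_in.put(item)
    q_in.put(None)
    while (result := q_out.get()) is not None:
        print(result)
    for t in threads:
        t.join()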

Will peer networks replace supercomputers? In the general case, Bell believes not. Technology consultant and architecture guru of long standing John Mashey agrees. "Anybody who's ever done serious high-performance computing knows that getting enough bandwidth to the data is an issue for lots of real problems," Mashey wrote. In some cases, creating a private network may be the only way to get the bandwidth and latency necessary to keep the computation under control. But that of course limits the number of peers that can be added to the system. And there are also issues of trust, security and organization to be faced.

But even within these limitations, it seems likely that peer computing on a massive scale will play an increasing role in the attack on certain types of problems. It may well be that our understanding of proteins, modeling of stars and galaxies, and synthesis of human thought may all depend on the use of peer networks to go where no individual computer or server farm can take us.

Some systems are too complex to be organized by an outside agent. Others — nanosystems — may be too small to be built by external devices. These problems lie within the realm of the second technology guess we are offering, the technology of self-assembling systems. Like peer-computing networks, self-assembling systems exist in specific instances today, although much more in the laboratory than on the Web. And like peer networks, self-assembling systems promise to break through significant barriers — at least in some instances — either of enormous complexity or of infinitesimal size.

One way of looking at self-assembling systems is through a series of criteria. As a gross generalization, a self-assembling system is made up of individual components that can either move themselves or alter their functions, that can connect to each other and that can sense where they are in the system that is assembling itself. The components must do those things without outside intervention.
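
Those criteria can be caricatured in a few lines of code. In the toy below, which is entirely my own illustration and not a model from any of the projects discussed here, particles move themselves by random walk, sense whether a neighboring site already belongs to the structure, and connect by latching on when it does; no outside controller places anything:

    import random

    SIZE = 21
    cluster = {(SIZE // 2, SIZE // 2)}            # a single seed component
    MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def touches_cluster(x, y):
        return any((x + dx, y + dy) in cluster for dx, dy in MOVES)

    for _ in range(60):                           # release 60 free components
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        while not touches_cluster(x, y):          # "sense where you are"
            dx, dy = random.choice(MOVES)         # "move yourself"
            x = min(max(x + dx, 0), SIZE - 1)
            y = min(max(y + dy, 0), SIZE - 1)
        cluster.add((x, y))                       # "connect"

    for row in range(SIZE):
        print("".join("#" if (row, col) in cluster else "." for col in range(SIZE)))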

The guiding example for much of the work in this area is that ultimate self-assembling system, the biological organism. Not by coincidence, much of the existing work in self-assembling systems is not in electronics or robotics but in a new field called synthetic biology.

In essence, synthetic biology has tried to create (or discover) a set of standardized DNA building blocks that can be assembled into sequences with specific, predictable functions: DNA that will produce specific proteins when inserted into a living cell.

But according to Radhika Nagpal, assistant professor in computer science at Harvard University, the biological work is spilling over into electronics as well. Researchers are working on getting biomolecules to assemble themselves into predictable patterns while carrying along electronic components. Thus, the underlying pattern of molecules would be reflected in the organization of the electronics. Working in another direction, Harvard researcher George Whitesides has been experimenting with two-dimensional electronic circuits that can assemble themselves into three-dimensional circuits.

Much work is also being done on a larger scale, said Nagpal. Self-organizing robotic systems comprising from tens to perhaps a hundred modules have been built. While all of these projects are very much in the research arena, the individuals manning them work with actual hardware — if we can lump DNA into that category — not simply simulation models.

Nor is the work part of some futuristic scenario. "Some of it is nearer than you might think," Nagpal said.

[Photo caption: Researchers make rat brain neurons interact with an FET array at the Max Planck Institute.]

The nanotechnology area, though, remains longer-term. Few if any physical examples of self-assembling nanodevices exist today. But many of the principles being developed both in the synthetic-biology arena and in the work on selective-affinity self-assembly for electronic circuits may eventually prove applicable to nanoscale problems.

The final barrier for a breakthrough technology, and the one that is quite possibly the furthest away, is the barrier that separates electronic from living systems. One can envision electronic devices that can directly recognize or act upon living cells or perhaps even individual proteins. Such technology would make possible entirely new applications in medical analysis — identifying a marker protein or a virus in a blood sample, for instance — and in therapy. But the ability to directly interface electronics to cells would also make possible a long-held dream of science-fiction writers: electronic systems that communicate directly with the central nervous systems of humans, bypassing missing limbs or inadequate sense organs.

In this area too, there is science where there used to be science fiction. ICs have been fabricated for some time that are capable of sensing gross properties of chemical solutions, such as pH, the measure of acidity. But more to the point, researchers at the Interuniversity Microelectronics Center (IMEC; Leuven, Belgium) have been working on ICs that can steer individual protein molecules about on the surface of the die, moving them to a detection site where their presence can be recorded. To start the process, researchers first attach a magnetic nanobead to the protein. Then they manipulate a magnetic field to move the molecule. The detection is done by a spin-valve sensor.

Even more exciting work has been reported by IMEC and — at the forthcoming International Solid-State Circuits Conference — will be reported by the Max Planck Institute for Biochemistry (Munich, Germany). Both organizations have reportedly succeeded in fabricating ICs of their respective designs that comprise an array of sensors and transistors. The sensors can detect the electrical "action potentials" generated by neurons and the transistors can stimulate the neurons directly. Living neuron cells have been placed on the surface of the chip, stimulated and sensed. The Max Planck Institute claims to have grown neurons on the surface of a chip as well.

This is a technology of obvious potential, but with a long way to go. For one thing, the physical interface between electronic circuits and biochemical solutions — let alone living cells — is always problematic, according to Luke Lee, assistant professor of bioengineering and director of the Biomolecular Nanotechnology Center at the University of California, Berkeley. After the mechanisms have been understood and the sensors designed, there is still the problem of keeping the chemicals from destroying the chip. So even simple sensors are not a slam dunk.

Moving more delicate creations, such as neuron/chip interfaces, into production is even more problematic. One obvious issue is that the neurons you want to interface to aren't the ones you can extract and put on a chip — they are individuals among millions in a bundle of nerve fibers in a living body. But Lee pointed out that there are repeatability issues even with the in vitro work that is being reported now. It is still at the level of elegant demonstrations, not widely reproducible experiments with consistent results. "I am concerned that many people overpromise nanobiotechnology without really knowing the limitations of nano- and microfabrication," said Lee.


Alcatel & Microsoft develop IP Television

Alcatel and Microsoft Corp. announced Tuesday (Feb. 22) a global collaboration agreement to accelerate the availability of Internet Protocol Television (IPTV) services for broadband operators world-wide.

Under the agreement, the companies will team to develop an integrated IPTV delivery solution leveraging Alcatel's leadership in broadband, IP networking, development, and integration of end-to-end multimedia and video solutions, and Microsoft's expertise in TV software solutions and connected-entertainment experiences across consumer devices.

The companies believe the integrated solution can help service providers reduce deployment costs and shorten time-to-market for IPTV services as they transition to mass-market deployments of IPTV. On-demand video streaming applications, interactive TV, video and voice communications, photo, music and home video sharing, and online gaming are some services that consumers could receive through their multimedia-connected home networks.

Joint initiatives being pursued by both companies include developing custom applications to meet the unique needs of different cultures and markets globally; enhancing application and network resilience for better reliability in large-scale deployment; and integrating content, security and digital rights management to ensure secure delivery of high-quality content throughout the home.

Saturday, January 8, 2005

Lies in Scientific Research by the West

Inventions by Muslims

Roger Bacon is credited with drawing a flying apparatus, as is Leonardo da Vinci. Actually, Ibn Firnas of Islamic Spain invented, constructed, and tested a flying machine in the 800's A.D. Roger Bacon learned of flying machines from Arabic references to Ibn Firnas' machine. The latter's invention antedates Bacon by 500 years and Da Vinci by some 700 years.


It is taught that glass mirrors were first produced in Venice in 1291. Glass mirrors were actually in use in Islamic Spain as early as the 11th century. The Venetians learned the art of fine glass production from Syrian artisans during the 9th and 10th centuries.


It is taught that before the 14th century, the only type of clock available was the water clock. In 1335, a large mechanical clock was erected in Milan, Italy and it was claimed as the first mechanical clock. However, Spanish Muslim engineers produced both large and small mechanical clocks that were weight-driven. This knowledge was transmitted to Europe via Latin translations of Islamic books on mechanics, which contained designs and illustrations of epi-cyclic and segmental gears. One such clock included a mercury escapement. Europeans directly copied the latter type during the 15th century. In addition, during the 9th century, Ibn Firnas of Islamic Spain, according to Will Durant, invented a watch-like device which kept accurate time. The Muslims also constructed a variety of highly accurate astronomical clocks for use in their observatories.


In the 17th century, the pendulum was said to have been developed by Galileo during his teenage years. He noticed a chandelier swaying as it was blown by the wind and, as a result, went home and invented the pendulum. However, the pendulum was actually discovered by Ibn Yunus al-Masri during the 10th century. He was the first to study and document a pendulum's oscillatory motion. Muslim physicists introduced its value for use in clocks in the 15th century.


It is taught that Johannes Gutenberg of Germany invented movable type and the printing press during the 15th century. In 1454, Gutenberg did develop the most sophisticated printing press of the Middle Ages. But movable brass type was in use in Islamic Spain 100 years prior, which is where the West's first printing devices were made.


It is taught that Isaac Newton's 17th-century study of lenses, light, and prisms forms the foundation of the modern science of optics. Actually, al-Haytham in the 11th century determined virtually everything that Newton advanced regarding optics centuries prior and is regarded by numerous authorities as the "founder of optics." There is little doubt that Newton was influenced by him. Al-Haytham was the most quoted physicist of the Middle Ages. His works were utilized and quoted by a greater number of European scholars during the 16th and 17th centuries than those of Newton and Galileo combined.


The English scholar Roger Bacon (d. 1292) first mentioned glass lenses for improving vision. At nearly the same time, eyeglasses could be found in use both in China and Europe. Ibn Firnas of Islamic Spain invented eyeglasses during the 9th century, and they were manufactured and sold throughout Spain for over two centuries. Any mention of eyeglasses by Roger Bacon was simply a regurgitation of the work of al-Haytham (d. 1039), whose research Bacon frequently referred to.


Isaac Newton is said to have discovered during the 17th century that white light consists of various rays of colored light. This discovery was made in its entirety by al-Haytham (11th century) and Kamal ad-Din (14th century). Newton did make original discoveries, but this was not one of them.


The concept of the finite nature of matter was first introduced by Antoine Lavoisier during the 18th century. He discovered that, although matter may change its form or shape, its mass always remains the same. Thus, for instance, if water is heated to steam, if salt is dissolved in water, or if a piece of wood is burned to ashes, the total mass remains unchanged. The principles of this discovery were elaborated centuries before by Islamic Persia's great scholar, al-Biruni (d. 1050). Lavoisier was a disciple of the Muslim chemists and physicists and referred to their books frequently.


It is taught that the Greeks were the developers of trigonometry. Trigonometry remained largely a theoretical science among the Greeks. Muslim scholars developed it to a level of modern perfection, although the weight of the credit must be given to al-Battani. The words describing the basic functions of this science, sine, cosine and tangent, are all derived from Arabic terms. Thus, original contributions by the Greeks in trigonometry were minimal.


It is taught that a Dutchman, Simon Stevin, first developed the use of decimal fractions in mathematics in 1589. He helped advance the mathematical sciences by replacing the cumbersome fractions, for instance, 1/2, with decimal fractions, for example, 0.5. Muslim mathematicians were the first to utilize decimals instead of fractions on a large scale. Al-Kashi's book, Key to Arithmetic, was written at the beginning of the 15th century and was the stimulus for the systematic application of decimals to whole numbers and fractions thereof. It is highly probable that Stevin imported the idea to Europe from al-Kashi's work.


The first man to utilize algebraic symbols was the French mathematician, Francois Vieta. In 1591, he wrote an algebra book describing equations with letters such as the now familiar x and y's. Asimov says that this discovery had an impact similar to the progression from Roman numerals to Arabic numbers. Muslim mathematicians, the inventors of algebra, introduced the concept of using letters for unknown variables in equations as early as the 9th century A.D. Through this system, they solved a variety of complex equations, including quadratic and cubic equations. They used symbols to develop and perfect the binomial theorem.


It is taught that the difficult cubic equations (x to the third power) remained unsolved until the 16th century when Niccolo Tartaglia, an Italian mathematician, solved them. Muslim mathematicians solved cubic equations as well as numerous equations of even higher degrees with ease as early as the 10th century.


The concept that numbers could be less than zero, that is negative numbers, was unknown until 1545 when Geronimo Cardano introduced the idea. Muslim mathematicians introduced negative numbers for use in a variety of arithmetic functions at least 400 years prior to Cardano.


In 1614, John Napier invented logarithms and logarithmic tables. Muslim mathematicians invented logarithms and produced logarithmic tables several centuries prior. Such tables were common in the Muslim world as early as the 13th century.


During the 17th century Rene Descartes made the discovery that algebra could be used to solve geometrical problems. By this, he greatly advanced the science of geometry. Mathematicians of the Islamic Empire accomplished precisely this as early as the 9th century A.D. Thabit bin Qurrah was the first to do so, and he was followed by Abu'l Wafa, whose 10th century book utilized algebra to advance geometry into an exact and simplified science.


Isaac Newton, during the 17th century, developed the binomial theorem, which is a crucial component for the study of algebra. Hundreds of Muslim mathematicians utilized and perfected the binomial theorem. They initiated its use for the systematic solution of algebraic problems during the 10th century (or prior).


It is taught that no improvement had been made in the astronomy of the ancients during the Middle Ages regarding the motion of planets until the 13th century, when Alphonso the Wise of Castile (central Spain) produced the Alphonsine Tables, which were more accurate than Ptolemy's. In fact, Muslim astronomers made numerous improvements upon Ptolemy's findings as early as the 9th century. They were the first astronomers to dispute his archaic ideas. In their critique of the Greeks, they advanced proof that the sun is the center of the solar system and that the orbits of the earth and other planets might be elliptical. They produced hundreds of highly accurate astronomical tables and star charts. Many of their calculations are so precise that they are regarded as contemporary. The Alphonsine Tables are little more than copies of works on astronomy transmitted to Europe via Islamic Spain, i.e. the Toledo Tables.


It is taught that gunpowder was developed in the Western world as a result of Roger Bacon's work in 1242, and that its first use in weapons came when the Chinese fired it from bamboo shoots in an attempt to frighten Mongol conquerors, having produced it by adding sulfur and charcoal to saltpeter. In fact, the Chinese developed saltpeter for use in fireworks and knew of no tactical military use for gunpowder, nor did they invent its formula. Research by Reinaud and Fave has clearly shown that gunpowder was formulated initially by Muslim chemists. Further, these historians claim that the Muslims developed the first firearms. Notably, Muslim armies used grenades and other weapons in their defense of Algeciras against the Franks during the 14th century. Jean Mathes indicates that the Muslim rulers had stockpiles of grenades, rifles, crude cannons, incendiary devices, sulfur bombs and pistols decades before such devices were used in Europe. The first mention of a cannon was in an Arabic text around 1300 A.D. Roger Bacon learned of the formula for gunpowder from Latin translations of Arabic books. He brought forth nothing original in this regard.


It is taught that the Chinese invented the compass and may have been the first to use it for navigational purposes, sometime between 1000 and 1100 A.D. The earliest reference to its use in navigation was by the Englishman Alexander Neckam (1157-1217). Muslim geographers and navigators learned of the magnetic needle, possibly from the Chinese, and were the first to use magnetic needles in navigation. They invented the compass and passed the knowledge of its use in navigation to the West. European navigators relied on Muslim pilots and their instruments when exploring unknown territories. Gustav Le Bon claims that the magnetic needle and compass were entirely invented by the Muslims and that the Chinese had little to do with it. Neckam, as well as the Chinese, probably learned of it from Muslim traders. It is noteworthy that the Chinese improved their navigational expertise after they began interacting with the Muslims during the 8th century.


The first man to classify the races was the German Johann F. Blumenbach, who divided mankind into white, yellow, brown, black and red peoples. Muslim scholars of the 9th through 14th centuries invented the science of ethnography. A number of Muslim geographers classified the races, writing detailed explanations of their unique cultural habits and physical appearances. They wrote thousands of pages on this subject. Blumenbach's works were insignificant in comparison.


The science of geography was revived during the 15th, 16th and 17th centuries when the ancient works of Ptolemy were rediscovered. The Crusades and the Portuguese/Spanish expeditions also contributed to this reawakening. The first scientifically based treatises on geography were produced during this period by Europe's scholars. However, Muslim geographers produced untold volumes of books on the geography of Africa, Asia, India, China and the Indies during the 8th through 15th centuries. These writings included the world's first geographical encyclopedias, almanacs and road maps. Ibn Battutah's 14th century masterpieces provide a detailed view of the geography of the ancient world. The Muslim geographers of the 10th through 15th centuries far exceeded the output by Europeans regarding the geography of these regions well into the 18th century. The Crusades led to the destruction of educational institutions, their scholars and books. They brought nothing substantive regarding geography to the Western world.


It is taught that Robert Boyle in the 17th century originated the science of chemistry. A variety of Muslim chemists, including ar-Razi, al-Jabr, al-Biruni and al-Kindi, performed scientific experiments in chemistry some 700 years prior to Boyle. Durant writes that the Muslims introduced the experimental method to this science. Humboldt regards the Muslims as the founders of chemistry.


It is taught that Leonardo da Vinci (16th century) fathered the science of geology when he noted that fossils found on mountains indicated a watery origin of the earth. Al-Biruni (11th century) made precisely this observation and added much to it, including a huge book on geology, hundreds of years before Da Vinci was born. Ibn Sina noted this as well (see pages 100-101). It is probable that Da Vinci first learned of this concept from Latin translations of Islamic books. He added nothing original to their findings.


The first mention of the geological formation of valleys was said to be in 1756, when Nicolas Desmarest proposed that they were formed over long periods of time by streams. Ibn Sina and al-Biruni made precisely this discovery during the 11th century (see pages 102 and 103), fully 700 years prior to Desmarest.


It is taught that Galileo (17th century) was the world's first great experimenter. In fact, al-Biruni (d. 1050) was the world's first great experimenter. He wrote over 200 books, many of which discuss his precise experiments. His literary output in the sciences amounts to some 13,000 pages, far exceeding that written by Galileo or, for that matter, Galileo and Newton combined.


The Italian Giovanni Morgagni is regarded as the father of pathology because he was the first to correctly describe the nature of disease. Islam's surgeons were the first pathologists. They fully realized the nature of disease and described a variety of diseases to modern detail. Ibn Zuhr correctly described the nature of pleurisy, tuberculosis and pericarditis. Az-Zahrawi accurately documented the pathology of hydrocephalus (water on the brain) and other congenital diseases. Ibn al-Quff and Ibn an-Nafs gave perfect descriptions of the diseases of circulation. Other Muslim surgeons gave the first accurate descriptions of certain malignancies, including cancer of the stomach, bowel, and esophagus. These surgeons were the originators of pathology, not Giovanni Morgagni.


It is taught that Paul Ehrlich (19th century) is the originator of drug chemotherapy, that is the use of specific drugs to kill microbes. Muslim physicians used a variety of specific substances to destroy microbes. They applied sulfur topically specifically to kill the scabies mite. Ar-Razi (10th century) used mercurial compounds as topical antiseptics.


Purified alcohol, made through distillation, was first produced by Arnau de Villanova, a Spanish alchemist in 1300 A.D. Numerous Muslim chemists produced medicinal-grade alcohol through distillation as early as the 10th century and manufactured on a large scale the first distillation devices for use in chemistry. They used alcohol as a solvent and antiseptic.


It is taught that C.W. Long, an American, conducted the first surgery performed under inhalation anesthesia, in 1845. Six hundred years prior to Long, Islamic Spain's az-Zahrawi and Ibn Zuhr, among other Muslim surgeons, performed hundreds of surgeries under inhalation anesthesia with the use of narcotic-soaked sponges placed over the face.


It is taught that during the 16th century Paracelsus invented the use of opium extracts for anesthesia. In fact, Muslim physicians introduced the anesthetic value of opium derivatives during the Middle Ages; the Greeks had originally used opium as an anesthetic agent. Paracelsus was a student of Ibn Sina's works, from which he almost certainly derived this idea.


It is taught that Humphry Davy and Horace Wells invented modern anesthesia in the 19th century. Modern anesthesia was in fact discovered, mastered, and perfected by Muslim anesthetists 900 years before the advent of Davy and Wells. They utilized oral as well as inhalant anesthetics.


The concept of quarantine was first developed in 1403. In Venice, a law was passed preventing strangers from entering the city until a certain waiting period had passed. If by then no sign of illness could be found, they were allowed in. The concept of quarantine was first introduced in the 7th century A.D. by the prophet Muhammad (P.B.U.H.), who wisely warned against entering or leaving a region suffering from plague. As early as the 10th century, Muslim physicians innovated the use of isolation wards for individuals suffering with communicable diseases.


The scientific use of antiseptics in surgery was discovered by the British surgeon Joseph Lister in 1865. As early as the 10th century, Muslim physicians and surgeons were applying purified alcohol to wounds as an antiseptic agent. Surgeons in Islamic Spain utilized special methods for maintaining antisepsis prior to and during surgery. They also originated specific protocols for maintaining hygiene during the post-operative period. Their success rate was so high that dignitaries throughout Europe came to Cordova, Spain, to be treated at what was comparably the "Mayo Clinic" of the Middle Ages.


It is taught that in 1545 the scientific use of surgery was advanced by the French surgeon Ambroise Pare. Prior to him, surgeons attempted to stop bleeding through the gruesome procedure of searing the wound with boiling oil. Pare stopped the use of boiling oils and began ligating arteries. He is considered the "father of rational surgery." Pare was also one of the first Europeans to condemn such grotesque "surgical" procedures as trepanning (see reference #6, pg. 110). Islamic Spain's illustrious surgeon, az-Zahrawi (d. 1013), began ligating arteries with fine sutures over 500 years prior to Pare. He perfected the use of catgut, that is, suture made from animal intestines. Additionally, he instituted the use of cotton plus wax to plug bleeding wounds. The full details of his works were made available to Europeans through Latin translations. Despite this, barbers and herdsmen continued to be the primary practitioners of the "art" of surgery for nearly six centuries after az-Zahrawi's death. Pare himself was a barber, albeit more skilled and conscientious than the average one. Included in az-Zahrawi's legacy are dozens of books. His most famous work is a 30-volume treatise on medicine and surgery. His books contain sections on preventive medicine, nutrition, cosmetics, drug therapy, surgical technique, anesthesia, and pre- and post-operative care, as well as drawings of some 200 surgical devices, many of which he invented. The refined and scholarly az-Zahrawi must be regarded as the father and founder of rational surgery, not the uneducated Pare.


William Harvey, during the early 17th century, discovered that blood circulates. He was the first to correctly describe the function of the heart, arteries, and veins. Rome's Galen had presented erroneous ideas regarding the circulatory system, and Harvey was the first to determine that blood is pumped throughout the body via the action of the heart and the venous valves. Therefore, he is regarded as the founder of human physiology. In the 10th century, Islam's ar-Razi wrote an in-depth treatise on the venous system, accurately describing the function of the veins and their valves. Ibn an-Nafs and Ibn al-Quff (13th century) provided full documentation that the blood circulates and correctly described the physiology of the heart and the function of its valves 300 years before Harvey. William Harvey was a graduate of Italy's famous Padua University at a time when the majority of its curriculum was based upon Ibn Sina's and ar-Razi's textbooks.


The first pharmacopoeia (book of medicines) was published by a German scholar in 1542. According to World Book Encyclopedia, the science of pharmacology was begun in the 1900's as an offshoot of chemistry due to the analysis of crude plant materials. Chemists, after isolating the active ingredients from plants, realized their medicinal value. According to the eminent scholar of Arab history, Phillip Hitti, the Muslims, not the Greeks or Europeans, wrote the first "modern" pharmacopoeia. The science of pharmacology was originated by Muslim physicians during the 9th century. They developed it into a highly refined and exact science. Muslim chemists, pharmacists, and physicians produced thousands of drugs and/or crude herbal extracts one thousand years prior to the supposed birth of pharmacology. During the 14th century Ibn Baytar wrote a monumental pharmacopoeia listing some 1400 different drugs. Hundreds of other pharmacopoeias were published during the Islamic Era. It is likely that the German work is an offshoot of that by Ibn Baytar, which was widely circulated in Europe.


It is taught that the discovery of the scientific use of drugs in the treatment of specific diseases was made by Paracelsus, the Swiss-born physician, during the 16th century. He is also credited with being the first to use practical experience as a determining factor in the treatment of patients rather than relying exclusively on the works of the ancients. Ar-Razi, Ibn Sina, al-Kindi, Ibn Rushd, az-Zahrawi, Ibn Zuhr, Ibn Baytar, Ibn al-Jazzar, Ibn Juljul, Ibn al-Quff, Ibn an-Nafs, al-Biruni, Ibn Sahl and hundreds of other Muslim physicians mastered the science of drug therapy for the treatment of specific symptoms and diseases. In fact, this concept was entirely their invention. The word "drug" is derived from Arabic. Their use of practical experience and careful observation was extensive. Muslim physicians were the first to criticize ancient medical theories and practices. Ar-Razi devoted an entire book as a critique of Galen's anatomy. The works of Paracelsus are insignificant compared to the vast volumes of medical writings and original findings accomplished by the medical giants of Islam.


It is taught that the first sound approach to the treatment of disease was made by a German, Johann Weger, in the 1500's. Harvard's George Sarton says that modern medicine is entirely an Islamic development, emphasizing that Muslim physicians of the 9th through 12th centuries were precise, scientific, rational, and sound in their approach. Johann Weger was among the thousands of European physicians of the 15th through 17th centuries who were taught the medicine of ar-Razi and Ibn Sina. He contributed nothing original.


It is taught that medical treatment for the insane was modernized by Philippe Pinel when, in 1793, he operated France's first insane asylum. In fact, as early as the 11th century, Islamic hospitals maintained special wards for the insane. They treated patients kindly and presumed their disease was real at a time when the insane were routinely burned alive in Europe as witches and sorcerers. A curative approach was taken for mental illness and, for the first time in history, the mentally ill were treated with supportive care, drugs, and psychotherapy. Every major Islamic city maintained an insane asylum where patients were treated at no charge. Indeed, the Islamic system for the treatment of the insane excels even in comparison with the current model, as it was more humane and highly effective as well.


It is taught that kerosene was first produced by an Englishman, Abraham Gesner in 1853. He distilled it from asphalt. Muslim chemists produced kerosene as a distillate from petroleum products over 1,000 years prior to Gesner (see Encyclopaedia Britannica under the heading, Petroleum).

Friday, January 7, 2005

Virtual Realism

With current advances in graphics, whether sophisticated algorithms, superfast graphics processing units (GPUs) or the other advanced technologies that support them, I predict we will soon have what I call "Virtual Realism". The concept is that "movie" makers create graphical scenes the way game developers and animators do. The scenes themselves are constructed only of triangles/vertices, color codes, ray information and the like. This data is then streamed over the Internet: people who want to watch the movie just click a link, and the server starts streaming the information.

The received data is then decoded by a computer attached to the viewer's TV, which reconstructs the information into a live animation. Because graphics technology will be so advanced by then, the scenes will look genuinely realistic. One benefit of this mechanism is that viewers may be able to change the storyline, the actors and actresses, the voices and so on. It opens up a lot of possibilities, and it will certainly be more interactive and fun for us.

What about streaming bandwidth? Since the transmission involves only "codes", not real images of the scenes, far less information has to be sent, so the remaining bandwidth can carry other data (audio, subtitles, other information and so on). Quality? The quality will be high, perhaps even higher than today's HD, because the system regenerates the scenes from a noiseless description instead of displaying images captured from an original source.
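
A rough back-of-the-envelope check of that bandwidth argument is easy to do. The scene format below (a JSON list of triangles and color codes) is something I made up for illustration; a real system would use a much tighter binary encoding, which only strengthens the point:

    import json, random

    # A toy "scene": 5,000 triangles, each with three 3-D vertices and a color code.
    scene = [
        {
            "v": [[round(random.uniform(-1, 1), 4) for _ in range(3)] for _ in range(3)],
            "c": random.randint(0, 0xFFFFFF),
        }
        for _ in range(5000)
    ]
    scene_bytes = len(json.dumps(scene).encode())

    # One uncompressed 1920x1080 frame at 24 bits per pixel, for comparison.
    frame_bytes = 1920 * 1080 * 3

    print(f"scene description : {scene_bytes / 1e6:.2f} MB (sent once, reused for many frames)")
    print(f"one raw HD frame   : {frame_bytes / 1e6:.2f} MB (a new one needed every frame)")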

Wednesday, December 1, 2004

Will CISC Architecture Die?

As we know, Intel's x86 architecture is based on an old approach called CISC (Complex Instruction Set Computer). Newer microprocessors are based on RISC (Reduced Instruction Set Computer). The difference is that CISC uses a large set of instructions, many of them of different lengths, which makes pipelining, pre-fetching and other schemes for improving parallelism hard to achieve. In RISC, all instructions are the same length. Another difference is that RISC is based on a "register-register", or load-store, architecture: there is no accumulator; instead, all of the registers are general-purpose registers (GPRs).
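
To make the load-store idea concrete, here is a toy register-register machine in Python; it illustrates the concept only and is not any real instruction set. Every instruction has the same simple shape, only LOAD and STORE touch memory, and the arithmetic works purely on the general-purpose registers. A CISC machine could fold the whole thing into one memory-to-memory ADD, at the cost of a variable-length instruction that is much harder to pipeline:

    memory = {"a": 5, "b": 7, "c": 0}
    regs = [0] * 8                     # general-purpose registers; no accumulator

    program = [
        ("LOAD",  1, "a"),             # r1 <- mem[a]
        ("LOAD",  2, "b"),             # r2 <- mem[b]
        ("ADD",   3, 1, 2),            # r3 <- r1 + r2   (register-register only)
        ("STORE", 3, "c"),             # mem[c] <- r3
    ]

    for op, *args in program:          # uniform decode: easy to pipeline and prefetch
        if op == "LOAD":
            regs[args[0]] = memory[args[1]]
        elif op == "STORE":
            memory[args[1]] = regs[args[0]]
        elif op == "ADD":
            regs[args[0]] = regs[args[1]] + regs[args[2]]

    print(memory["c"])                 # 12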

The optimizations achieved by RISC machines are done through compiler assistance. Thus, in the desktop/server market, RISC computers use compilers to translate high-level code into RISC instructions, while the remaining CISC computers (the x86 line) use hardware to translate their instructions into internal micro-operations. One recent novel variation for the laptop market is the Transmeta Crusoe, which interprets 80x86 instructions and compiles them on the fly into internal instructions. Recent Intel Pentium 4 processors do something similar in hardware with the NetBurst microarchitecture, superscalar execution and so on.
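
The "compile on the fly" trick can be sketched as a translation cache. This shows only the general shape of dynamic binary translation, not Transmeta's actual code-morphing software: a block of source instructions is translated the first time it is executed, and the cached translation is reused ever after:

    translation_cache = {}

    def translate(source_ops):
        # Pretend translator: turn a list of (op, operand) pairs into a callable.
        def compiled(x):
            for op, n in source_ops:
                x = x + n if op == "add" else x * n
            return x
        return compiled

    def execute(block_id, source_ops, x):
        if block_id not in translation_cache:            # slow path: translate once
            translation_cache[block_id] = translate(source_ops)
        return translation_cache[block_id](x)            # fast path ever after

    print(execute("block0", [("add", 3), ("mul", 2)], 10))   # 26, translated here
    print(execute("block0", [("add", 3), ("mul", 2)], 4))    # 14, reuses the cached translation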

The oldest architecture in computer engineering is the stack architecture. In the early 1960s, a company called Burroughs delivered the B5000, which was based on a stack architecture. Stack architectures were nearly obsolete until Sun's Java Virtual Machine revived the approach. Some processors still use a stack architecture as well, for example the floating-point unit on x86 processors and some embedded microcontrollers.

In the early 1980s, the direction of computer architecture began to swing away from providing high-level hardware support for languages. Ditzel and Patterson analyzed the difficulties encountered by the high-level-language architectures and argued that the answer lay in simpler architectures. In another paper, these authors first discussed the idea of RISC and presented the argument for simpler architectures. Two VAX architects, Clark and Strecker, rebutted their proposal.

In 1980, Patterson and his colleagues at Berkeley began the project that was to give this architectural approach its name. They built two computers called RISC-I and RISC-II. Because the IBM project on RISC was not widely known or discussed, the role played by the Berkeley group in promoting the RISC approach was critical to the acceptance of the technology. They also built one of the first instruction caches to support a hybrid-format RISC: it held 16- and 32-bit instructions in memory but only 32-bit instructions in the cache. The Berkeley group went on to build RISC computers targeted toward Smalltalk and LISP.

In 1981, Hennessy and his colleagues at Stanford University published a description of the Stanford MIPS computer. Efficient pipelining and compiler-assisted scheduling of the pipeline were both important aspects of the original MIPS design. MIPS stood for "Microprocessor without Interlocked Pipeline Stages", reflecting the lack of hardware to stall the pipeline, as the compiler would handle dependencies.
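
A cartoon of that compiler-assisted scheduling (my own toy, not the Stanford MIPS compiler): if the next instruction uses a register that a load has just written, real hardware would have to stall, so the scheduler moves an independent instruction into the gap instead:

    def schedule(instrs):
        # Fill the slot after a load with a later instruction that does not
        # depend on the loaded register, so the pipeline never has to stall.
        out = list(instrs)
        i = 0
        while i < len(out) - 1:
            op, dest, *srcs = out[i]
            if op == "lw" and dest in out[i + 1][2:]:      # next instruction uses the load result
                for j in range(i + 2, len(out)):
                    if dest not in out[j][1:]:             # independent instruction found
                        out.insert(i + 1, out.pop(j))
                        break
            i += 1
        return out

    before = [
        ("lw",  "r1", "0(r4)"),        # load
        ("add", "r2", "r1", "r5"),     # uses r1 right away: a load-use hazard
        ("sub", "r6", "r7", "r8"),     # independent work
    ]
    for ins in schedule(before):
        print(ins)                     # the sub now sits between the lw and the add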

In 1987, a new company named Sun Microsystems started selling computers based on the SPARC architecture, a derivative of the Berkeley RISC-II processor. In the 1990s, Apple, IBM, and Motorola co-developed a new RISC processor called PowerPC, which is now used in every computer Apple makes; the latest PowerPC is the G5, a dual-core RISC processor. As we can see, Apple's Mac computers are basically speedier than Intel's x86-based machines, but because Intel is strong in its marketing and always talks about gigahertz clock performance, many people still think that a higher clock speed always corresponds to faster processing, which is not always the case. Graphics card producers such as NVIDIA and ATI also base their graphics coprocessors on RISC architectures, with even more advanced technologies (just for your information, NVIDIA's GeForce 6 GPUs have more transistors than the latest Pentium 4 Extreme Edition).

Why, then, does the old technology (CISC in x86) still survive? The answer is machine-level compatibility. With millions of x86 processors installed in PCs worldwide, Intel of course wants to keep it that way. Its joint project with HP on a RISC-inspired architecture (IA-64, one of whose products is named Itanium) could not repeat the success of x86. But although x86 processors are CISC from the code's perspective, they are now really a blend of RISC and CISC, because they borrow technologies from RISC such as pipelining, pre-fetching, superscalar execution, branch prediction and data parallelism (which Intel popularized under the term SIMD, as in the MMX, SSE, SSE2 and SSE3 instruction sets).
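
That SIMD idea, one operation applied across a whole vector of data elements at once, is easy to see from Python with NumPy, whose array operations are typically backed by exactly these SSE-style vector instructions on x86 (the array size here is arbitrary):

    import numpy as np

    a = np.arange(100_000, dtype=np.float32)
    b = np.arange(100_000, dtype=np.float32)

    c_vector = a + b                                        # one data-parallel add over the whole array
    c_scalar = np.fromiter((x + y for x, y in zip(a, b)),   # the element-at-a-time equivalent
                           dtype=np.float32, count=len(a))

    assert np.allclose(c_vector, c_scalar)
    print(c_vector[:5])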