Friday, January 7, 2005

Virtual Realism

With current advances in graphics, from sophisticated algorithms to superfast graphics processing units (GPUs) and the other technologies that support them, I predict we will soon have what I call "Virtual Realism". The concept is that "movie" makers create graphical scenes the way game developers and animators do. The scenes themselves are constructed only of triangles/vertices, color codes, ray information, and so on. This data is then streamed over the Internet: people who want to watch the movie just click a link, and the server starts streaming the information.

The received data is then decoded by a computer attached to the viewer's TV, which reconstructs the information into full animation. As graphics technology becomes so advanced, the scenes will look truly realistic. The benefit of this mechanism is that people could change the scenario/story, the actors/actresses, the voices, and so on. There are a lot of possibilities, and it will surely be more interactive and fun for us.

What about streaming bandwidth? Since the transmission involves only 'codes' rather than real images of the scenes, far less information is transmitted, so the remaining bandwidth can carry other data (audio, subtitles, and so on). Quality? The quality will of course be high, perhaps even higher than current HD, because the system regenerates the scenes (noiselessly) instead of displaying images captured from an original source.
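To make the idea concrete, here is a minimal sketch in Python (the packet layout is entirely made up for illustration) comparing how few bytes a scene description takes against a single raw HD frame:

    import struct

    # Hypothetical scene packet: vertex positions, triangle indices,
    # and one RGB color per triangle. Layout is illustrative only.
    vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    triangles = [(0, 1, 2)]
    colors = [(255, 64, 32)]

    packet = struct.pack("<I", len(vertices))
    for v in vertices:
        packet += struct.pack("<3f", *v)           # 12 bytes per vertex
    packet += struct.pack("<I", len(triangles))
    for t, c in zip(triangles, colors):
        packet += struct.pack("<3I3B", *t, *c)     # 15 bytes per triangle

    print(len(packet))          # a few dozen bytes for this scene
    print(1920 * 1080 * 3)      # ~6.2 million bytes for one raw 1080p frame

Even a scene with tens of thousands of triangles would be smaller than one uncompressed frame, which is why so much bandwidth would be left over for audio, subtitles, and the rest.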

Wednesday, December 1, 2004

CISC Architecture will die?

As we know, Intel's x86 architecture is based on an old approach called CISC (Complex Instruction Set Computer). Newer microprocessors are based on RISC (Reduced Instruction Set Computer). The difference is that CISC uses a large set of instructions, many of them with different lengths, which makes pipelining, pre-fetching, and other schemes for improving parallelism hard to achieve. In RISC, all instructions are the same length. Another difference is that RISC uses a "register-register" or load-store architecture: there is no accumulator, and all of the registers are general-purpose registers (GPRs).
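To make the load-store idea concrete, here is a toy sketch in Python (the opcodes and encoding are invented, not any real ISA): arithmetic touches registers only, and memory is reached solely through explicit loads and stores.

    # Toy load-store (register-register) machine: fixed-format
    # instructions, eight GPRs, no accumulator. Memory is accessed
    # only via LOAD and STORE.
    memory = {0x10: 5, 0x14: 7, 0x18: 0}
    regs = [0] * 8

    program = [
        ("LOAD",  1, 0x10),   # r1 <- mem[0x10]
        ("LOAD",  2, 0x14),   # r2 <- mem[0x14]
        ("ADD",   3, 1, 2),   # r3 <- r1 + r2 (registers only)
        ("STORE", 3, 0x18),   # mem[0x18] <- r3
    ]

    for instr in program:
        if instr[0] == "LOAD":
            _, rd, addr = instr
            regs[rd] = memory[addr]
        elif instr[0] == "STORE":
            _, rs, addr = instr
            memory[addr] = regs[rs]
        elif instr[0] == "ADD":
            _, rd, ra, rb = instr
            regs[rd] = regs[ra] + regs[rb]

    print(memory[0x18])  # 12

A CISC machine could fold the two loads, the add, and the store into a single variable-length memory-to-memory instruction, which is precisely what makes its pipeline stages so uneven.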

Optimizations on RISC machines are achieved with compiler assistance. Thus, in the desktop/server market, RISC computers rely on compilers to translate high-level code into RISC instructions, while the remaining CISC computers use hardware to translate their instructions into internal microcode. One recent novel variation for the laptop market is the Transmeta Crusoe, which interprets 80x86 instructions and compiles them on the fly into internal instructions. Recent Intel Pentium 4 processors do something similar with their NetBurst microarchitecture, superscalar execution, and so on.

The oldest architecture in computer engineering is the stack architecture. In the early 1960s, a company called Burroughs delivered the B5000, which was based on a stack architecture. Stack architectures were almost obsolete until Sun's Java Virtual Machine revived the approach. Some processors also still use a stack architecture, for example the floating-point unit on x86 processors and some embedded microcontrollers.
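For contrast with the load-store sketch above, here is the same kind of toy in stack style (the JVM evaluates expressions this way, though these opcodes are made up): operands live on an implicit stack rather than in named registers.

    # Toy stack machine evaluating (5 + 7) * 2.
    program = [("PUSH", 5), ("PUSH", 7), ("ADD",), ("PUSH", 2), ("MUL",)]

    stack = []
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr[0] == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)

    print(stack.pop())  # 24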

In the early 1980s, the direction of computer architecture began to swing away from providing high-level hardware support for languages. Ditzel and Patterson analyzed the difficulties encountered by the high-level language architectures and argued that the answer lay in simpler architectures. In another paper, these authors first discussed the idea of RISC and presented the argument for simpler architectures. Two VAX architects, Clark and Strecker, rebutted their proposal.

In 1980, Patterson and his colleagues at Berkeley began the project that was to give this architectural approach its name. They built two computers called RISC-I and RISC-II. Because the IBM project on RISC was not widely known or discussed, the role played by the Berkeley group in promoting the RISC approach was critical to the acceptance of the technology. They also built one of the first instruction caches to support a hybrid-format RISC: it supported 16- and 32-bit instructions in memory but only 32-bit instructions in the cache. The Berkeley group went on to build RISC computers targeted toward Smalltalk and LISP.

In 1981, Hennessy and his colleagues at Stanford University published a description of the Stanford MIPS computer. Efficient pipelining and compiler-assisted scheduling of the pipeline were both important aspects of the original MIPS design. MIPS stood for "Microprocessor without Interlocked Pipeline Stages", reflecting the lack of hardware to stall the pipeline, as the compiler would handle dependencies.
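To see what compiler-assisted scheduling buys you, here is a toy model in Python (assuming a one-cycle load delay and an invented instruction format, not the real MIPS pipeline): a value loaded from memory cannot be used in the very next cycle, so the compiler fills the gap with an independent instruction.

    # Count load-use stalls, assuming a one-cycle load delay.
    # Instruction format: (op, dest, src...), invented for illustration.
    def load_use_stalls(code):
        stalls = 0
        for prev, curr in zip(code, code[1:]):
            if prev[0] == "LOAD" and prev[1] in curr[2:]:
                stalls += 1  # curr needs prev's result one cycle too soon
        return stalls

    naive = [
        ("LOAD", "r1", "a"),
        ("ADD",  "r2", "r1", "r0"),   # uses r1 right after its load
        ("LOAD", "r3", "b"),
        ("SUB",  "r4", "r3", "r0"),   # same problem again
    ]

    # The compiler moves the second, independent load into the delay slot.
    scheduled = [
        ("LOAD", "r1", "a"),
        ("LOAD", "r3", "b"),
        ("ADD",  "r2", "r1", "r0"),
        ("SUB",  "r4", "r3", "r0"),
    ]

    print(load_use_stalls(naive))      # 2
    print(load_use_stalls(scheduled))  # 0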

In 1987, Sun Microsystems started selling computers based on the SPARC architecture, a derivative of the Berkeley RISC-II processor. In the early 1990s, Apple, IBM, and Motorola co-developed a new RISC processor family called PowerPC, which is now used in every computer Apple makes; the latest PowerPC is the G5, a 64-bit RISC processor. Apple's Macs are basically speedier than Intel's x86 machines, but because Intel is strong at marketing and always talks about "gigahertz" clock performance, many people still think that a higher clock speed always corresponds to faster processing, which is not the case. Graphics card producers such as NVidia and ATI also base their graphics coprocessors on RISC-style architectures, with even more advanced technologies (just for your info, NVidia's GeForce 6 GPUs have more transistors than the latest Pentium 4 Extreme Edition).

Why, then, does the old technology (CISC in x86) still survive? The answer is machine-level compatibility. With millions of x86 processors installed in PCs worldwide, Intel of course wants to keep it that way. Even Intel's RISC-based project with HP (the IA-64 architecture, one product of which is named Itanium) could not repeat the success of x86. But although x86 processors are CISC from the code perspective, internally they are now more RISC than CISC, since they borrow techniques from RISC such as pipelining, pre-fetching, superscalar execution, branch prediction, and data parallelism (which Intel popularized under the term SIMD, as in the MMX, SSE, SSE2, and SSE3 instruction sets).
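As a rough illustration of the SIMD idea (using NumPy as a stand-in for what MMX/SSE do in hardware): one operation is applied to a whole vector of packed data at once, instead of looping element by element.

    import numpy as np

    # Four packed 32-bit floats = 128 bits, the width of an SSE register.
    a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
    b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)

    # Scalar style: one add per loop iteration.
    scalar = [float(x) + float(y) for x, y in zip(a, b)]

    # SIMD style: one vectorized operation over all four elements,
    # conceptually a single SSE instruction.
    vectorized = a + b

    print(scalar)      # [11.0, 22.0, 33.0, 44.0]
    print(vectorized)  # [11. 22. 33. 44.]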

Monday, November 22, 2004

Zuse is the father of the Digital Computer?

I just read the biography of Konrad Zuse. Interesting and very encouraging. Apparently, he deserves to be called the father of the digital computing machine. His inventions, the Z1 through Z4, pioneered digital rather than analog computing. His Z3 was in fact the first operational program-controlled calculating machine, using binary floating-point numbers and Boolean circuits. In 1936, Zuse filed a patent application on some of its parts, which proves that he had developed various major concepts of the digital computer long before men like von Neumann or Burks presented their ideas.

He is also the father of the programming language. In 1945/1946 he finished his "Plankalkül", the world's first programming language, thus establishing his name as a software pioneer. It was not presented to the public until 1972.

It is hard to trace who is truly the father of the computer or the computing machine; no single person can be credited with the work. From Pascal, Babbage, and Turing to Atanasoff, Mauchly and Eckert, and von Neumann, they all contributed to making the computer what we see and use nowadays.

His name also reminds me of a German Linux distro, SuSE. I believe it is named after him or borrows his name.

Thursday, November 18, 2004

Java is going open source?

There is recent news that Sun is going to make the Java Platform, Standard Edition environment open source, at least for non-profit and academic organizations. This is a breakthrough for the Java community and seems to be another "attack" on Microsoft, which still keeps Windows closed to most people.

Another of Sun's plans is to open up Solaris 10. The license is not a GNU license, but it seems to be a similar one. Will it pull people away from Linux? We still need to see. But so far, Sun's GUI is far behind Windows, and even Linux, in terms of quality. The new operating system will run on Opteron, Xeon, and UltraSPARC.

Tuesday, November 16, 2004

HD (high-definition) video is stalled again

HD (high-definition) video is stalled again. That refrain is familiar to those of us who have waited the better part of a decade to get our HDTV. But this time it is high-definition DVD that is stuck in a standards conundrum. The situation perfectly illustrates the complexities involved in setting standards for state-of-the-art products, with a global plot twist thrown in for good measure.

The DVD industry's track record when it comes to standards is far from perfect. Remember when Sony, Philips, and others went against the DVD Forum to establish the DVD+RW format after the Forum shunned the +RW technology in favor of DVD-RAM and DVD-RW? That fight delayed the widespread adoption of DVD recorders by three years.

Now, the industry must address the move toward HDTV-level 1080i (1080-line, interlaced) resolution for DVD content. Consumers who have spent big money on HDTV monitors are waiting.

A product such as DVD involves many standards issues, including factors such as power and interfaces. But two major issues demand the most attention: the recording format and the video-encoding format. Initially, industry players both inside and outside the DVD Forum considered two approaches. The first involved staying with the existing 9-Gbyte format and using more aggressive encoding to pack a feature-length, high-definition movie onto one disc. The DVD Forum, working on what it terms HD-DVD, favored this conservative approach because it would maintain full compatibility with existing discs. Sony, Matsushita, and others favored a move to "Blu-ray" technology. By changing to a blue-wavelength laser, Blu-ray would allow a disc to store 25 Gbytes. However, a player would need two lasers, red and blue, to play both old and new discs.

Now, Toshiba and NEC have produced a compromise, which the DVD Forum has endorsed. The duo has developed a blue laser that can provide higher capacity and also read today’s discs. The compromise reduces capacity to 20 Gbytes, 5 Gbytes fewer than Blu-ray.
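Some back-of-the-envelope arithmetic (my own figures, assuming a two-hour feature and the whole disc given over to video) shows why those capacity numbers matter:

    # Average video bitrate available for a 2-hour movie at each capacity.
    # Real discs also reserve space for audio and extras.
    capacities_gb = {"current DVD": 9, "HD-DVD compromise": 20, "Blu-ray": 25}
    seconds = 2 * 60 * 60

    for name, gb in capacities_gb.items():
        mbps = gb * 8 * 1000 / seconds  # gigabytes -> megabits over 2 hours
        print(f"{name}: ~{mbps:.1f} Mbit/s")

    # current DVD: ~10.0 Mbit/s
    # HD-DVD compromise: ~22.2 Mbit/s
    # Blu-ray: ~27.8 Mbit/s

At roughly 10 Mbit/s, high definition on a 9-Gbyte disc clearly depends on a much more efficient codec than MPEG-2, which is exactly what the encoding battle below is about.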

Of course, the Blu-ray group wants nothing to do with the compromise. This spring, the group formed its own industry body, the BDA (Blu-ray Disc Association). Hey, if you can’t get your way in this industry, just create your own standards body. The game is clearly about getting your own technology embedded into the next standard, so that you can collect royalties on top of the profit that you make selling your own products.

Meanwhile, a battle raged for a while on the encoding side. The BDA initially appeared to be sticking with the MPEG-2 encoding that existing DVDs use. On the DVD Forum side, Microsoft entered the battle, trying to get its Windows Media technology into the next standard. As of press time, a rare outbreak of logical thinking seems to have taken place: Both the BDA and the DVD Forum have announced plans to support MPEG-2, H.264, and Microsoft’s Windows Media 9.

So, for now, we wait. Hollywood hasn’t weighed in with the standard that it prefers. Meanwhile, Sony has proclaimed that its Playstation 3 will use BDA technology. The BDA is also aggressively pursuing datacentric applications in addition to next-generation DVD video. And manufacturers will soon ship expensive, rewritable BDA products.

Enter China. Chinese companies and the Chinese government already had a major dislike for the DVD technology that the rest of the world uses. Specifically, they didn't like paying royalties to the companies that had key technologies embedded in the DVD standards. And you can bet that Chinese vendors didn't want to wait for the high-definition conflict in the rest of the world to play out.

So a standards organization of the Chinese government, SAC (Standardization Administration of China), rolled out a new spec: EVD (Enhanced Video Disc). The spec is complete, and vendors are shipping early products. North American vendors, such as LSI Logic, are offering EVD chip sets. High-definition Chinese content is trickling into the Chinese market, with some Hollywood content expected next year.

There’s nothing like governments, multiple international standards bodies, and the collaboration of private industry associations to stave off adoption of a compelling new technology.