Due in large part to the significant advances in PC hardware made over the last two years, PC-based VR is approaching reality. While the cost of a basic desktop VR system has dropped by only a few thousand dollars in that time, the functionality has improved dramatically, both in graphics processing power and in VR hardware such as head-mounted displays (HMDs). The availability of powerful PC engines based on such computing workhorses as Intel's Pentium IV Xeon and IBM's G4/G5 processors, together with the emergence of reasonably priced Direct3D- and OpenGL-based 3D accelerator cards, allows high-end PCs to process and display 3D simulations in real time. A standard Celeron/Duron processor with as little as 64 MB of RAM can provide sufficient processing power for a bare-bones VR simulation; a fast Pentium IV/Athlon XP-based PC (2 GHz or faster) with 256 MB of RAM can transport users to a convincing virtual environment; and a dual Pentium IV configuration with 512 MB of RAM, OpenGL acceleration, and 128 MB of VRAM running Windows XP Pro rivals the horsepower of a graphics workstation.
As reported recently in Virtual Reality Special Report magazine, "much of the graphics display quality of the Silicon Graphics (SGI) RealityEngine, which has been the Rolls-Royce of the VR world with a six-digit price tag, can now be achieved with a new Intergraph (Huntsville, Ala.) graphics accelerator board plugged into a Pentium PC". The Intergraph board carries a $5,000 price tag and renders on the order of 100,000 polygons per second. Two boards can be plugged in to render stereo images. A demo that runs at 15-20 Hz on a Pentium IV runs at 30 Hz on an SGI Onyx.
And the future is bright, too. In the following paragraphs we give an updated look at the likely direction of the PC-based VR market, as it has developed in recent months around the world.
1. The future is now?
2. 3D Graphics Accelerators
5. 3D APIs
Intergraph's accelerator is the first in a series that should hit the market soon. According to many sources, some of the biggest computer companies are getting ready to launch mid-range ($5,000-$25,000) graphics accelerators for PCs that will compete aggressively with the performance of SGI machines. Several hardware vendors will announce new products during 2002 that will bring the price down by 80%. The notion of a special-purpose board is likely to go away. What will result is a single board, and then a chip, that will do 24-bit/pixel display graphics, MPEG-4 video, and VR and 3D graphics rendering. The Gartner Group predicts that 3D desktop visualisation will be the first VR technology to become widespread, as soon as 2003. They further predict that by 2004, high-end desktop PCs will be able to generate virtual worlds as well as those generated by today's graphics workstations. According to Sense8 Corporation, "we'll have to get RealityEngine performance for $5,000 for the technology to become ubiquitous. By the year 2005 it will be possible to get SGI RealityEngine3 graphics for less than $2,500 on a PC. It may come sooner. We wouldn't be surprised if it were here in two to three years."
Immersion, too, is getting more affordable. Olympus (Japan), for example, now has an HMD that costs less than $500 and has head tracking built in. Two years ago the same quality cost on the order of 10 times this amount. Most people doing practical applications are using monitors or projection systems, because HMDs still have a way to go before they offer good quality at an affordable price. VGA quality is now about $2,000. It was $70,000 in 1994, so it's a big breakthrough. And that price will probably lose a digit or two during the next five years. Input devices for desktop VR today are largely mouse and joystick based. For developers focused on making desktop VR practical, sticking with these devices is sound advice: not only does it keep costs down, but it minimises the foreignness of VR applications and avoids the ergonomic issues of some of the fancier I/O devices such as 3D mice and gloves. The day will come, however, when datagloves and other innovative devices move out of the realm of high-end entertainment and research VR systems.
Software, too, has greatly improved during the last two years, and programs are now available ranging from high-end authoring toolkits that require significant programming experience to "hobbyist" packages for which familiarity with the computer's operating system is the only prerequisite. Despite the differences in the types of virtual worlds these products can deliver, the various tools are based on the same VR-development model: they allow users to create or import 3D objects, to apply behavioural attributes such as weight and gravity to the objects, and to program the objects to respond to the user via visual and/or audio events. Ranging in price from $800 to $4,000, the toolkits are the most functional of the available VR software options. While some of them rely exclusively on C or C++ programming to build a virtual world (they consist of a library of C functions, though such unfriendly interfaces are becoming rare), others offer simpler point-and-click operations to develop a simulation. Using toolkits, it is also possible to bring in files from a wide array of software packages, such as Wavefront, 3D Studio, EDS Unigraphics, Pro Engineer, and Intergraph EMS, and they can also import VRML and Multigen databases as well as animation scripts and sounds. Most of the toolkits also let you edit the models to a limited degree once they've been brought in.
In addition to supporting various software file formats, some of the toolkits offer graphics-creation capabilities and extensive object libraries. The toolkits also offer mechanisms for optimising the polygon counts of the objects that will populate the virtual worlds, which is necessary for achieving real-time performance. This is particularly important when dealing with imported models, which may be too polygonally dense to render in real time. In the same vein, other products have an "object prebuild" function that scans all of the information in a file, then meshes, sorts, and does everything it can to make the object draw faster. Because this is not a real-time operation, you don't do it during the simulation; you do it in advance, when you're ready to commit your geometry to a real-time simulator.
In terms of "hands-on" optimisation, toolkits generally support such functions as level of detail and visual culling. Level of detail refers to the process of creating models of various degrees of detail, then programming the simulation to switch among different versions of the same model depending on its visual proximity to the user. If you are doing a driving simulation and a car is coming at you from far away, it is hard to see any detail; as it gets closer, you see more. So you might model your car, then tell the tool to construct three or four levels of detail, which will switch depending on the distance.
In conclusion, PC-based VR is becoming real. In an interview recently published in Computer Graphics World magazine, Dick O'Donnell, vice president of worldwide marketing for Superscape, declared: "The bottom line is that VR suddenly has a lot more credibility today, and the real driving force is that people are able to look at it on a standard PC and get the kind of performance and resolution that you used to get only from a $100,000 SGI box. Granted, it's not Toy Story in terms of image crispness, but you can't navigate through Toy Story in real time."
2. 3D Graphics Accelerators
The graphics card landscape is evolving quickly, and the cards available today take good advantage of the advancements being made. Three advancements are particularly interesting for VR users: the inclusion of a VGA-to-TV converter and tuner, the Accelerated Graphics Port (AGP), and the new, faster 3D chips (GeForce 4, Radeon 8500).
These improvements will soon show up in applications. Look for more advanced uses of 3-D in presentation packages, where its visual impact will make for a more powerful and engaging presentation. Just as interesting are data-visualization and data-mining tools that may help to make business professionals more productive. Three-dimensional spreadsheets could turn endless columns of numbers into 3-D maps of the country, with sales figures creating mountains or valleys as appropriate, once the numerical data is converted to 3-D chart form. Correlations within small data sets, and their relation to the big picture, may become more apparent and less likely to be overlooked.
For training and education, workaday PCs will be able to render machine parts and other complex 3-D objects that formerly could only be drawn on expensive graphics workstations. Students could virtually explore faraway places, or walk around a lifelike model of a tyrannosaur in a museum. Or potential home buyers could evaluate a real-estate layout on-screen in a convincing 3-D world.
But the most prominent delivery vehicle for 3-D content could be the Web. VRML (Virtual Reality Modelling Language) 2.0, the programming standard for 3-D on the Web, enables users to log on to 3-D sites and interact within a 3-D world. Log onto a site, click on an object, and it comes to life; turn it, move it, do what you can't do with a flat image. 3-D interaction could be helpful for everything from shopping to learning about physics on the Web.
There are VRML authoring tools available from several different vendors, and both Microsoft Internet Explorer and Netscape Communicator include VRML 2.0 browsers. While Direct3D support in Windows will enable all users to reap these 3-D benefits via software emulation, users with graphics accelerators able to speed the process along will have a better experience.
The Virtual Reality Modelling Language (VRML) is a language for describing multi-participant interactive simulations - virtual worlds networked via the global Internet and hyperlinked with the World Wide Web. All aspects of virtual world display, interaction and internetworking can be specified using VRML.
As is often true with emerging graphics technologies, on-line virtual reality depends largely on breaking bottlenecks and establishing standards. On both those fronts, significant developments are taking place. In particular, the VRML authoring standard is gaining enough momentum that some predict it could be to upcoming on-line content what the HTML authoring standard has been to the World Wide Web. VRML is file-based: 3-D polygon-based scenes are transferred to PCs from originating Web sites, after which the speed the scenes run at is determined by the local PC's graphics subsystem. VRML authoring tools are springing up everywhere, including free tools made available by virtual reality pioneer Mark Pesce.
The VRML specification, now in version 2.0, has been extended over version 1.0 to promote interactivity with objects inside virtual worlds. For example, while version 1.0 let you walk up and down an aisle in a virtual store, version 2.0 lets you take down an object from a shelf and rotate it. In fact, objects in VRML 2.0 worlds can affect and be affected by other objects or external applications. To understand how this works, it is important to understand the event model for VRML 2.0. One way to think about the VRML 2.0 event model is as a series of components that are wired together to create a virtual machine. Each component is its own "black box": it is not necessary to know what's going on inside each box, but each one presents a set of interfaces--points where it can be wired into other boxes. The components themselves are either built in or can be defined by the world's author. To facilitate the development of scripts and interfaces to VRML worlds, an Application Program Interface (API) has been defined for the VRML browser that provides a standard interface between the browser and external scripts.
Upcoming improvements in graphics subsystem performance will be key ingredients in making VRML-based on-line virtual reality viable. The next generation of graphics boards - at $350 - will essentially double the raw graphics performance of a 166-MHz Pentium CPU on its own, and we're going to see huge increases over that next year. What floating-point and VGA were to graphics advancements in the past, 3-D acceleration is now. New systems available next year will have more than enough graphics capability to do fluid VR applications on-line. There should be especially fierce price/performance competition among Direct 3D and OpenGL accelerator cards, which will dramatically accelerate VRML.
But for PC users, especially those with only a low-speed Pentium, VRML used to demand too much processing power and provide too little interaction to be anything more than a taste of far-off applications. There were many reasons for VRML's sluggishness but, as seen above, recent technical advances and collaborations between various companies are working hard to make a real impact with VRML on standard PCs before late 1998.
As well as the variety of open-standard VRML authoring tools now freely available on the Web, hardware changes in PCs have accelerated development. Leading graphics cards, such as the ones reviewed below, already double the raw graphics performance of a 166-MHz Pentium PC, so the graphics subsystems that run the polygon-based 3D scenes making up VRML environments are improving drastically, making the 3D worlds more fluid and lifelike.
Although this means that PC users will be able to navigate through 3D worlds more easily, there are also changes to the VRML standard itself which improve performance. VRML version 2.0 introduces interactivity inside virtual worlds. As well as viewing a virtual environment, VRML 2.0 now lets users choose a 3D object from within the environment, use that 3D object as an application subset, such as a WWW link or a word-processor document, and even share the document with another user in the 3D environment.
This ability to combine extensive 3D worlds with 2D WWW links and standard text documents within an environment that can be occupied by thousands of users regardless of location is the commercial driving force behind VRML 2.0. Games developers and online shopping providers have been quick to see the potential of hosting thousands of users within a controlled 3D environment. To bring this closer to reality, developers are integrating Java along with HTML and VRML to create totally integrated 3D environments.
Java is a platform-independent language that since 1996 has become an integral part of Web browsers, including Netscape Navigator and Microsoft Internet Explorer. It allows video, audio, and graphics "applets", or small sections of a program such as a video clip, to be played directly from the host server, removing the need for the PC to store and process the code.
The Java programming language and environment is designed to solve a number of problems in modern programming practice. Java started as a part of a larger project to develop advanced software for consumer electronics. The result was a simple, object-oriented, network-savvy, interpreted, architecture neutral, portable, high-performance, multithreaded, dynamic language.
The developers wanted to build a system that could be programmed easily without a lot of training and which leveraged today's standard practice. Most programmers working these days use C, and most programmers doing object-oriented programming use C++. So even though they found C++ unsuitable, they designed Java to be as close to C++ as possible in order to make the system more comprehensible.
Another aspect of being simple is being small. One of the goals of Java was to enable the construction of software that can run stand-alone on small machines. The Java interpreter and standard libraries have a small footprint. A small size is important for use in embedded systems, and it also allows Java to be downloaded easily over the net.
Java is also object-oriented. Object-oriented design is very powerful because it facilitates the clean definition of interfaces and makes it possible to provide reusable routines. Simply stated, object-oriented design is a technique that focuses design on the data (the objects) and on the interfaces to it. To make an analogy with carpentry, an "object-oriented" carpenter would be mostly concerned with the chair he was building, and secondarily with the tools used to make it; a "non-object-oriented" carpenter would think primarily of his tools. Object-oriented design is also the mechanism for defining how modules "plug and play". The object-oriented facilities of Java are essentially those of C++, with extensions from Objective-C for more dynamic method resolution.
Java has an extensive library of routines for coping easily with TCP/IP protocols like HTTP and FTP. This makes creating network connections much easier than in C or C++. Java applications can open and access objects across the net via URLs with the same ease that programmers are used to when accessing a local file system.
Java is intended for use in networked/distributed environments. Toward that end, a lot of emphasis has been placed on security. Java enables the construction of virus-free, tamper-free systems. The authentication techniques are based on public-key encryption. There is a strong interplay between "robust" and "secure." For example, the changes to the semantics of pointers make it impossible for applications to forge access to data structures or to access private data in objects that they do not have access to. This closes the door on most activities of viruses.
Java was designed to support applications on networks. In general, networks are composed of a variety of systems with a variety of CPU and operating system architectures. To enable a Java application to execute anywhere on the network, the compiler generates an architecture-neutral object file format - the compiled code is executable on many processors, given the presence of the Java runtime system. This is useful not only for networks but also for single system software distribution. In the present personal computer market, application writers have to produce versions of their application that are compatible with the IBM PC and with the Apple Macintosh. With the PC market (through Windows/NT) diversifying into many CPU architectures, and Apple moving off the 680x0 toward the PowerPC, production of software that runs on all platforms becomes nearly impossible. With Java, the same version of the application runs on all platforms.
Multithreading also brings better interactive responsiveness and real-time behaviour. This is limited, however, by the underlying platform: stand-alone Java runtime environments have good real-time behaviour, while running on top of other systems such as UNIX, Windows, the Macintosh, or Windows NT limits the real-time responsiveness to that of the underlying system. As we have seen, the Java language is a powerful addition to the tools programmers have at their disposal because it is object-oriented and has automatic garbage collection. In addition, because compiled Java code is architecture-neutral, Java applications are ideal for a diverse environment like the Internet.
APIs (application programming interfaces), such as those within Microsoft's DirectX Foundation and proprietary schemes from graphics-chip makers, let developers get directly to the display hardware to increase speed. For their part, the graphics chips and APIs support a wide range of special effects that developers can call on: fog at varying densities; transparency, to make glass objects or water more realistic; specular highlighting, which gives the illusion of light reflected off a surface; lights that resemble spotlights or floodlights that originate from a source and can move with an object; and many more.
While gamers have been reaping the benefits of all this technology, 3-D applications have taken a while to filter through to mainstream business PC users. We're still waiting for the killer application that will make 3-D support a must on every business and home PC. But as the installed base of 3-D-capable PCs reaches critical mass, software developers will start to fill that void.
Pioneers in 3-D productivity applications will likely be software vendors clamoring for recognition among their competitors. The DirectX Media Layer component of Microsoft's DirectX 5a API will make this relatively easy to do. One of its services, Direct3D retained mode, will enable programmers not familiar with 3-D coding to incorporate 3-D objects that move within a 3-D world space, or sit atop a regular 2-D interface. An object like this may be useful in a floating palette of tools, a tutorial, or a wizard that extends the application. There is no standard 3-D engine in the PC market, primarily because different types of APIs are needed at different levels of development and for different applications. At the high end, OpenGL has become the most accepted standard. OpenGL was originally developed by Silicon Graphics for workstations, but has become very popular on the Windows NT platform. OpenGL is a complex and robust API that is intended for precise applications like scientific visualization and CAD/CAM; however, it is overkill, to say the least, for games and other casual 3-D applications.
Several other individual APIs have sprung up for Windows and DOS: Apple QuickDraw 3D, Argonaut BRender, Criterion RenderWare, RenderMorphics RealityLab, and others. Sensing discontinuity, Intel joined the game with 3DR, an API that was supposed to provide a common framework for application developers and board makers, not to mention a wider market for the Pentium. The 3DR API was beginning to catch on when Microsoft jumped into the ring and convinced Intel to go along with its plans for standard APIs for Windows 95. On Windows 3.x, Microsoft had previously provided 3-D DDI, a low-level device driver interface for chip makers, but had not delivered a high-level API for application development. Microsoft has a strategy for 3-D but has been slow to bring it to market. First, it bought RenderMorphics to acquire the RealityLab product and announced plans to integrate it within the Windows 95 graphics architecture as its standard high-level API. Microsoft also announced Direct3D, an API layer below RealityLab that will provide the glue between existing APIs, such as OpenGL, and hardware. Direct3D will join the other Windows 95 APIs, such as DirectPlay and DirectSound, as part of a universal strategy to spur game development. Vendors of 3-D products are indeed optimistic about Microsoft's strategy, although perhaps they're also frustrated. Microsoft has recently released its second beta of Direct3D to developers, and the final version may be released very soon. However, hardware and software vendors alike have been waiting anxiously for Direct3D and RealityLab to finally gel. In the meantime, chip vendors have brought even more proprietary APIs to market as short-term solutions. This delay in software development has given the chip and board makers time to catch up, and there will be no shortage of acceleration hardware when the 3-D software titles start coming to market in force.
Below you will find a brief description of the most promising APIs: OpenGL, QuickDraw 3D, and Direct3D.
OpenGL is the only graphics engine we looked at that extends beyond the PC and Mac to many flavours of UNIX. The most interesting feature of OpenGL is that it uses a client/server mechanism to handle graphics processing. A graphics client (e.g., a 3-D application) uses the OpenGL interface and the host's OS to transmit such graphics primitives as vertex co-ordinates, colours, and texture co-ordinates to a graphics server process that runs the OpenGL rasterizer.
The rasterizer performs one or more graphics operations based on the current OpenGL graphics state and generates pixels. The pixel values undergo a final processing stage that includes texture-mapping, fogging, antialiasing, depth testing, and blending. This information goes to the client for display in the application window. Normally, the client and server run on the same system. However, the advantage to the separation is that a low-cost desktop system can transmit OpenGL commands across a network to a high-performance system running the server process, which returns images to the desktop machine.
Compared to QD3D and Direct3D, OpenGL is not a high-level API. Its purpose is to draw objects. Such high-level tasks as object editing and file I/O are left up to the application. Instead, OpenGL provides a platform-independent interface to a low-level rendering engine. This interface consists of about 250 drawing commands that let the programmer describe objects and perform a set of operations to produce the final image. You can call OpenGL from a variety of languages, including C/C++, FORTRAN, Ada, and Java.
OpenGL supports both an immediate mode and a retained mode for graphics operations. In the immediate mode, an application sends graphics commands to OpenGL, which promptly executes them. This provides a quick response to any changes to an image, a critical feature for interactive graphics applications.
A number of vendors provide 3-D applications and toolkits for use with OpenGL. Open Inventor is a 3-D toolkit that supplies a number of built-in graphics tools such as colour and texture editors. It is available on workstations and Windows. It saves 3-D graphics in an Inventor file format that's a superset of the Virtual Reality Modelling Language (VRML), the industry-standard file format that describes 3-D objects on the Web.
OpenGL supports a wide range of graphics environments that cover the spectrum in terms of cost and performance. On the low end, it provides software-only rendering for desktop PCs. But it can also communicate directly with high-end workstations equipped with visualisation hardware that can draw 50 million polygons per second.
QD3D is a complete 3-D graphics environment. The top level is an API that enables developers to create and manipulate 3-D objects and to read or write 3-D data to files. This API talks to an extensible graphics pipeline that processes drawing operations. The pipeline in turn talks to a thin hardware abstraction layer (HAL) that provides a device-independent API for game designers. QD3D supports an immediate mode and a retained mode. The immediate mode is identical to OpenGL's in that it's up to the application to supply drawing commands to the renderer. In the retained mode, object-oriented-programming (OOP) structures maintain the scene's geometry for display and manipulation.
While the immediate mode offers fine-grained control for those applications that need it (e.g., animation, where all the objects change constantly), the retained mode lets you store the scene's geometry in an object database. The retained mode makes it easier to read and write objects, and it enables data structures to be cached for fast display or hardware acceleration. Programmers can opt to use one mode exclusively, or both as the situation demands.
Where QD3D differs radically from OpenGL is that it is an object-oriented graphics system. A new instance of an object can inherit characteristics from its class, including the geometry, size, orientation, colour, texture-maps, and lighting, which makes for rapid construction of a scene's objects. This also simplifies the maintenance of each object's information for manipulation and display.
High-level API commands let you create, rotate, edit, illuminate, and apply transformations to objects. A widgets mechanism provides visual "handles" that allow you to edit or scale an object interactively. Thanks to QD3D's object-oriented design, you need no knowledge of the 3-D object's internal structure to perform these actions. Currently, you can use only C/C++ to make QD3D calls.
Also unlike OpenGL, QD3D lets you read and write 3-D images in a common 3-D metafile (3DMF) format. This format saves not only each object's geometry, but also its lighting and texture-maps. Also, 3DMF enables applications to copy and paste 3-D graphics or drag and drop them into other applications. In March, Netscape, Silicon Graphics, and Apple jointly announced plans to develop a binary file format for VRML 2.0 based on the 3DMF format.
Direct3D, Microsoft's 3-D API, is a new addition to the company's DirectX interactive-media technology family. Like OpenGL and QD3D, Direct3D supports both immediate and retained modes of operation. Direct3D's retained mode resembles QD3D's in that it offers developers an object-oriented, high-level interface for manipulating 3-D objects. After you load an object using one API call, you can rotate and scale it using other API calls. The retained mode provides API calls that read and write a file format that stores 3-D data. This data consists of pre-defined 3-D objects, meshes, textures, and animation sets. This file format lets applications exchange 3-D information and play back animation sequences in real time.
Direct3D's immediate mode is a thin API layer that deals with polygons and vertices. The immediate-mode API passes display lists composed of vertex data and graphics commands (known in Microsoft terminology as execute buffers) to the rendering engine for processing. This system provides a high-performance, device-independent mechanism that lets programmers access a system's graphics hardware or tap into hardware accelerators. This mode doesn't offer any object or scene management engines; such functions are the application's responsibility.
The immediate mode is best suited for developers who have an application with its own rendering engine, yet wish to access a system's hardware-acceleration features. Currently, you use C/C++ to access Direct3D calls. Support for Visual Basic is planned.
Direct3D offers a variety of graphics objects. In the immediate mode, these objects include execute buffer, matrix (for transformations), light, texture, material (an object's surface properties), viewport (the screen region owned by the object), device (the hardware managing the screen), and interface. The retained mode provides additional objects based on the immediate-mode objects, including polygon face, mesh, frame (which manages the position and orientation of all the objects in a scene), shadow, and animate (used to apply transformations to an object, such as scaling and position).
Direct3D is more like OpenGL in that it provides drawing primitives, but not such basic objects as spheres, cylinders, or cones, as QD3D does. It's not clear at this time whether such objects will be available in a separate library, or whether you will have to construct them from scratch. Also, information on the interactive editing of these objects is lacking.