Video Cards

Posted by bender 04/03/2009 @ 22:14

Tags : video cards, components, hardware, technology

News headlines
MSI N260GTX Lightning Black Edition Video Card Review @ Hardware ... - Ninjalane
In a sea of boring reference-type video cards, MSI has opened our eyes to something very cool with its new designs. Read this review and admire the style the Lightning Black Edition from MSI has to offer. "It's been hard to evaluate video cards...
AMD Breaks 1 GHz Video Card Speed Barrier, Pleases AMD - Kotaku.com
AMD is pleased enough with its video card accomplishments to issue a press release—and pretty product shots!—about its 1 gigahertz ATI Radeon HD 4890. There's just a slight catch. The new ATI Radeon HD 4890 GPU does get to say "First!...
What's the Big Deal with CUDA and GPGPU anyway? - VizWorld.com
Video cards have been around for years, so what's happened in the last few years to make it so lucrative and desirable? Not just for consumers, but for big players in High Performance Computing? Come on inside for VizWorld's feature article on Why...
ATI Radeon HD 4770 Video Card Overclocking Guide - Legit Reviews
Since the Radeon HD 4770's 40nm GPU was fully stable at 830MHz/850MHz (core/memory), we downloaded RivaTuner v2.24, a great utility that provides nearly everything you may need to tune NVIDIA and ATI video cards....
Gigabyte GV-R477D5-512H-B Radeon HD 4770 Video Card Review - Jonny Guru
We already know one Radeon HD 4770 graphics card offers great performance for under a hundred bucks; the real question is what level of gaming performance two 4770s teamed up under CrossFire will deliver. Mainstream video cards often scale faster and...
Some PC fixes don't require special skills - Atlanta Journal Constitution
Adding a video card. A new video card can extend the life of an old computer. And those who play computer games might want a better card. The cards are important because most of what Windows does involves graphics — everything from the type on...
RIP standalone network media players - CNET News
... in this new world is spend your money wisely, invest in dsl/cable internet, purchase a computer with hdmi/dvi output, which is now becoming standard on most video cards not built-in to the motherboards (just ask a geek squad member or whatever,...
From Voodoo to GeForce: The History of 3D Graphics - Ve3d.com
While other videocards fused both 2D and 3D functionality onto a single board, the Voodoo1 concentrated solely on 3D and lacked any 2D capabilities. This meant consumers still needed a 2D graphics card for day to day computing, which would be connected...
Techie Tips for Travelers - Baltimore Sun
Buy the biggest memory card you can afford (or get two). Memory cards are available in Europe, but they're more expensive. I travel with a six-megapixel camera and a two-gigabyte memory card. Taking photos at high resolution, I can fit about 500 photos...
Boost Your Company's Image With Video - Alibaba News Channel
Your video doesn't have to be professionally produced, but in the same way that you wouldn't create your own business cards from index cards, you should consider improving the overall quality of your videos. To do this, you could use the free video...

Matrox

Matrox is a Canadian company based in Dorval, Quebec, which produces video card components and equipment for personal computers. It was founded by Lorne Trottier and Branko Matić. The "ma" from Matić and "tro" from Trottier, combined with an "x" for excellence, forms the Matrox name.

Matrox is the umbrella name for two legal entities. Matrox Graphics Inc., the entity best known to the public, has been designing graphics cards for over 30 years. The other entity, Matrox Electronic Systems Ltd., comprises two divisions: Imaging, which designs frame grabber hardware and software, and Digital Video Solutions, which provides video editing products for the broadcast and professional video markets.

Matrox Graphics specializes in professional multi-display video cards that enable more than one monitor to be driven by a single card. The targeted user-base for Matrox video cards largely consists of 2D, 3D, video, scientific, medical, military and financial workstation users.

Matrox's first graphics card product was the ALT-256 for S-100 bus computers, released in 1978. The ALT-256 produced a 256 by 256 pixel monochrome display by "racing the beam": the host CPU set registers on the fly to produce bit patterns as the screen was being drawn (see Atari 2600 for details). This mode of operation meant the ALT-256 required no frame buffer. An expanded version, the ALT-512, followed; both were also available for Intel SBC bus machines. Through the 1980s, Matrox's cards followed changes in the hardware side of the market, to Multibus and then the variety of PC standards.

During the 1990s, Matrox's "Millennium" line of video cards was noted for its exceptional 2D speed and visual quality, and the cards had a wide following among users willing to pay for a higher quality, sharper display. In 1994 Matrox introduced the Impression, an add-on card that worked in conjunction with a Millennium card to provide 3D acceleration. The Impression was aimed primarily at the CAD market and failed to make much of an impression on the rapidly emerging 3D gaming market. A later version of the Millennium included features similar to the Impression's, but lagged behind emerging vendors like 3dfx Interactive.

Matrox made several attempts to regain a foothold in a market increasingly dominated by 3D-capable cards. The Matrox Mystique, released in 1996, was their first attempt at a card with good gaming performance and a price point suited to that market, but a number of design decisions resulted in poor-quality 3D images and poor performance (while the 2D support remained excellent, as always). Matrox nevertheless made bold performance claims for the Mystique, and the card was widely derided in reviews for offering performance nowhere near that of the contemporary Voodoo1.

A refresh started with the short-lived G100, which was quickly replaced by the Matrox G200. The G200 was sold in two models: the Millennium G200, a higher-end version with faster SGRAM memory, and the Mystique G200, which used slower memory but added a TV-out port. The G200 offered competent 3D performance for the first time, but it was released shortly before a new generation of cards from Nvidia and ATI that completely outperformed it. Later models in the Matrox G400 series were never able to regain the crown, and despite bold claims for the Matrox Parhelia, Matrox's performance continued to be quickly outpaced by the major players.

Since then, Matrox has continued to shift the focus of its card designs toward specialized, niche markets, moving more deeply into enterprise, industrial, and government applications. In recent years it has held no more than a 3–5% share of the total video card market. Matrox is now divided into three divisions: Matrox Graphics, Matrox Video, and Matrox Imaging. Matrox Graphics is the primary consumer and end-user brand, while Matrox Video markets digital video editing solutions and Matrox Imaging sells high-end video capture systems and "smart cameras", video cameras with a built-in computer for machine vision applications.

To support Unix and Linux, Matrox has released binary-only drivers for most of its product line and one partially open-source driver for the G550 card, which comes with a binary blob to enable some additional functionality. In addition to the proprietary drivers provided by Matrox, the DRI community has provided fully GPL'd drivers for many more of its devices.




Scalable Link Interface

NVIDIA's SLI Ready logo. Products that are SLI certified bear this logo.

Scalable Link Interface (SLI) is a brand name for a multi-GPU solution developed by Nvidia for linking two or more video cards together to produce a single output. SLI is an application of parallel processing for computer graphics, meant to increase the processing power available for graphics.

The name SLI was first used by 3dfx under the full name Scan-Line Interleave, which was introduced to the consumer market in 1998 and used in the Voodoo2 line of video cards. After buying out 3dfx, Nvidia acquired the technology but did not use it. Nvidia later reintroduced the SLI name in 2004, intending it for use in modern computer systems based on the PCI Express (PCIe) bus. However, the technology behind the name has changed dramatically.

The basic idea of SLI is to allow two or more graphics processing units (GPUs) to share the workload when rendering a 3D scene. Ideally, two identical graphics cards are installed in a motherboard that contains two PCI-Express x16 slots and set up in a master-slave configuration. The 3D scene is split between the cards, with roughly half of the workload sent to the slave card through a connector called the SLI bridge. As an example, the master card works on the top half of the scene while the slave card works on the bottom half. When the slave card is done, it sends its output to the master card, which combines the two images into one and outputs the final render to the monitor.
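
To make the workload split concrete, here is a minimal sketch of the split-frame idea, written against the CUDA runtime rather than SLI itself (SLI's splitting happens transparently in Nvidia's driver, not in application code). The frame size, the trivial "shading" kernel, and the two-device loop are assumptions invented for the illustration; if only one GPU is present, the sketch simply reuses device 0 for both halves.

    // A minimal sketch of the split-frame idea only (not Nvidia's driver code):
    // each GPU shades half of the rows of a frame, and the halves are then
    // combined into one output buffer, as the master card would for display.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void shadeRows(unsigned char* half, int width, int rows, int rowOffset) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= rows) return;
        // Stand-in for real shading: encode the absolute row number.
        half[y * width + x] = (unsigned char)((y + rowOffset) & 0xFF);
    }

    int main() {
        const int W = 640, H = 480, HALF = H / 2;
        std::vector<unsigned char> frame(W * H);

        int deviceCount = 1;
        cudaGetDeviceCount(&deviceCount);          // fall back to one GPU if needed

        for (int i = 0; i < 2; ++i) {              // i == 0: "master", i == 1: "slave"
            cudaSetDevice(i < deviceCount ? i : 0);
            unsigned char* d_half = nullptr;
            cudaMalloc(&d_half, W * HALF);

            dim3 block(16, 16), grid((W + 15) / 16, (HALF + 15) / 16);
            shadeRows<<<grid, block>>>(d_half, W, HALF, i * HALF);

            // The "slave" sends its finished half back to be combined with the
            // "master" half into the final frame.
            cudaMemcpy(frame.data() + i * HALF * W, d_half, W * HALF,
                       cudaMemcpyDeviceToHost);
            cudaFree(d_half);
        }
        printf("combined a %dx%d frame from two half-frames\n", W, H);
        return 0;
    }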

In its early implementations, motherboards capable of SLI required a special card (colloquially known as a "paddle card") which came with the motherboard. This card would fit into a socket usually located between both of the PCI-Express x16 slots. Depending on which way the card was inserted, the motherboard would either channel all 16 lanes into the primary PCI-Express x16 slot, or split lanes equally to both PCI-Express x16 slots. This was necessary as no motherboard at that time had enough PCI-Express lanes for both to have 16 lanes each. Thanks to the advancement in available PCI-Express lanes, most modern SLI-capable motherboards allow each video card to use all 16 lanes in both PCI-Express x16 slots.

The SLI bridge is used to reduce bandwidth constraints and send data between both graphics cards directly. It is possible to run SLI without using the bridge connector on a pair of low-end to mid-range graphics cards (e.g. 7100GS or 6600GT) with Nvidia's Forceware drivers 80.XX or later. Since these graphics cards do not use as much bandwidth, data can be relayed through just the chipsets on the motherboard. However, if no SLI bridge is used on two high-end graphics cards, the performance suffers severely as the chipset does not have enough bandwidth.

Nvidia has created a set of custom video game profiles in cooperation with video game publishers that will automatically enable SLI in the mode that gives the largest performance boost. It is also possible to create custom game profiles or modify pre-defined profiles using their Coolbits software.

For more information on SLI-optimized games, visit Nvidia's SLI Zone.

In February 2005, Gigabyte Technology released the GV-3D1, a single video card that uses Nvidia's SLI technology to run two 6600-series GPUs. Due to technical issues with compatibility, at release the card was supported by only one of Gigabyte's own motherboards, with which it was bundled. Later came the GV-3D1-68GT, functionally similar and possessing similarly-limited motherboard compatibility, but with 6800 GPUs in place of the GV-3D1's 6600 units.

Around March 2006, ASUS released the N7800GT Dual. Similar to Gigabyte's design, it had two 7800GT GPUs mounted on one video card. Again, this faced several issues, such as high price (it retailed for around US$800, while two separate 7800GTs were cheaper at the time), limited release, and limited compatibility. It would only be supported on the nForce4 chipset and only a few motherboards could actually utilize it. It was also one of the first video cards with the option to use an external power supply if needed.

In January 2006, Nvidia released the 7900 GX2, their own attempt at a dual-GPU card. Effectively, this product is a pair of slightly lower clocked 7900GTX cards "bridged" together into one discrete unit, with separate frame buffers for both GPUs (512MB of GDDR3 each). The GeForce 7900 GX2 is only available to OEM companies for inclusion in quad-GPU systems, and it cannot be bought in the consumer market. The Dell XPS, announced at the 2006 Consumer Electronics Show, used two 7900 GX2's to build a quad-GPU system. Later, Alienware acquired the technology in March.

The official implementations of dual-GPU graphics cards work in the same fashion. Two GPUs are placed on two separate printed circuit boards (PCBs), with their own power circuitry and memory. Both boards have slim coolers, cooling the GPU and memory. The 'primary' GPU can be considered to be the one on the rear board, or 'top' board (being on top when in a standard ATX system). The primary board has a physical PCIe x16 connector, and the other has a round gap in it to provide cooling for the primary HSF. Both boards are connected to each other by two physical links; one for 16 PCI-Express lanes, and one for the 400 MHz SLI bridge. An onboard PCI-Express bridge chip, with 48 lanes in total, acts as the MCP does in SLI motherboards, connecting to both GPUs and the physical PCI-Express slot, removing the need for the motherboard to support SLI.

A newer version, the GeForce 7950 GX2, which addressed many issues in the 7900 GX2, was available to consumers for separate purchase.

The GeForce 9800 GX2, Nvidia's next attempt at a multi-GPU solution, was released in March 2008 at a launch price of $599. This time the two GPUs sat on separate PCBs facing each other, sharing one large double-wide cooling fan, and the GX2 could expand to a total of four GPUs when paired in SLI. The 9800 GX2 launched alongside the single-GPU 65nm 9800 GTX, whose own launch price was $349. Three months later, with the 9800 GX2 selling at $299 and the GTX 260 and the improved 55nm 9800 GTX+ becoming available, Nvidia found its product line competing with itself. Nvidia elected to move on to the GTX 200 series and later lineups rather than expand the 55nm G92 into a GX2 form factor, leaving mid-range audiences with the options of the 9800 GT and 9800 GTX+.

In January 2009, the new GTX 200 series based GeForce GTX 295 was released at a price of $499. It combines two GeForce GTX 260 GPUs in a similar sandwich design, with two graphics PCBs facing each other and a large double-wide cooling fan in between, but with all the GDDR3 RAM modules on the same half of each board as the corresponding GPU, a feature the initial GTX 200 boards and the 9800 GX2 board did not have. Each GPU retains the full 240 shaders of the GTX 280, bringing the card to a total of 480 shader units.

In early 2006, Nvidia revealed its plans for Quad SLI. When the 7900 GX2 was originally demonstrated, it was with two such cards in an SLI configuration. This is possible because each GX2 has two extra SLI connectors, separate from the bridges used to link the two GPUs in one unit – one on each PCB, one per GPU, for a total of two links per card. When two GX2 graphics cards are installed in an SLI motherboard, these SLI connectors are bridged using two separate SLI bridges. (In such a configuration, if the four PCBs were labeled A, B, C, D from top to bottom, A and C would be linked by an SLI bridge, as would B and D.) This way, four GPUs can contribute to performance. The 7950 GX2, sold as an enthusiast-friendly card, omits the external SLI connector on one of its PCBs, meaning that only one SLI bridge is required to run two 7950 GX2s in SLI.

Quad SLI did not show any massive improvements in gaming at the common resolutions of 1280x1024 and 1600x1200, but it has shown improvements by enabling 32x anti-aliasing in SLI-AA mode and by supporting 2560x1600 at much higher framerates than are possible with single- or dual-GPU systems at maximum settings in modern games. It was believed that high latencies severely marginalized the benefits of four GPUs; however, much of the blame for poor performance scaling lies with Windows XP's API, which only allows a maximum of three extra frames to be buffered. Windows Vista is not limited in this fashion and shows promise for future multi-GPU configurations.

Nvidia has also revealed a triple SLI setup for the nForce 700 series motherboards, which only works on Vista. The setup can be achieved using three high-end video cards with two MIO ports and a specially wired connector (or three flexible connectors used in a specific arrangement). The technology was officially announced in December 2007, shortly after the revised G92-based 8800GTS made its way out of the factory. In practical terms, it delivers up to a 2.8x performance increase over a single GPU system.

Unlike traditional SLI or CrossFire X, 3-way SLI is limited to the GeForce 8800 GTX, 8800 Ultra and 9800 GTX, the GTX 260 and GTX 280 introduced in June 2008, and later the 9800 GTX+, running on the 680i, 780i and 790i chipsets, whereas CrossFire X can theoretically be used with multiple Radeon HD 2400 cards.

The Nvidia Quadro Plex is an external visual computing system (VCS) designed for large-scale 3D visualizations. The system consists of a box containing a pair of high-end Nvidia graphics cards featuring a variety of external video connectors. A special PCI Express card is installed in the host computer, and the two are connected by VHDCI cables.

The Nvidia Quadro Plex system supports up to four GPUs per unit. It connects to the host PC via a small form factor PCI Express card connected to the host, and a 2 meter (6.5 foot) Nvidia Quadro Plex Interconnect Cable. The system is housed in an external case that is approximately 9.49 inches in height, 5.94 inches in width, and 20.55 inches in depth and weighs about 19 pounds. The system relies heavily on Nvidia's SLI technology.

In response to ATI offering a discrete physics calculation solution in a tri-GPU system, Nvidia announced a partnership with physics middleware company Havok to incorporate a comparable system. Although this would eventually become the Quantum Effects technology, many motherboard companies began producing boards with three PCI-Express x16 slots in anticipation of this implementation being used.

In February 2008, Nvidia acquired physics hardware and software firm Ageia, with plans to increase the market penetration of PhysX beyond its fairly limited use in games, most notably titles based on Unreal Engine 3. In July 2008, Nvidia released a beta PhysX driver supporting GPU acceleration, followed by an official launch on August 12, 2008. This allows PhysX acceleration on the primary GPU, on a different GPU, or on both GPUs in SLI.

In January 2009, Mirror's Edge for Microsoft Windows, developed by DICE and distributed by EA, became the first major title to add Nvidia PhysX, enhancing in-game visual effects and adding gameplay elements.

Also in response to the PowerXpress technology from AMD, a configuration of similar concept named "Hybrid SLI" was announced on January 7, 2008. The setup consists of an IGP as well as a discrete GPU on an MXM module. The IGP assists the GPU to boost performance when the laptop is plugged into a power socket, while the MXM module is shut down when the laptop is unplugged to lower overall graphics power consumption.

Hybrid SLI is also available on desktop motherboards and PCs with PCI-E discrete video cards. Nvidia claims that twice the performance can be achieved with a Hybrid SLI-capable IGP motherboard and a GeForce 8400 GS video card.

On desktop systems, the motherboard chipsets nForce 720a, 730a, 750a SLI and 780a SLI and the motherboard GPUs GeForce 8100, 8200, 8300 and 9300 support Hybrid SLI (GeForce Boost and HybridPower). The GeForce 8400 GS and 8500 GT GPUs support GeForce Boost, while the 9800 GT, 9800 GTX, 9800 GTX+, 9800 GX2, GTX 260 and GTX 280 support HybridPower.




Nvidia


Nvidia (NASDAQ: NVDA, pronounced /ɪnˈvɪ.di.ə/) is a multinational corporation specializing in the development of graphics-processor technologies for workstations, desktop computers, and mobile devices. Based in Santa Clara, California, the company has become a major supplier of integrated circuits (ICs) used in personal-computer motherboard chipsets, graphics processing units (GPUs), and video-game consoles.

Notable Nvidia product lines include the GeForce series for gaming and the Quadro series for graphics processing on workstations, as well as the nForce series of integrated motherboard chipsets.

Jen-Hsun Huang (the present CEO), Curtis Priem, and Chris Malachowsky co-founded the company in 1993 with venture-capital funding from Sequoia Capital.

In 2000 Nvidia acquired the intellectual assets of its one-time rival 3dfx, one of the biggest graphics companies of the mid- to late-1990s.

On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party Southbridge parts for chipsets to ATI, Nvidia's competitor. In March 2006, Nvidia acquired Hybrid Graphics and on January 5, 2007, it announced that it had completed the acquisition of PortalPlayer, Inc.

In December 2006 Nvidia, along with its main rival in the graphics industry AMD (which acquired ATI), received subpoenas from the Justice Department regarding possible antitrust violations in the graphics card industry.

Forbes magazine named Nvidia its Company of the Year for 2007, citing its accomplishments during that period as well as over the previous five years.

The company's name combines an initial "n" (a letter usable as a pronumeral in mathematical statements) with the root of "video", from the Latin videre, "to see", thus implying "the best visual experience" or perhaps "immeasurable display". The name also suggests "envy" (Spanish envidia; Latin, Italian, and Romanian invidia), and Nvidia's GeForce 8 series used the slogan "Green with envy". In company technical documentation, the name appears entirely in upper case ("NVIDIA").

Nvidia's product portfolio includes graphics processors, wireless communications processors, PC platform (motherboard core-logic) chipsets, and digital media player software. The computing public arguably knows Nvidia best for its "GeForce" product line, which not only offers a complete range of discrete graphics chips found in add-in-board (AIB) video cards, but also provides core technology for both the Microsoft Xbox game console and nForce motherboards.

In many respects Nvidia resembles its competitor ATI: both companies began with a focus on the PC market and later expanded their activities into chips for non-PC applications. Nvidia does not sell graphics boards into the retail market, focusing instead on the development of GPU chips. As a fabless semiconductor company, Nvidia contracts chip manufacturing to Taiwan Semiconductor Manufacturing Company, Ltd. (TSMC). As part of their operations, both ATI and Nvidia create "reference designs" (circuit board schematics) and provide manufacturing samples to their board partners. Manufacturers of Nvidia cards include BFG, EVGA, Foxconn, and PNY. XFX, ASUS, Gigabyte Technology, and MSI exemplify manufacturers of both ATI and Nvidia cards.

On February 4, 2008, Nvidia announced plans to acquire physics software producer AGEIA, whose PhysX physics engine program forms part of hundreds of games shipping or in development for PlayStation 3, Xbox 360, Wii, and gaming PCs. This transaction completed on February 13, 2008 and efforts to integrate PhysX into the GeForce 8800's CUDA system began.

On June 2, 2008 Nvidia officially announced its new Tegra product-line. These "computers on a chip" integrate CPU (ARM), GPU, northbridge, southbridge and primary memory functionality onto a single chip. Commentators opine that Nvidia will target this product at the smart-phone and mobile Internet device sector.

Nvidia does not publish the documentation for its hardware, meaning that programmers cannot write appropriate and effective open-source drivers for Nvidia's products. Instead, Nvidia provides its own binary GeForce graphics drivers for X.Org and a thin open-source library that interfaces with the Linux, FreeBSD or Solaris kernels and the proprietary graphics software. Nvidia also supports an obfuscated open-source driver that only supports two-dimensional hardware acceleration and ships with the X.Org distribution. Nvidia's Linux support has promoted mutual adoption in the entertainment, scientific visualization, defense and simulation/training industries, traditionally dominated by SGI, Evans & Sutherland and other relatively costly vendors.

Because of the proprietary nature of Nvidia's drivers, they continue to generate controversy within the free-software communities. Some Linux and BSD users insist on using only open-source drivers, and regard Nvidia's insistence on providing nothing more than a binary-only driver as wholly inadequate, given that competing manufacturers like Intel offer support and documentation for open-source developers, and that others like ATI at least release partial documentation. Because of the closed nature of the drivers, Nvidia video cards do not deliver adequate features on several platforms and architectures, such as FreeBSD on the x86-64 architecture and the other BSD operating systems on any architecture. Support for three-dimensional graphics acceleration in Linux on the PowerPC does not exist; nor does support for Linux on the hypervisor-restricted PlayStation 3 console. While some users accept the Nvidia-supported drivers, many users of open-source software would prefer better out-of-the-box performance if given the choice. However, the performance and functionality of the binary Nvidia video card drivers surpass those of open-source alternatives following VESA standards.

Nvidia drivers have caused known issues on computers running Windows Vista. The forums on the Nvidia homepage contain various threads in which users discuss driver failure-and-recovery errors without finding a solution.

X.Org Foundation and Freedesktop.org have started the Nouveau project, which aims to develop free software drivers for Nvidia graphics cards by reverse-engineering Nvidia's current proprietary drivers for Linux.

According to a survey conducted in the third quarter of 2007 by Jon Peddie Research, a market-watch firm, Nvidia occupied the top slot in the desktop graphics device market with a 37.8% share. In the mobile space, however, it remained third with 22.8% of the market. Overall, Nvidia has maintained its position as the second-largest supplier of PC graphics chips, counting both integrated and discrete GPUs, with a 33.9% market share, its highest in many years, putting it just behind Intel (38%).

According to the Steam hardware survey conducted by the game developer Valve, Nvidia had 64.64% of the PC video card market as of 1 December 2008, while ATI had 27.12%. This could relate to Valve releasing trial versions of The Orange Box, which link to the survey, to Nvidia graphics card users. However, free copies of The Orange Box were also released to ATI card purchasers, notably those who purchased the Radeon 2900XT.

Nvidia released its first graphics card, the NV1, in 1995. Its design used quadratic surfaces, with an integrated playback-only sound-card and ports for Sega Saturn gamepads. Because the Saturn also used forward-rendered quadratics, programmers ported several Saturn games to play on a PC with NV1, such as Panzer Dragoon and Virtua Fighter Remix. However, the NV1 struggled in a market-place full of several competing proprietary standards.

Market interest in the product ended when Microsoft announced the DirectX specifications, based upon polygons. Subsequently NV1 development continued internally as the NV2 project, funded by several millions of dollars of investment from Sega. Sega hoped that an integrated sound-and-graphics chip would cut the manufacturing cost of their next console. However, Sega eventually realized the flaws in implementing quadratic surfaces, and the NV2 was never fully developed.

Nvidia's CEO Jen-Hsun Huang realized at this point that after two failed products, something had to change for the company to survive. He hired David Kirk as Chief Scientist from software-developer Crystal Dynamics. Kirk combined the company's experience in 3D hardware with an intimate understanding of practical implementations of rendering.

As part of the corporate transformation, Nvidia sought to fully support DirectX, and dropped multimedia functionality in order to reduce manufacturing costs. Nvidia also adopted the goal of an internal 6-month product-cycle, under the supposition that the failure of any one product could be mitigated by having a replacement waiting in the pipeline.

However, since the Sega NV2 contract remained secret, and since Nvidia had laid off employees, it appeared to many industry observers that Nvidia had ceased active research-and-development. So when Nvidia first announced the RIVA 128 in 1997, the specifications were hard to believe: performance superior to market leader 3dfx Voodoo Graphics, and a full hardware triangle setup engine. The RIVA 128 shipped in volume, and the combination of its low cost and high performance made it a popular choice for OEMs.

Having finally developed and shipped in volume the market-leading integrated graphics chipset, Nvidia set itself the goal of doubling the number of pixel pipelines in its chip, in order to realize a substantial performance-gain. The TwiN Texel (RIVA TNT) engine which Nvidia subsequently developed could either apply two textures to a single pixel, or process two pixels per clock-cycle. The former case allowed for improved visual quality, the latter for doubling the maximum fill-rate.

New features included a 24-bit Z-buffer with 8-bit stencil support, anisotropic filtering, and per-pixel MIP mapping. In certain respects (such as transistor-count) the TNT had begun to rival Intel's Pentium processors for complexity. However, while the TNT offered an astonishing range of quality integrated features, it failed to displace the market leader, 3dfx's Voodoo 2, because the actual clock-speed ended up at only 90 MHz, about 35% less than expected.

Nvidia responded with a refresh part: a die shrink for the TNT architecture from 350 nm to 250 nm. A stock TNT2 now ran at 125 MHz, an Ultra at 150 MHz. Though the Voodoo 3 beat Nvidia to the market, 3dfx's offering proved disappointing: it was not much faster and lacked features that were becoming standard, such as 32-bit color and textures of resolution greater than 256 x 256 pixels.

The RIVA TNT2 marked a major turning point for Nvidia. It had finally delivered a product competitive with the fastest on the market, with a superior feature set and strong 2D functionality, all integrated onto a single die with strong yields that ramped to impressive clock speeds. Nvidia's six-month refresh cycle took the competition by surprise, giving it the initiative in rolling out new products.

The autumn of 1999 saw the release of the GeForce 256 (NV10), most notably bringing on-board transformation and lighting. It ran at 120 MHz; it implemented advanced video-acceleration, motion-compensation and hardware sub-picture alpha-blending; and had four pixel pipelines. The GeForce outperformed existing products — such as the ATI Rage 128, 3dfx Voodoo 3, Matrox G400 MAX, and RIVA TNT2 — by a wide margin.

Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft’s Xbox game-console, which earned Nvidia a large $200 million advance. However, the project drew the time of many of Nvidia's best engineers. In the short term, this was of no importance, and the GeForce 2 GTS shipped in the summer of 2000.

The GTS benefited from the fact that Nvidia had by this time acquired extensive manufacturing experience with their highly integrated cores, and as a result they succeeded in optimizing the core for clock-speeds. The volume of chips produced by Nvidia also enabled it to bin-split parts, picking out the highest-quality cores for its premium range. As a result, the GTS shipped at 200 MHz. The pixel fill rate of the GeForce256 nearly doubled, and texel-fill rate nearly quadrupled because multi-texturing was added to each pixel pipeline. New features included S3TC compression, FSAA, and improved MPEG-2 motion compensation.

Shortly afterward Nvidia launched the GeForce 2 MX, intended for the budget and OEM market. It had two pixel-pipelines fewer, and ran at 165 MHz and later at 250 MHz. Offering strong performance at a mid-range price, the GeForce 2MX became one of the most successful graphics chipsets. Nvidia also shipped a mobile derivative called the GeForce2 Go at the end of 2000.

Nvidia's success proved too much for 3dfx to recover its past market-share. The long-delayed Voodoo 5, the successor to the Voodoo 3, did not compare favorably with the GeForce 2 in either price or performance, and failed to generate the sales needed to keep the company afloat. With 3dfx on the verge of bankruptcy near the end of 2000, Nvidia purchased most of 3dfx's intellectual property (in dispute at the time). Nvidia also acquired anti-aliasing expertise and about 100 engineers (but not the company itself, which filed for bankruptcy in 2002).

At this point, Nvidia’s market position was dominant. However, ATI Technologies remained competitive due to its new Radeon product, which performed mostly on a par with the GeForce 2 GTS. Though ATI's answer to the GeForce 3, the Radeon 8500, came later to market and initially suffered from driver issues, the 8500 proved a superior competitor due to its lower price and untapped potential for growth. Nvidia countered ATI's offering with the GeForce 4 Ti line, but not before the 8500 carved out a niche. ATI opted to work on its next-generation Radeon 9700 rather than on a direct competitor to the GeForce 4 Ti.

During the development of the next-generation GeForce FX chips, many of Nvidia’s best engineers focused on the Xbox contract, including the API used as part of the SoundStorm platform. Nvidia also had a contractual obligation to develop newer and more hack-resistant NV2A chips, and this requirement further shortchanged the FX project. The Xbox contract did not allow for falling manufacturing costs as processor technology improved, and Microsoft sought to re-negotiate the terms of the contract, withholding the DirectX 9 specifications as leverage. Relations between the two companies, which had previously been very good, deteriorated as a result. Both parties later settled the dispute through arbitration and the terms were not released to the public.

Due to the Xbox dispute, no consultation with Nvidia took place during the development of the DirectX 9 specification. ATI limited rendering color support to 24-bit floating point, and emphasized shader performance. Developers built the shader-compiler using the Radeon 9700 as the base card.

In contrast, Nvidia’s cards offered 16- and 32-bit floating point modes, offering either lower visual quality (as compared to the competition), or slower performance. The 32-bit support made them much more expensive to manufacture, requiring a higher transistor count. Shader performance often remained at half or less of the speed provided by ATI's competing products. Having made its reputation by designing easy-to-manufacture DirectX-compatible parts, Nvidia had misjudged Microsoft’s next standard and paid a heavy price: as more and more games started to rely on DirectX 9 features, the poor shader performance of the GeForce FX series became more obvious. With the exception of the FX 5700 series (a late revision), the FX series did not compete well against ATI cards.

Nvidia released an "FX only" demo called Dawn, but a hacked wrapper enabled it to run on a 9700, where it ran faster despite translation overhead. Nvidia began to use application detection to optimize their drivers. Hardware review sites published articles showing that Nvidia's driver auto-detected benchmarks and produced artificially inflated scores that did not relate to real-world performance. Often it was tips from ATI's driver development team that lay behind these articles. While Nvidia did partially close the performance gap with new instruction reordering capabilities introduced in later drivers, shader performance remained weak and over-sensitive to hardware-specific code compilation. Nvidia worked with Microsoft to release an updated DirectX compiler that generated code optimized for the GeForce FX.

Furthermore, GeForce FX devices ran hot, drawing as much as double the power of equivalent parts from ATI. The GeForce FX 5800 Ultra became notorious for its fan noise and acquired the nicknames "dustbuster" and "leafblower"; Nvidia jokingly acknowledged these criticisms with a video in which the marketing team compared the card to a Harley-Davidson motorcycle. Although the quieter 5900 replaced the 5800 without fanfare, the FX chips still needed large and expensive fans, placing Nvidia's partners at a manufacturing cost disadvantage compared to ATI. As a result of Microsoft's actions and the resulting weaknesses of the FX series, Nvidia lost its market leadership position to ATI.

With the GeForce 6 series, Nvidia had clearly moved beyond the DX9 performance problems that plagued the previous generation. The GeForce 6 series not only performed competitively where Direct3D shaders were concerned, but also supported DirectX Shader Model 3.0, while ATI's competing X800 series chips only supported the previous 2.0 specification. This proved an insignificant advantage, mainly because games of that period did not employ extensions for Shader Model 3.0, but it demonstrated Nvidia's willingness to design the newest features and deliver them in a specific timeframe. What became more apparent during this time was that the products of the two firms, ATI and Nvidia, offered equivalent performance. The two traded blows in specific titles and on specific criteria (resolution, image quality, anisotropic filtering/anti-aliasing), but the differences were becoming more abstract, and the reigning concern became price-to-performance. The mid-range offerings of the two firms demonstrated consumers' appetite for affordable, high-performance graphics cards, and it is now in this price segment that much of the firms' profitability is determined.

The GeForce 6 series was also released in an interesting period: the game Doom 3 had just come out, and ATI's Radeon 9700 struggled with its OpenGL performance. In 2004, the GeForce 6800 performed excellently, while the GeForce 6600GT proved as important to Nvidia as the GeForce 2 MX had been a few years earlier. The 6600GT enabled users to play Doom 3 at very high resolutions and graphical settings, which was thought highly unlikely considering its selling price. The GeForce 6 series also introduced SLI (similar in spirit to what 3dfx had used on the Voodoo 2), and the combination of SLI and these performance gains returned Nvidia to market leadership.

The GeForce 7 series represented a heavily beefed-up extension of the reliable 6-series. The industry's introduction of the PCI Express bus standard allowed Nvidia to release SLI (Scalable Link Interface), a solution that employs two similar cards to share the workload in rendering. While these solutions do not equate to double the performance, and require more electricity (two cards vis-à-vis one), they can make a huge difference as higher resolutions and settings are enabled and, more importantly, offer more upgrade flexibility. ATI responded with the X1000 series, and their own dual-rendering solution called "CrossFire". Sony chose Nvidia to develop the "RSX" chip used in the PlayStation 3 — a modified version of the 7800 GPU.

Nvidia released the 8-series chips towards the end of 2006, making the 8-series the first to support Microsoft's next-generation DirectX 10 specification. The 8-series GPUs also featured the revolutionary Unified Shader Architecture, and Nvidia leveraged this to provide additional functionality for its graphics cards: better support for general-purpose computing on the GPU (GPGPU). A new product line of "compute-only" devices called Nvidia Tesla emerged from the G80 architecture, and Nvidia subsequently became the market leader in this new field by introducing CUDA, the world's first C-language API for GPGPU.
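
For a flavor of what GPGPU code written against CUDA looks like, below is a minimal vector-addition example using the CUDA runtime API. The array size, kernel name, and launch configuration are arbitrary choices for the sketch, not anything mandated by the G80 architecture.

    // Minimal CUDA example: add two float vectors on the GPU.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        float *da, *db, *dc;
        cudaMalloc(&da, n * sizeof(float));
        cudaMalloc(&db, n * sizeof(float));
        cudaMalloc(&dc, n * sizeof(float));
        cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // launch enough blocks of 256 threads

        cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expect 3.0)\n", c[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }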

Nvidia released two models of the high-end 8-series (8800) chip: the 8800GTS (640MB and 320MB) and the 8800GTX (768MB). Later, Nvidia released the 8800 Ultra (essentially an 8800GTX with a different cooler and higher clocks). All three of these cards derive from the 90 nm G80 core (with 681 million transistors). The GTS model had 96 stream processors and 20 ROPs, while the GTX/Ultra had 128 stream processors and 24 ROPs.

In early 2007, Nvidia released the 8800GTS 320MB. This card resembles an 8800GTS 640, but with 32MB memory chips instead of 64MB (the cards contained 10 memory chips).

In October 2007, Nvidia released the 8800GT. The 8800GT used the new 65 nm G92 GPU and had 112 stream processors. It contained 512MB of VRAM and operated on a 256-bit bus. It had several fixes and new features that the previous 8800s lacked.

Later in December 2007 Nvidia released the 8800GTS G92. It represented a larger 8800GT with higher clocks and all of the 128 stream processors of the G92 unlocked. Both the 8800GTS G92 and 8800GT have full PCI Express 2.0 support.

In February 2008, Nvidia released the 9600-series chip, which supports Microsoft's DirectX 10 specification, in response to ATI's release of the Radeon HD3800 series. In March, Nvidia released the GeForce 9800 GX2, which, roughly put, packs two GeForce 8800 GTS G92s into a single card.

In June 2008, Nvidia released their new flagship GPUs, the GTX 280 and GTX 260. The cards use the same basic unified architecture deployed in the previous 8- and 9-series cards, but with a tune-up in power. Both cards are based on the GT200 GPU, which contains 1.4 billion transistors on a 65 nm fabrication process; according to TSMC, it was the largest die the foundry had ever fabricated. The GTX 280 has 240 shaders (stream processors) and the GTX 260 has 192 (revised in September 2008 to 216 shaders). The GTX 280 has 1GB of GDDR3 VRAM on a 512-bit memory bus, while the GTX 260 has 896MB of GDDR3 VRAM on a 448-bit memory bus. The GTX 280 allegedly provides approximately 933 GFLOPS of floating-point power.

In January 2009, Nvidia released a 55 nm die shrink of the GT200 called the GT200b. The update to the GTX 280, sold as the GTX 285, allegedly provides 1062.72 GFLOPS of floating-point power; an updated GTX 260 (still called the GTX 260) carries 216 shaders; and a dual-chip card, the GTX 295, features two GT200b chips that are a hybrid of the cores used on the original GTX 280 and GTX 260: each GPU has 240 stream processors but only a 448-bit memory bus. The GTX 295 carries 1.75GB (1792MB, 896MB per GPU) of GDDR3 VRAM and allegedly provides approximately 1788.48 GFLOPS of floating-point power.
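
As a rough cross-check, these peak figures follow from shader count × shader clock × 3 FLOPs per shader per clock. The shader ("hot") clock values used below, 1296, 1476 and 1242 MHz, are commonly cited numbers assumed for the calculation rather than figures given in the text above.

    // Rough sanity check of the quoted peak-GFLOPS numbers, assuming
    // 3 FLOPs per shader per clock and the shader clocks noted below
    // (assumed values, not taken from the text above).
    #include <cstdio>

    static double peakGflops(int shaders, double shaderClockMhz) {
        return shaders * shaderClockMhz * 3.0 / 1000.0;   // MFLOPS -> GFLOPS
    }

    int main() {
        printf("GTX 280: %.2f GFLOPS\n", peakGflops(240, 1296.0));      // ~933.12
        printf("GTX 285: %.2f GFLOPS\n", peakGflops(240, 1476.0));      // ~1062.72
        printf("GTX 295: %.2f GFLOPS\n", 2 * peakGflops(240, 1242.0));  // ~1788.48
        return 0;
    }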

March 2009 saw the release of the GTS 240 and GTS 250 mainstream chips, based on the previous-generation G92 but using the 55 nm die shrink code-named G92b. The GTS 240 (based on the 9800 GT) has 112 shaders (stream processors) and a 256-bit memory bus, while the GTS 250 (based on the 9800 GTX+) has 128 shaders, also on a 256-bit memory bus, with 0.5GB or 1GB of GDDR3 VRAM.

In July 2008, Nvidia noted increased rates of failure in certain mobile video adapters. A writer for The Inquirer alleged that the problems potentially affect all G84 and G86 video adapters, both mobile and desktop, though Nvidia has denied this. In response to the issue, Dell and HP released BIOS updates for all affected notebook computers that turn on the cooling fan earlier than before, in an effort to keep the defective video adapter at a lower temperature. Leigh Stark has suggested that this may lead to premature failure of the cooling fan, and it is also possible that this fix merely delays component failure until after warranty expiration.

In August 2008 rumors emerged that these issues also affected G92 & G94 mobile video adapters. But at the end of August 2008, Nvidia reportedly issued a product-change notification announcing plans to update the bump material of GeForce 8 and 9 series chips “to increase supply and enhance package robustness”. In response to the possibility of defects in some mobile video adapters from Nvidia, some notebook manufacturers have allegedly turned to ATI to provide graphics options on their new Montevina notebook computers.

On 18 August 2008, according to the direct2dell.com blog, Dell began to offer a 12-month limited warranty "enhancement" specific to this issue on affected notebook computers worldwide.

On 9 October 2008, Apple Inc. announced on a support page that MacBook Pro notebook computers had exhibited faulty Nvidia GeForce 8600M GT graphics adapters. The manufacture of affected computers took place between approximately May 2007 and September 2008. Apple also stated that they would repair MacBook Pros affected within two years of the original purchase date free-of-charge and also offered refunds to customers who had paid for repairs related to this issue.

On 9 December 2008, The Inquirer conducted another series of tests to check whether the new MacBook Pro notebook computers used eutectic solder or high-lead solder. They found that the 9400M chipset used eutectic solder, while the 9600M used a high-lead solder which they associated with the "old process" responsible for the failures.





Graphics processing unit

GeForce 6600GT (NV43) GPU

A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a dedicated graphics rendering device for a personal computer, workstation, or game console. Modern GPUs are very efficient at manipulating and displaying computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. A GPU can sit on top of a video card, or it can be integrated directly into the motherboard. More than 90% of new desktop and notebook computers have integrated GPUs, which are usually far less powerful than those on a video card.

The ANTIC and CTIA chips provided for hardware control of mixed graphics and text modes, sprite positioning and display (a form of hardware blitting), and other effects on Atari 8-bit computers. The ANTIC chip was a special purpose processor for mapping (in a programmable fashion) text and graphics data to the video output. The designer of the ANTIC chip, Jay Miner, subsequently designed the graphics chip for the Commodore Amiga.

The Commodore Amiga was the first mass-market computer to include a blitter in its video hardware, and IBM's 8514 graphics system was one of the first PC video cards to implement 2D primitives in hardware.

The Amiga was unique for the time in that it featured what would now be recognized as a full graphics accelerator, offloading practically all video generation functions to hardware, including line drawing, area fill, block image transfer, and a graphics coprocessor with its own (though primitive) instruction set. Before this (and for quite some time afterward on most systems), a general-purpose CPU had to handle every aspect of drawing the display.

By the early 1990s, the rise of Microsoft Windows sparked a surge of interest in high performance, high-resolution 2D bitmapped graphics (which had previously been the domain of Unix workstations and the Apple Macintosh). For the PC market, the dominance of Windows meant PC graphics vendors could now focus development effort on a single programming interface, Graphics Device Interface (GDI).

In 1991, S3 Graphics introduced the first single-chip 2D accelerator, the S3 86C911 (which its designers named after the Porsche 911 as an indication of the performance increase it promised). The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and these coprocessors faded away from the PC market.

Throughout the 1990s, 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces (APIs) arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x, and their later DirectDraw interface for hardware acceleration of 2D games within Windows 95 and later.

In the early and mid-1990s, CPU-assisted real-time 3D graphics were becoming increasingly common in computer and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-marketed 3D graphics hardware can be found in fifth generation video game consoles such as PlayStation and Nintendo 64. In the PC world, notable failed first-tries for low-cost 3D graphics chips were the S3 ViRGE, ATI Rage, and Matrox Mystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many were even pin-compatible with the earlier-generation chips for ease of implementation and minimal cost. Initially, performance 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D GUI acceleration entirely) such as the 3dfx Voodoo. However, as manufacturing technology again progressed, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. Rendition's Verite chipsets were the first to do this well enough to be worthy of note.

OpenGL appeared in the early 90s as a professional graphics API, but became a dominant force on the PC and a driving force for hardware development. Software implementations of OpenGL were common during this time, although the influence of OpenGL eventually led to widespread hardware support. Over time, parity emerged between the features offered in hardware and those offered in OpenGL. DirectX became popular among Windows game developers during the late 90s. Unlike OpenGL, Microsoft insisted on providing strict one-to-one support of hardware. This approach initially made DirectX less popular as a standalone graphics API, since many GPUs provided their own specific features, which existing OpenGL applications were already able to benefit from, leaving DirectX often one generation behind. (See: Comparison of OpenGL and Direct3D.)

Over time, Microsoft began to work more closely with hardware developers and started to target the releases of DirectX to coincide with those of the supporting graphics hardware. Direct3D 5.0 was the first version of the burgeoning API to gain widespread adoption in the gaming market, and it competed directly with many more hardware-specific, often proprietary graphics libraries, while OpenGL maintained a strong following. Direct3D 7.0 introduced support for hardware-accelerated transform and lighting (T&L). 3D accelerators moved beyond being simple rasterizers to add another significant hardware stage to the 3D rendering pipeline. The NVIDIA GeForce 256 (also known as NV10) was the first card on the market with this capability. Hardware transform and lighting, both already existing features of OpenGL, came to consumer-level hardware in the 90s and set the precedent for later pixel shader and vertex shader units, which were far more flexible and programmable.

With the advent of the OpenGL API and similar functionality in DirectX, GPUs added programmable shading to their capabilities. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. NVIDIA was first to produce a chip capable of programmable shading, the GeForce 3 (code-named NV20). By October 2002, with the introduction of the ATI Radeon 9700 (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement looping and lengthy floating-point math, and were quickly becoming as flexible as CPUs and orders of magnitude faster for image-array operations. Pixel shading is often used for effects like bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.
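
To make "a short program per pixel" concrete, the sketch below runs a tiny bump-mapping-style program over an image: each thread derives a surface normal from a height map and computes a Lambertian brightness for its pixel. Real pixel shaders run inside the graphics pipeline and are written in shading languages such as HLSL or GLSL, so this CUDA kernel is only a stand-in for the idea; the height function and light direction are invented for the example.

    // Illustration of per-pixel programmability: each thread runs a small
    // program for one pixel, turning a height map into a lit (bump-mapped) image.
    #include <cuda_runtime.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    __global__ void bumpShade(const float* height, float* out, int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return;

        // Approximate a surface normal from neighbouring heights.
        float dx = height[y * w + (x + 1)] - height[y * w + (x - 1)];
        float dy = height[(y + 1) * w + x] - height[(y - 1) * w + x];
        float nx = -dx, ny = -dy, nz = 1.0f;
        float inv = rsqrtf(nx * nx + ny * ny + nz * nz);

        // Fixed light direction (an assumption made for the example).
        const float lx = -0.4f, ly = -0.4f, lz = 0.82f;
        out[y * w + x] = fmaxf(0.0f, (nx * lx + ny * ly + nz * lz) * inv);
    }

    int main() {
        const int W = 256, H = 256;
        std::vector<float> height(W * H), shaded(W * H, 0.0f);
        for (int y = 0; y < H; ++y)                       // simple ripple height map
            for (int x = 0; x < W; ++x)
                height[y * W + x] = std::sin(x * 0.1f) * std::cos(y * 0.1f);

        float *dHeight, *dOut;
        cudaMalloc(&dHeight, W * H * sizeof(float));
        cudaMalloc(&dOut, W * H * sizeof(float));
        cudaMemcpy(dHeight, height.data(), W * H * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemset(dOut, 0, W * H * sizeof(float));

        dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
        bumpShade<<<grid, block>>>(dHeight, dOut, W, H);
        cudaMemcpy(shaded.data(), dOut, W * H * sizeof(float), cudaMemcpyDeviceToHost);

        printf("brightness at (128,128): %f\n", shaded[128 * W + 128]);
        cudaFree(dHeight); cudaFree(dOut);
        return 0;
    }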

As the processing power of GPUs has increased, so has their demand for electrical power. High-performance GPUs often consume more energy than current CPUs. See also performance per watt and quiet PC.

Today, parallel GPUs have begun making computational inroads against the CPU, and a subfield of research, dubbed GPGPU for General Purpose Computing on GPU, has found its way into fields as diverse as oil exploration, scientific image processing, linear algebra, 3D reconstruction and even stock options pricing determination. There is increased pressure on GPU manufacturers from "GPGPU users" to improve hardware design, usually focusing on adding more flexibility to the programming model.

Many companies have produced GPUs under a number of brand names. In 2008, Intel, NVIDIA and AMD/ATI were the market share leaders, with 49.4%, 27.8% and 20.6% market share respectively. However, those numbers include Intel's very low-cost, less powerful integrated graphics solutions as GPUs. Discounting those numbers, NVIDIA and AMD control nearly 100% of the market. VIA Technologies/S3 Graphics and Matrox also produce GPUs.

Modern GPUs use most of their transistors to perform calculations related to 3D computer graphics. They were initially used to accelerate the memory-intensive work of texture mapping and rendering polygons, later adding units to accelerate geometric calculations such as the rotation and translation of vertices into different coordinate systems. Recent developments in GPUs include support for programmable shaders which can manipulate vertices and textures with many of the same operations supported by CPUs, oversampling and interpolation techniques to reduce aliasing, and very high-precision color spaces. Because most of these computations involve matrix and vector operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations.

In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). In addition, most GPUs made since 1995 support the YUV color space and hardware overlays (important for digital video playback), and many GPUs made since 2000 support MPEG primitives such as motion compensation and iDCT. Recent graphics cards even decode high-definition video on the card, taking some load off the central processing unit.

The most powerful class of GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP) and can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is unavailable.

A dedicated GPU is not necessarily removable, nor does it necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that dedicated graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.

Technologies such as SLI by NVIDIA and CrossFire by ATI allow multiple GPUs to be used to draw a single image, increasing the processing power available for graphics.

Integrated graphics solutions, or shared graphics solutions, are graphics processors that use a portion of a computer's system RAM rather than dedicated graphics memory. Computers with integrated graphics account for 90% of all PC shipments. These solutions are cheaper to implement than dedicated graphics solutions, but are less capable. Historically, integrated solutions were often considered unfit to play 3D games or run graphically intensive programs such as Adobe Flash. (Examples of such IGPs would be offerings from SiS and VIA circa 2004.) However, today's integrated solutions, such as Intel's GMA X3000 (Intel G965 chipset), AMD's Radeon HD 3200 (AMD 780G chipset) and NVIDIA's GeForce 8200 (NVIDIA nForce 730a), are more than capable of handling 2D graphics from Adobe Flash or low-stress 3D graphics. However, most integrated graphics still struggle with high-end video games. Chips like the NVIDIA 9400M in the new MacBook and MacBook Pro have improved performance, but still lag behind dedicated graphics cards. Modern desktop motherboards often include an integrated graphics solution and have expansion slots available to add a dedicated graphics card later.

As a GPU is extremely memory intensive, an integrated solution may find itself competing with the CPU for the relatively slow system RAM, as it has minimal or no dedicated video memory. System RAM bandwidth may range from 2 GB/s to 12.8 GB/s, while dedicated GPUs enjoy between 10 GB/s and over 100 GB/s of bandwidth, depending on the model.

Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.

Hybrid graphics solutions are a newer class of GPUs that compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations are ATI's HyperMemory and NVIDIA's TurboCache. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They also share memory with the system, but have a small dedicated amount of their own to make up for the high latency of the system RAM; technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this figure refers to how much memory can be shared with the system.

A newer concept is to use a modified form of a stream processor to create a general-purpose graphics processing unit. This concept turns the massive floating-point computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power, as opposed to being hard-wired solely to do graphical operations. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics cards" above) GPU designers, ATI and NVIDIA, are beginning to pursue this new market with an array of applications. Both NVIDIA and ATI have teamed with Stanford University to create a GPU-based client for the Folding@home distributed computing project (for protein folding calculations). In certain circumstances the GPU calculates forty times faster than the conventional CPUs traditionally used in such applications.

More recently, NVIDIA began releasing cards supporting an API extension to the C programming language called CUDA ("Compute Unified Device Architecture"), which allows specified functions from a normal C program to run on the GPU's stream processors. This makes C programs capable of taking advantage of a GPU's ability to operate on large matrices in parallel, while still making use of the CPU where appropriate. CUDA is also the first API to allow CPU-based applications to directly access the resources of a GPU for more general-purpose computing without the limitations of using a graphics API.
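
As a minimal sketch of that model (the kernel, variable names and sizes below are illustrative rather than taken from any particular application, and error checking is omitted), a CUDA program offloads an element-wise vector operation to the GPU while the host CPU handles setup:

    // Minimal CUDA sketch: element-wise vector addition on the GPU.
    // All names and sizes are illustrative; error handling is omitted for brevity.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                          // one million elements
        size_t bytes = n * sizeof(float);

        // Allocate and fill host (CPU) arrays.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device (GPU) arrays and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and use it on the CPU side.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);                   // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The same copy-in, launch-a-kernel-over-many-threads, copy-out pattern underlies most of the GPGPU applications described above.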

Since 2005 there has been interest in using the performance offered by GPUs for evolutionary computation in general, and for accelerating the fitness evaluation in genetic programming in particular. There is a short introduction on pages 90-92 of A Field Guide To Genetic Programming. Most approaches compile linear or tree programs on the host PC and transfer the executable to the GPU to run. Typically the performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU's SIMD architecture. However, substantial acceleration can also be obtained by not compiling the programs, but instead transferring them to the GPU and interpreting them there. Acceleration can then be obtained by interpreting multiple programs simultaneously, running multiple example problems simultaneously, or a combination of both. A modern GPU (e.g. the 8800 GTX or later) can readily interpret hundreds of thousands of very small programs simultaneously.
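
The interpreter approach can be sketched as follows; this is a hedged illustration only, with an invented opcode set and layout rather than anything from the cited literature. Each block interprets one candidate program and each thread within it evaluates that program on one test case, so many programs and many cases run at once:

    // Illustrative CUDA sketch: interpreting many tiny stack-based programs on the GPU.
    // The opcode set, encoding and names are invented for this example.
    #include <cstdio>
    #include <cuda_runtime.h>

    enum Op { OP_PUSH_X = 0, OP_PUSH_1 = 1, OP_ADD = 2, OP_MUL = 3, OP_END = 4 };

    #define MAX_LEN   16   // maximum program length
    #define MAX_STACK  8   // maximum evaluation stack depth

    __global__ void evalPrograms(const int *programs, const float *inputs,
                                 float *outputs, int numCases)
    {
        int prog = blockIdx.x;                              // which program
        int tc   = blockIdx.y * blockDim.x + threadIdx.x;   // which test case
        if (tc >= numCases) return;

        float x = inputs[tc];
        float stack[MAX_STACK];
        int sp = 0;

        for (int pc = 0; pc < MAX_LEN; ++pc) {
            int op = programs[prog * MAX_LEN + pc];
            if (op == OP_END) break;
            switch (op) {
                case OP_PUSH_X: stack[sp++] = x;    break;
                case OP_PUSH_1: stack[sp++] = 1.0f; break;
                case OP_ADD: { float b = stack[--sp]; stack[sp - 1] += b; break; }
                case OP_MUL: { float b = stack[--sp]; stack[sp - 1] *= b; break; }
            }
        }
        outputs[prog * numCases + tc] = (sp > 0) ? stack[sp - 1] : 0.0f;
    }

    int main()
    {
        const int numPrograms = 2, numCases = 1024;
        // Program 0 computes x*x + 1, program 1 computes x + x (postfix encoding).
        int hostProgs[numPrograms * MAX_LEN] = {
            OP_PUSH_X, OP_PUSH_X, OP_MUL, OP_PUSH_1, OP_ADD, OP_END, 0,0,0,0,0,0,0,0,0,0,
            OP_PUSH_X, OP_PUSH_X, OP_ADD, OP_END, 0,0,0,0,0,0,0,0,0,0,0,0 };

        float hostIn[numCases], hostOut[numPrograms * numCases];
        for (int i = 0; i < numCases; ++i) hostIn[i] = (float)i;

        int *dProgs; float *dIn, *dOut;
        cudaMalloc(&dProgs, sizeof(hostProgs));
        cudaMalloc(&dIn,    sizeof(hostIn));
        cudaMalloc(&dOut,   sizeof(hostOut));
        cudaMemcpy(dProgs, hostProgs, sizeof(hostProgs), cudaMemcpyHostToDevice);
        cudaMemcpy(dIn,    hostIn,    sizeof(hostIn),    cudaMemcpyHostToDevice);

        dim3 grid(numPrograms, (numCases + 255) / 256);
        evalPrograms<<<grid, 256>>>(dProgs, dIn, dOut, numCases);
        cudaMemcpy(hostOut, dOut, sizeof(hostOut), cudaMemcpyDeviceToHost);

        printf("program 0 at x=3: %f\n", hostOut[0 * numCases + 3]);  // expect 10
        printf("program 1 at x=3: %f\n", hostOut[1 * numCases + 3]);  // expect 6
        cudaFree(dProgs); cudaFree(dIn); cudaFree(dOut);
        return 0;
    }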




Video card

Diagram of GPU connections

A video card, also known as a graphics accelerator card, display adapter, or graphics card, is an expansion card whose function is to generate and output images to a display. Some video cards offer added functions, such as video capture, TV tuner adapter, MPEG-2 and MPEG-4 decoding, FireWire, light pen, TV output, or the ability to connect multiple monitors.

A common misconception regarding high-end video cards is that they are used strictly for video games. In fact, high-end video cards have a much broader range of uses; for example, they play a very important role for graphic designers and 3D animators, who tend to require high-quality displays as well as faster rendering.

Video cards are not used exclusively in IBM-compatible PCs; they have been used in devices such as the Commodore Amiga (connected via the Zorro II and Zorro III slots), Apple II, Apple Macintosh, Atari Mega ST/TT (attached to the MegaBus or VME interface), Spectravideo SVI-328, MSX, and in video game consoles.

Video hardware can also be integrated on the mainboard, as often happened with early computers; in this configuration it was sometimes referred to as a video controller or graphics controller.

The first IBM PC video card, released with the first IBM PC, was developed by IBM in 1981. The MDA (Monochrome Display Adapter) could only work in text mode, displaying 80 columns by 25 lines on the screen. It had 4 KB of video memory and supported just one color.

Starting with the MDA in 1981, several video cards were released, which are summarized in the attached table.

VGA was widely accepted, which led corporations such as ATI, Cirrus Logic and S3 to build on the standard, improving its resolution and the number of colors it could display. This work developed into the SVGA (Super VGA) standard, which reached 2 MB of video memory and a resolution of 1024x768 in 256-color mode.
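
As a quick sanity check on those figures (assuming one byte per pixel in 256-color mode and ignoring any overhead):

\[
1024 \times 768 \times 1\ \text{byte} = 786{,}432\ \text{bytes} \approx 768\ \text{KB},
\]

so a 1 MB SVGA card could already hold such a frame, and 2 MB left headroom for higher resolutions or deeper color modes.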

In 1995 the first consumer 2D/3D cards were released, developed by Matrox, Creative, S3, ATI and others. These video cards followed the SVGA standard but incorporated 3D functions. In 1997, 3dfx released the Voodoo graphics chip, which was more powerful than other consumer graphics cards, introducing 3D effects such as mip mapping, Z-buffering and anti-aliasing to the consumer market. After this card, a series of 3D video cards were released, such as the Voodoo2 from 3dfx and the TNT and TNT2 from NVIDIA. The bandwidth required by these cards was approaching the limits of the PCI bus, so Intel developed the AGP (Accelerated Graphics Port), which relieved the bottleneck between the microprocessor and the video card. From 1999 until 2002, NVIDIA controlled the video card market (taking over 3dfx) with the GeForce family. The improvements carried out at this time focused on 3D algorithms and graphics processor clock rates. Video memory was also increased to improve data rates; DDR technology was incorporated, and video memory capacity grew from 32 MB with the GeForce to 128 MB with the GeForce 4.

From 2002 onwards, the video card market came to be dominated almost entirely by the competition between ATI and NVIDIA, with their Radeon and GeForce lines respectively, which took around 90% of the independent graphics card market between them, while other manufacturers were forced into much smaller niche markets.

A GPU is a dedicated processor optimized for accelerating graphics. The processor is designed specifically to perform floating-point calculations which are fundamental to 3D graphics rendering. The main attributes of the GPU are the core clock frequency, which typically ranges from 250 to 850 MHz, and the number of pipelines (vertex and fragment shaders), which translate a 3D image characterized by vertices and lines into a 2D image formed by pixels.

The video BIOS or firmware contains the basic program that governs the video card's operations and provides the instructions that allow the computer and software to interact with the card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor and RAM, and other information. It is sometimes possible to change the BIOS (e.g. to unlock factory-locked settings for higher performance), although this is typically only done by video card overclockers and has the potential to irreversibly damage the card.

The memory capacity of most modern video cards ranges from 128 MB to 4.0 GB, though very few cards actually exceed 1.0 GB. Since video memory needs to be accessed by both the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM or SGRAM. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4 and even GDDR5, the last utilized most notably by the ATI Radeon HD 4870. The effective memory clock rate in modern cards is generally between 400 MHz and 3.8 GHz.
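
A rough way to relate these clock rates to memory bandwidth (ignoring protocol overhead; the 256-bit bus width used here is only an assumed example, as bus widths vary by card) is:

\[
\text{bandwidth} \approx \frac{\text{effective memory clock} \times \text{bus width (bits)}}{8},
\quad\text{e.g.}\quad
\frac{2\,\text{GHz} \times 256\ \text{bits}}{8} = 64\ \text{GB/s}.
\]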

Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.

The RAMDAC, or Random Access Memory Digital-to-Analog Converter, converts digital signals to analog signals for use by a computer display that uses analog inputs, such as a CRT display. Depending on the number of bits used and the RAMDAC data transfer rate, the converter will be able to support different computer display refresh rates. With CRT displays, it is best to work above 75 Hz and never under 60 Hz, in order to minimize flicker. (With LCD displays, flicker is not a problem.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD and plasma displays and TVs work in the digital domain and do not require a RAMDAC. There are a few remaining legacy LCD and plasma displays which feature only analog inputs (VGA, component, SCART, etc.); these require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
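
As a rough illustration of why the RAMDAC's rating matters for CRTs (the blanking factor of about 1.3 is an approximation that varies with the video timing standard), the required pixel clock is roughly:

\[
\text{pixel clock} \approx 1600 \times 1200 \times 85\ \text{Hz} \times 1.3 \approx 212\ \text{MHz},
\]

so driving a CRT at 1600x1200 and 85 Hz needs a RAMDAC rated well above 200 MHz, which is why later cards commonly shipped with 350-400 MHz RAMDACs.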

The attached table compares a selection of the features of some of those interfaces.

As the processing power of video cards has increased, so has their demand for electrical power. Current fast video cards tend to consume a great deal of power. While CPU and power supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the video card may be the biggest electricity user in a computer. Although power supplies have increased their output too, the bottleneck is the PCI Express slot itself, which is limited to supplying 75 W. Video cards with a power consumption over 75 watts therefore usually include a combination of six-pin (75 W) and eight-pin (150 W) sockets that connect directly to the power supply to supplement power.
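
Putting those limits together, the maximum power available to a card within the specification is a simple sum of its sources; for example, a card with one six-pin and one eight-pin connector can draw up to about

\[
75\ \text{W (slot)} + 75\ \text{W (six-pin)} + 150\ \text{W (eight-pin)} = 300\ \text{W}.
\]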




Voodoo3

3dfx Voodoo3 box art

Voodoo3 was a series of computer gaming video cards manufactured and designed by 3dfx Interactive. It was the successor to the company's high-end Voodoo 2 line and was based heavily upon the older Voodoo Banshee product. Voodoo3 was announced at COMDEX '98 and arrived on store shelves in 1999. The Voodoo3 line was the first product manufactured by the combined STB Technologies and 3dfx.

The 'Avenger' graphics core was originally conceived immediately after Banshee. Due to mismanagement at 3dfx, the next-generation 'Rampage' project suffered delays that would prove to be fatal to the entire company.

Avenger was pushed to the forefront as it offered a quicker time to market than the already delayed Rampage. Avenger was no more than the Banshee core with a second texture mapping unit (TMU) added - the same TMU which Banshee lost compared to Voodoo2. Avenger was thus merely a Voodoo2 with an integrated 128-bit 2D video accelerator and twice the clock speed.

Much was made of Voodoo3 (as Avenger was christened) and its 16-bit color rendering limitation. The situation was in fact more complex: Voodoo3 operated at full 32-bit precision (8 bits per channel, 16.7 million colors) in its texture mappers and pixel pipeline, unlike previous products from 3dfx and other vendors, which had worked only in 16-bit precision.

To save framebuffer space, the Voodoo3's rendering output was dithered to 16 bit. This offered better quality than running in pure 16-bit mode. However, a controversy arose over what happened next.

The Voodoo3's RAMDAC, which took the rendered frame from the framebuffer and generated the display image, performed a 2x2 box or 4x1 line filter on the dithered image to approximately reconstruct the original 24-bit color render. 3dfx claimed this to be '22-bit' equivalent quality. The controversy began because most people relied on screenshots to compare image quality, yet Voodoo3's framebuffer (where a screenshot is taken from) is not the final result put on screen. Screenshots therefore did not accurately portray Voodoo3's display quality, which was actually much closer to the 24-bit output of NVIDIA's RIVA TNT2 and ATI's Rage 128, while being far faster. (32-bit color cannot be output to a display in full, as the extra 8 bits carry transparency information.)
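
The general idea can be sketched in code; the following is a hedged illustration only (not 3dfx's actual filter, whose exact taps and implementation are not reproduced here), showing a 2x2 box filter that decodes a dithered 16-bit RGB565 framebuffer and averages neighbouring pixels into an 8-bit-per-channel output. Host-side setup and launch are omitted; see the earlier vector-addition sketch for that pattern.

    // Illustrative CUDA sketch: 2x2 box filter over a dithered 16-bit (RGB565) framebuffer,
    // approximating the idea behind Voodoo3's output filtering. Not 3dfx's actual algorithm.
    #include <cstdint>
    #include <cuda_runtime.h>

    __device__ void rgb565ToFloat(uint16_t p, float &r, float &g, float &b)
    {
        // Expand the 5/6/5-bit channels to the 0..255 range.
        r = ((p >> 11) & 0x1F) * (255.0f / 31.0f);
        g = ((p >> 5)  & 0x3F) * (255.0f / 63.0f);
        b = ( p        & 0x1F) * (255.0f / 31.0f);
    }

    // One thread per output pixel: average a 2x2 neighbourhood of decoded 16-bit pixels
    // and emit an 8-bit-per-channel result.
    __global__ void boxFilter2x2(const uint16_t *src, uchar4 *dst, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float r = 0.0f, g = 0.0f, b = 0.0f;
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx) {
                int sx = min(x + dx, width - 1);   // clamp at the right/bottom edges
                int sy = min(y + dy, height - 1);
                float cr, cg, cb;
                rgb565ToFloat(src[sy * width + sx], cr, cg, cb);
                r += cr; g += cg; b += cb;
            }
        dst[y * width + x] = make_uchar4((unsigned char)(r * 0.25f),
                                         (unsigned char)(g * 0.25f),
                                         (unsigned char)(b * 0.25f), 255);
    }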

The internal organisation of Avenger was not complex. Pre-setup notably featured a guardband clipper (eventually part of hardware transformation and lighting), but the pixel pipeline was a conventional single-issue, dual-texture design almost identical to that featured on Voodoo2, though capable of working on 32-bit image data as opposed to Voodoo2's pure 16-bit output. Avenger's other remarkable features included the astonishingly high-performance 128-bit GDI accelerator that debuted in Banshee. This 2D engine at least matched the best offerings available, including much more serious parts from Matrox, in benchmark tests. Finally, Avenger's filtering RAMDAC is notable, as covered above. A hack that forced the filter on outside of 3D rendering was noted to de-block low-bitrate MPEG video.

The Voodoo3 2000, 3000 and 3500 differed mainly in clock frequencies (memory and core were synchronous). The clock rates were 143 MHz, 166 MHz and 183 MHz respectively. This gave the 3000 and 3500 a notable theoretical advantage in multi-textured fillrate over their main rival, the 125 MHz TNT2, but the TNT2 had just under twice the single-textured fillrate of the Voodoo3. While the Voodoo3 consisted of one multi-texturing pipeline, the TNT series consisted of twin single-texturing pipelines. As a result, the Voodoo3 could be at a disadvantage in games not using multiple texturing. The 2000 and 3000 boards generally differed in their support for TV output; the 3500 boards also carried a TV tuner and provided a wide range of video inputs and outputs.
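
Those clock rates translate into theoretical peak rates by simple multiplication (pipelines x TMUs x clock, assuming one pixel per pipeline per clock):

\[
\text{Voodoo3 2000/3000: } 1 \times 2 \times 143/166\ \text{MHz} \approx 286/333\ \text{Mtexels/s},
\qquad
\text{RIVA TNT2: } 2 \times 1 \times 125\ \text{MHz} = 250\ \text{Mtexels/s},
\]

but with a single texture per pixel the Voodoo3's lone pipeline delivers only 143-166 Mpixels/s against the TNT2's 250 Mpixels/s, which is the disadvantage described above.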

Modern (for the time) multi-texturing games such as Quake3 and Unreal Tournament were almost exclusively Voodoo3 territory. Voodoo3's initial competition was the RIVA TNT, which it simply outclassed. NVIDIA's RIVA TNT2 arrived shortly thereafter, and the two traded places frequently in benchmark results, fighting for the top (although the Voodoo3 could not support 32-bit rendering as the TNT2 could). The Unreal series, with its particularly potent Glide support, was always a safe haven for Voodoo3, because its Glide renderer was much superior to its Direct3D renderer. Eventually, once drivers had been refined, Matrox's G400 MAX became more than a match for the Voodoo3 and TNT2.

Voodoo3 remained performance-competitive throughout its life, before eventually being comprehensively outclassed by NVIDIA's GeForce 256 and ATI's Radeon. 3dfx created the ill-fated Voodoo 5 to counter them.

3dfx released a line of business/value-oriented cards based on the Voodoo3 Avenger chipset. With the purchase of STB Technologies, 3dfx had acquired several popular brand names. The Velocity brand had appealed to OEM system builders for years, with boards such as the S3 ViRGE/VX-based STB Velocity 3D and the NVIDIA RIVA 128-based Velocity 128 being used in many OEM systems from companies such as Gateway. The 3dfx Velocity boards came with only 8 MiB of RAM, compared to 16 MiB on a regular Voodoo3. In addition, one of the texture mapping units was disabled, making the board more like a Banshee. Enthusiasts discovered that it was possible to enable the disabled TMU with a simple registry alteration. The board's clock speed was set at 143 MHz, exactly the same as a Voodoo3 2000.

The last set of drivers officially released for the Voodoo3 on Windows 9x was version 1.07.00; for Windows 2000 the latest version is 1.03.00. After 3dfx shut its doors, third-party drivers for Windows 98/98SE and XP were developed by loyal 3dfx customers.




Source: Wikipedia