It's getting harder and harder these days to keep pace with all the new technology being released in the PC world. Just when you thought dual-core processors were the future (yesterday), Intel released its first quad-core processor. AMD, not a company to sit around idly and get slapped around, countered with its "Quad FX" platform, which lets two dual-core processors run together (hmmm... didn't Apple do that already?). nVidia has also released its next-generation nForce motherboard, and ATI's new chipset is right around the corner.
For PC gamers, though, none of this is more significant than the release of nVidia's new G80 line of cards, better known as the 8800GTX and its slightly less ridiculously powerful brother, the 8800GTS.
The cards are significant for a few reasons. First, they are fast. I don't mean "a little better than the 7900GTX" fast; I mean "eats the 7-series cards for breakfast" fast. In current games, a single 8800GTX is actually faster than two 7900GTX cards in SLI. That's really freakin' fast. Even its lesser brother, the 8800GTS, outruns nVidia's dual-PCB 7950GX2. Overclocked, the GTS can perform close to the level of the GTX in current games, though it may begin to show its relative weakness as games start taking fuller advantage of the cards' unified-shader architecture.
But what really makes these cards significant for nVidia is that, for the first time in quite a while (at least a couple of years), nVidia can rightly claim to have image quality that surpasses ATI's. Yes, the G80 can do high-dynamic-range lighting and anti-aliasing at the same time (perhaps ATI's biggest trump card in the last generation). It has a host of new features in the anti-aliasing and anisotropic filtering departments that image-quality nerds will undoubtedly be drooling over. What's more, it does these things more efficiently than ATI's hardware, so high-quality graphics features can be enabled without as much of a performance hit as gamers are used to.
But the cards also represent a bigger trend: the move to DirectX 10. It seems more than a little premature; after all, Windows Vista, which is required for DirectX 10, won't be out until January of next year. And Crysis, the first big DirectX 10 game, isn't expected until March of next year, possibly later.
By the time that game hits shelves, something else will be there too: ATI's R600 generation of cards. Enthusiasts are snapping up nVidia's new cards with much, err, enthusiasm, but unless you're really aching for the upgrade, it might not be the worst idea to hold off until ATI has flexed its muscles. Rumor has it that ATI's new card will indeed surpass nVidia's flagship, returning the performance crown to the Red Team (let's face it, it's not really a fair fight right now). That wouldn't be surprising, since ATI has had more development time. It's no guarantee, of course, just speculation. Then again, the G80 was in development for a long time itself, and its performance is nothing less than a breakthrough. Even if you're not planning to upgrade to Vista and DirectX 10 anytime soon, the G80 cards offer the best performance on the market, bar none.
At street prices of around $650 and $450 respectively, the 8800GTX and 8800GTS are not video cards for the low-end segment. They're the cards you get when you want to play games with all the details cranked up and image-quality settings set to "wow!" They're also power-hungry. The 8800GTX requires an SLI-ready power supply for even a single card, because the behemoth 10.5-inch board (which may not even fit in some mid-tower or desktop cases) has two power connectors. Sheesh.
It's all pretty amazing, though. To put this in perspective, the old Voodoo3 graphics card had 8.1 million transistors. The new 8800GTX has 681 million. My, how far we've come. The development is certainly exciting, although now the onus is on developers to make good use of all 128 unified shaders on the 8800GTX. The next generation is here, but it might be a while before you actually notice it while gaming.