Again 128 strong Californian shooters, but with trimmed spears (512 MB and 256-bit)

Part 1: Theory and architecture

In the previous article, devoted to the release of the new mid-range solution Nvidia GeForce 8800 GT based on the G92 chip, we mentioned that in that card not all of the chip's ALU and TMU execution units are enabled; some of them were waiting in the wings for a graphics card in a different price tier. That moment has now come: Nvidia has announced an updated version of the GeForce 8800 GTS, which bears the same name as the earlier solution based on the G80. The easiest way to tell them apart is by the amount of installed video memory: 512 megabytes, unlike the previous 320 MB and 640 MB options. Accordingly, the model was named GeForce 8800 GTS 512MB.

The new version of the GeForce 8800 GTS is based on the G92 chip, already used in the GeForce 8800 GT, a video card of the so-called upper-mid price level, so we already know its main features and characteristics. Unlike the two GeForce 8800 GT models with recommended prices from $200 to $250 (which, by the way, don't compare well with real prices), the new solution carries a manufacturer's recommended price of $349-399. The peculiarity of the chip as used here is that it supports only a 256-bit memory bus, but has a greater number of enabled universal execution units. Let's take a closer look at Nvidia's new lower high-end solution...

Before reading this material, we recommend that you carefully read the basic theoretical materials of DX Current, DX Next and Longhorn, which describe various aspects of modern hardware graphics accelerators and the architectural features of Nvidia and AMD products.

These materials quite accurately predicted the current situation with video chip architectures, and many assumptions about future solutions were justified. Detailed information about the unified architecture of Nvidia G8x/G9x on the example of previous chips can be found in the following articles:

As we mentioned in the previous article, the G92 chip includes all the advantages of the G8x: a unified shader architecture, full support for DirectX 10, high-quality anisotropic filtering methods and a CSAA anti-aliasing algorithm with up to sixteen samples. Some chip blocks differ slightly from those in the G80, but the main change compared to the G80 is the 65 nm manufacturing process, which has reduced production costs. Let's look at the characteristics of the GPU and of the new video solutions based on it:

Graphic accelerator Geforce 8800 GTS 512MB

  • Chip codename G92
  • 65 nm technology
  • 754 million transistors (more than G80)
  • Unified architecture with an array of common processors for vertex and pixel streaming, and other kinds of data
  • Hardware support for DirectX 10, including shader model Shader Model 4.0, geometry generation and recording intermediate data from shaders (stream output)
  • 256-bit memory bus, four independent 64-bit wide controllers
  • Core clock 650 MHz (Geforce 8800 GTS 512MB)
  • ALUs run at more than double the frequency (1.625 GHz for Geforce 8800 GTS 512MB)
  • 128 scalar floating point ALUs (integer and float formats, FP support for IEEE 754 32-bit precision, MAD+MUL without clock loss)
  • 64 texture address units with support for FP16 and FP32 components in textures
  • 64 bilinear filtering units (as in G84 and G86, there is no free trilinear filtering and no reduced-cost anisotropic filtering)
  • Possibility of dynamic branching in pixel and vertex shaders
  • 4 wide ROPs (16 pixels) with support for anti-aliasing modes up to 16 samples per pixel, including with FP16 or FP32 framebuffer format. Each block consists of an array of flexibly configurable ALUs and is responsible for generating and comparing Z, MSAA, blending. Peak performance of the entire subsystem up to 64 MSAA samples (+ 64 Z) per clock, in colorless mode (Z only) 128 samples per clock
  • Write results to 8 frame buffers simultaneously (MRT)
  • All interfaces (two RAMDAC, two Dual DVI, HDMI, HDTV) are integrated on the chip (unlike the GeForce 8800 GTX/Ultra, where they were placed on an additional NVIO chip)

Geforce 8800 GTS 512MB reference card specifications

  • Core clock 650 MHz
  • Frequency of universal processors 1625 MHz
  • Number of universal processors 128
  • Number of texture units 64, blending units 16
  • Effective memory frequency 1.94 GHz (2*970 MHz)
  • Memory type GDDR3
  • Memory capacity 512 megabytes
  • Memory bandwidth 62.1 gigabytes per second
  • Theoretical maximum fill rate 10.4 gigapixels per second
  • Theoretical texture sampling rate up to 41.6 gigatexels per second
  • Two DVI-I Dual Link connectors, supports output at resolutions up to 2560x1600
  • SLI connector
  • PCI Express 2.0 bus
  • TV Out, HDTV Out, HDCP support
  • Recommended price $349-399
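The headline numbers in the table follow directly from the clocks and unit counts. A quick back-of-the-envelope sketch in Python (values taken from the table above; note that a 970 MHz double-data-rate clock on a 256-bit bus works out to about 62.1 GB/s):

```python
# Theoretical throughput of the GeForce 8800 GTS 512MB from its published specs.
core_mhz = 650          # core (ROP/TMU) clock
mem_mhz = 970           # memory clock (GDDR3, double data rate)
bus_bits = 256          # memory bus width
rops = 16               # pixels written per clock (4 wide ROPs x 4 pixels)
tmus = 64               # texture address units

bandwidth_gb = mem_mhz * 2 * (bus_bits // 8) / 1000   # GB/s
fill_gpix = core_mhz * rops / 1000                    # gigapixels/s
texel_gtex = core_mhz * tmus / 1000                   # gigatexels/s

print(f"bandwidth:  {bandwidth_gb:.1f} GB/s")   # ~62.1 GB/s
print(f"fill rate:  {fill_gpix:.1f} Gp/s")      # 10.4 Gp/s
print(f"texel rate: {texel_gtex:.1f} Gt/s")     # 41.6 Gt/s
```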

As you can see from the specifications, the new version of the GeForce 8800 GTS differs noticeably from the old ones. The number of execution units (ALUs and TMUs) has grown, and the GPU frequency has also increased significantly, including the frequency of the shader units. Despite the narrower memory bus (256-bit versus 320-bit for the older versions), memory bandwidth remained practically the same, since the memory frequency was raised correspondingly. As a result, the new GTS has significantly higher shader execution power and texture fetch speed, while fill rate and memory bandwidth stayed roughly at the old level.

Due to the changed memory bus width, the memory size can no longer be 320 MB or 640 MB, only 256 MB, 512 MB or 1 GB. The first value is too small; it would clearly not be enough for a card of this class. The last is too high: the slight performance gain would hardly justify the increased price of such options (which may well appear in the future). Therefore, Nvidia chose the middle option, equipping the cards with 512 MB. This, as our recent study showed, is the golden mean for modern games, which are very demanding on video memory and use up to 500-600 megabytes of it. We do not tire of repeating that this does not mean all game resources must reside only in the card's local memory; resource management can be left to the API, especially in Direct3D 10 with its video memory virtualization.
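The capacity constraint can be made concrete. Assuming the usual layout of eight 32-bit GDDR3 chips filling a 256-bit bus, total capacity is simply eight times the per-chip density:

```python
# Possible memory sizes on a 256-bit bus populated with eight 32-bit GDDR3 chips.
chips = 256 // 32                      # 8 chips, 32 bits each
for chip_mbit in (256, 512, 1024):     # common GDDR3 densities, in megabits
    total_mb = chips * chip_mbit // 8  # megabits -> megabytes
    print(f"{chip_mbit}-Mbit chips -> {total_mb} MB total")
# 256-Mbit -> 256 MB, 512-Mbit -> 512 MB, 1024-Mbit -> 1024 MB
```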

Architecture

As we wrote in the previous article on the GeForce 8800 GT, the G92 is essentially the previous flagship G80 transferred to a new process technology, with some changes. The new chip has 8 large shader units and 64 texture units, as well as four wide ROPs. Despite all the changes for the better, the transistor count seems too large; the increased complexity of the chip is probably explained by the inclusion of the previously separate NVIO chip, as well as the new-generation video processor. In addition, the transistor count was affected by the more complex TMUs, and the caches may have been enlarged to make the 256-bit memory bus more efficient.

There are very few architectural changes in the G92 chip; we covered all of them in the previous article and won't repeat ourselves here. Everything said in the reviews of previous solutions remains valid; we will only give the main diagram of the G92 chip, now with all 128 universal processors:

Of all the changes in the chip compared to the G80, there are only the reduced number of ROPs and some changes in the TMUs, which we described in our previous article. Let us once again stress that the 64 texture units of the GeForce 8800 GTS 512MB will NOT, in most real applications, be stronger than the 32 units in the GeForce 8800 GTX. With trilinear and/or anisotropic filtering enabled, their performance will be approximately the same, since the two have the same number of texture filtering units. Of course, where unfiltered fetches are used, the performance of G92-based solutions will be higher.

Pure Video HD

One of the expected changes in the G92 was the second-generation integrated video processor, known from the G84 and G86, with enhanced PureVideo HD support. This version of the video processor almost completely offloads the CPU when decoding all common types of video data, including the "heavy" H.264 and VC-1 formats. The G92 uses a new model of programmable PureVideo HD video processor that includes the so-called BSP engine. The new processor supports decoding of H.264, VC-1 and MPEG-2 at resolutions up to 1920x1080 and bitrates up to 30-40 Mbps, performing CABAC and CAVLC decoding in hardware, which allows playing all existing HD DVD and Blu-ray discs even on mid-range single-core PCs. VC-1 decoding is not offloaded as fully as H.264, but it is also supported by the new processor. You can read more about the second-generation video processor in our G84/G86 and G92 reviews, links to which are given at the beginning of the article.

PCI Express 2.0

One of the real innovations in the G92 is support for the PCI Express 2.0 bus. The second version of PCI Express doubles the per-lane signaling rate from 2.5 GT/s to 5 GT/s, so an x16 connector can transfer data at up to 8 GB/s in each direction, as opposed to 4 GB/s for version 1.x. Just as important, PCI Express 2.0 is compatible with PCI Express 1.1: old video cards will work in new motherboards, and new video cards supporting the second version will remain operational in boards without it (provided the external power supply is sufficient, and without the increased interface bandwidth, of course).

The real impact of the higher PCI Express bandwidth on performance was assessed by Nvidia's main competitor in its own materials. According to them, a mid-range graphics card with 256 MB of local memory speeds up by about 10% when moving from PCI Express 1.0 to 2.0 in modern games such as Company of Heroes, Call of Juarez, Lost Planet and World in Conflict, with figures ranging from 5% to 25% depending on the game and test conditions. Naturally, this concerns high resolutions, where the frame buffer and associated buffers occupy most of the local video memory and some resources are stored in system memory.

To provide backward compatibility with existing PCI Express 1.0 and 1.1 solutions, the 2.0 specification supports both 2.5 GT/s and 5 GT/s transfer rates. PCI Express 2.0 backward compatibility allows legacy 2.5 GT/s devices to run at their slower speed in 5 GT/s slots, while a device designed to the 2.0 specification can operate at both rates. In theory compatibility is good, but in practice some combinations of motherboards and expansion cards may cause problems.

Support for external interfaces

Everything here is the same as on the GeForce 8800 GT; there are no differences. On GeForce 8800 boards, the external interfaces (two 400 MHz RAMDACs, two Dual Link DVI (or LVDS), HDTV-Out) were handled by an additional NVIO chip placed outside the main one; in this case, support for all these interfaces is built into the G92 itself.

GeForce 8800 GTS 512MB video cards usually carry two Dual Link DVI outputs with HDCP support. As for HDMI, support for this connector is present and can be implemented by manufacturers on specially designed cards. Although an HDMI connector on a video card is entirely optional, it can be successfully replaced by a DVI-to-HDMI adapter, which is bundled with most modern video cards.

The 8800 GTX was a milestone in the history of 3D graphics. It was the first card to support DirectX 10 and its associated unified shader model, which greatly improved image quality over previous generations, and in terms of performance it remained unrivaled for a long time. Unfortunately, all this power came at a cost. Given the expected competition from ATI and the release of cheaper mid-range models based on the same technology, the GTX was considered a card aimed only at those enthusiasts who wanted to be at the forefront of modern graphics processing advances.

Model history

To remedy this situation, nVidia released a GTS 640MB card of the same line a month later, and a couple of months later - the GTS 320MB. Both offered performance close to the GTX, but at a much more reasonable price. However, at around $300-$350, they were still too expensive for gamers on a budget - these were not mid-range models, but high-end ones. In hindsight, the GTS were worth every cent invested in them, as what followed through the rest of 2007 was one disappointment after another.

The first cards to appear were the supposedly mid-range 8600 GTS and GT, heavily stripped-down versions of the 8800 series. They were smaller and quieter and had new HD video processing capabilities, but their performance fell below expectations, so buying them made little practical sense, even though they were relatively inexpensive. The alternative, ATI's Radeon HD 2900 XT, matched the GTS 640MB in performance, but consumed a great deal of power under load and was too expensive for the mid-range. Finally, ATI tried to field a DX10 series in the form of the HD 2600 XT and Pro, which had even better multimedia capabilities than nVidia's 8600s, but lacked the power to deserve the attention of gamers who had already bought previous-generation video cards such as the X1950 Pro or 7900 GS.

And now, a year after the 8800 GTX went on sale, the release of the 8800 GT brought the first real refresh of the DirectX 10 line-up. It took a long time, but the nVidia GeForce 8800 GT, with GTS-class characteristics and a price in the $200-250 range, finally reached the middle price segment everyone had been waiting for. But what made the card so special?

More doesn't mean better

As technology advances and the number of transistors in CPUs and GPUs grows, there is a natural need to reduce their size. This leads to lower power consumption, which in turn means less heat. More dies fit on one silicon wafer, which reduces their cost and in theory lowers the price floor of hardware built from them. However, changing production processes poses high risks for a business, so it is customary to release a completely new architecture on an existing, proven process, as was the case with the 8800 GTX and HD 2900 XT. Once the architecture has matured, it is moved to a less power-hungry process, and the next new design is later based on that in turn.

The 8800 series followed this path: the G80 cores of the GTX and GTS were manufactured on a 90nm process, while the nVidia GeForce 8800 GT is based on the G92 chip, already made on a 65nm process. While the change doesn't sound like much, it equates to a 34% reduction in die size, or a 34% increase in the number of dies per silicon wafer. As a result, electronic components become smaller, cheaper and more economical, which is an extremely positive change. However, the G92 core is not just smaller; there is something else to it.

First of all, the VP2 video processing engine used in the 8600 series has now appeared in the GeForce 8800 GT 512MB, so it is now possible to enjoy HD video without the system slowing down. The final display engine, which on the 8800 GTX is handled by a separate chip, is also integrated into the G92. As a result, the chip has 73 million more transistors than the 8800 GTX (754 million versus 681 million), even though it has fewer stream processors, less texture processing power and fewer ROPs than the more powerful model.

The new version of nVidia's transparency anti-aliasing algorithm, added to the GeForce 8800 GT's arsenal, is designed to noticeably improve image quality while maintaining high performance. Beyond that, the new processor adds no new graphics capabilities.

The manufacturer apparently thought long and hard about which functionality of the previous 8800-series cards was underused and could be cut, and which should stay. The result is a GPU design that sits somewhere between the GTX and GTS in performance, but with GTS functionality. As a result, the 8800 GTS card became completely redundant. The 8800 Ultra and GTX still provide more graphics power, but with fewer features, at a much higher price and with higher power consumption. Against this background, the GeForce 8800 GT 512 MB took a really strong position.

GPU architecture

The GeForce 8800 GT uses the same unified architecture that nVidia introduced with the G80 processor. The G92 consists of 754 million transistors and is manufactured on TSMC's 65nm process. The die size is about 330 mm², and although it is noticeably smaller than the G80, it is still far from a small piece of silicon. There are a total of 112 scalar stream cores, running at 1500 MHz in the standard configuration. They are grouped into 7 clusters, each with 16 stream processors that share 8 texture address units, 8 texture filtering sections and their own independent cache. This is the same configuration nVidia used in the G84 and G86 chips at the shader-cluster level, but the G92 is a much more sophisticated GPU than either of them.

Each of the shader processors can issue a MADD and a MUL instruction in one cycle, and the units, combined into a single structure, can process all shader operations and calculations in both integer and floating-point form. Curiously, despite the stream processors being identical to the G80's (apart from their number and frequency), Nvidia rates the chip at up to 336 GFLOPS, whereas counting both the MADD and the MUL would give 504 GFLOPS. As it turns out, the manufacturer took a conservative approach to quoting computing power and did not count the MUL in the overall figure. At briefings and round tables, nVidia representatives said that certain architectural improvements should allow the chip to approach its theoretical peak throughput. In particular, the task manager, which distributes and balances the data coming through the pipeline, has been improved. NVidia has announced that it will support double precision in future GPUs, but this chip only emulates double precision, owing to the need to follow IEEE standards.
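The two GFLOPS figures above are easy to reproduce: counting only the MADD (2 flops per clock per processor) gives Nvidia's conservative number, while adding the co-issued MUL (3 flops per clock) gives the theoretical peak:

```python
# GeForce 8800 GT shader throughput: 112 stream processors at 1.5 GHz.
sps = 112
shader_ghz = 1.5

madd_only = sps * shader_ghz * 2        # MADD = 2 flops/clock
madd_plus_mul = sps * shader_ghz * 3    # MADD + co-issued MUL = 3 flops/clock

print(madd_only)       # 336.0 GFLOPS (Nvidia's conservative figure)
print(madd_plus_mul)   # 504.0 GFLOPS (theoretical peak with the extra MUL)
```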

ROP architecture

The ROP structure of the G92 is similar to that of any other GeForce 8-series GPU: each section has its own second-level cache and is assigned to a 64-bit memory channel. There are 4 ROP sections in total, giving a 256-bit memory interface. Each section can process 4 pixels per clock when all four components (RGB color and Z) are written. If only the Z component is present, each section can process 32 pixels per clock.
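A quick sketch of the per-clock arithmetic described above (consistent with the 128 Z-only samples per clock quoted in the specification list earlier):

```python
# G92 ROP subsystem throughput per clock.
sections = 4              # ROP partitions, each tied to a 64-bit memory channel
bus_bits = sections * 64
color_z_per_section = 4   # pixels/clock with full color + Z
z_only_per_section = 32   # pixels/clock in Z-only passes (e.g. shadow maps)

print(bus_bits)                         # 256-bit total memory interface
print(sections * color_z_per_section)   # 16 pixels/clock with color
print(sections * z_only_per_section)    # 128 Z-only pixels/clock
```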

The ROPs support all the anti-aliasing formats used in previous GeForce 8-series GPUs. Since the chip has a 256-bit GDDR interface, Nvidia made some improvements to ROP compression efficiency to reduce bandwidth and graphics memory usage when anti-aliasing is enabled at resolutions of 1600x1200 and 1920x1200.

As in the original G80 architecture, the texture address/filter units and ROP sections operate at a different clock rate from the stream processors; Nvidia calls this the core speed. In the case of the GeForce 8800 GT, the card is specified at 600 MHz. Theoretically, this gives a fill rate of 9.6 gigapixels per second (Gp/s) and a bilinear texturing rate of 33.6 gigatexels per second. According to users, the clock frequency is quite low, and the increased transistor count does not guarantee that functionality was added rather than merely preserved. When the company moved from 110nm to 90nm technology, it reduced the transistor count by 10% through optimization, so it would not be surprising if the chip contained at least 16 more stream processors, disabled in this product.
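The two rates quoted above follow directly from the 600 MHz core clock and the unit counts (16 pixels per clock across the ROP sections; 56 texture filtering units, i.e. 7 clusters of 8):

```python
# GeForce 8800 GT: fill and bilinear texturing rates at the 600 MHz core clock.
core_mhz = 600
rops = 16               # pixels per clock across the 4 ROP sections
filter_units = 56       # 7 clusters x 8 texture filtering units

print(core_mhz * rops / 1000)          # 9.6 gigapixels/s
print(core_mhz * filter_units / 1000)  # 33.6 gigatexels/s
```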

Design

The card's reference design calls for core, shader and memory clocks of 600 MHz, 1500 MHz and 1800 MHz, respectively. The 8800 GT features a single-slot cooling system, and a glossy black metal casing almost completely hides its front side. The 50mm fan matches the design of top-end radial coolers and performs its duties very quietly in all operating modes. Whether the computer is idle, showing only the Windows desktop, or running your favorite game, it is barely audible against the other noise sources in a PC case. It is worth noting, though, that the first time you switch on a computer with the new video card you may get a fright: the fan howls while the GPU is loaded to full capacity, but the noise subsides before the desktop appears.

The metal front panel attracts fingerprints, but this is of little concern, since once installed it will be impossible to see them. According to user feedback, the cover helps prevent accidental damage to components such as capacitors on the front of the card. The green printed circuit board, combined with the black heatsink faceplate, makes the 8800 GT stand out. The model is marked with the GeForce logo along the top edge of the bezel. Mark Rein, the company's vice president, told reporters that this involved additional costs but was necessary to help users figure out which graphics card is the heart of the system at LAN parties.

Under the heatsink are eight 512-megabit graphics memory chips, for a total of 512 MB of data storage. This is GDDR3 DRAM with an effective frequency of up to 2000 MHz. The GPU supports both GDDR3 and GDDR4, but this feature was never used in this series.

Heating and power consumption

The nVidia GeForce 8800 GT video card is very attractive. Its design is simply pleasing to the eye and, given the G92's internal changes, it feels like a mature piece of engineering.

More important than aesthetics, however, according to users, is the fact that the manufacturer managed to fit all this power into a single-slot device. This is not just a welcome change; it is a pleasant surprise. The characteristics of the GeForce 8800 GT are such that one would expect a two-slot cooler. Nvidia could go for such a thin design thanks to the new manufacturing process, which keeps heat down to levels a low-profile fan can handle. In fact, temperatures have dropped so much that even the relatively small cooler doesn't have to spin very fast, leaving the card virtually silent even during intensive gaming. The board temperature does rise significantly, however, so a decent amount of airflow is needed to prevent overheating. Thanks to the smaller process technology, the GeForce 8800 GT 512 MB consumes only 105 watts even at full load, so a single six-pin power connector suffices. This is another nice change.

The card was the first to support PCIe 2.0, which allows it to receive up to 150 watts of power. However, the company decided that for backward compatibility it was much easier to limit the power drawn through the slot to 75 watts. This means that regardless of whether the card is plugged into a PCIe 1.1 or PCIe 2.0 motherboard, only 75 watts come through the slot, and the rest of the power comes through the auxiliary connector.
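A rough sketch of the resulting power budget (the 75 W slot limit and 105 W draw are from the text above; treating the 6-pin connector as a 75 W source is an assumption for illustration):

```python
# Hypothetical power budget for the 8800 GT, per the description above.
slot_w = 75      # maximum drawn through the PCIe slot (self-imposed limit)
six_pin_w = 75   # assumed available from the auxiliary 6-pin connector
board_w = 105    # quoted full-load consumption

available = slot_w + six_pin_w
print(available)            # 150 W available in total
print(available - board_w)  # 45 W of headroom at full load
```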

Processor VP2

Speaking of HDCP capability, it is worth mentioning the new-generation video processor that nVidia has incorporated into the G92. The VP2 is a single programmable SIMD processor flexible enough to be extended in the future. It handles the very intensive processing of H.264-encoded video, shifting the load from the CPU to the GPU. In addition to the VP2, there is also an H.264 stream processor and an AES128 decoder. The former is specifically designed to accelerate the CAVLC and CABAC coding schemes, tasks that are very CPU-intensive in a pure software environment. The AES128 engine enables faster processing of the encryption required by content protection schemes such as AACS and Media Foundation, both of which require video data (compressed and uncompressed) to be encrypted when transferred over buses like PCI Express.

Improving Image Quality

Nvidia has been hard at work improving the transparency anti-aliasing technique that first appeared in the GeForce 7 series. Multisampling costs the card little performance, but in most cases it is not very effective. Supersampling, on the other hand, provides much better and more consistent image quality, but at the expense of speed: it is an incredibly resource-intensive anti-aliasing technique.

The drivers supplied with the video card contain a new multisampling algorithm. The differences are quite significant, but the final judgment is up to the user. The good news is that since this is a driver-level change, any hardware that supports transparency anti-aliasing can use the new algorithm, including cards as far back as the GeForce 7800 GTX. To activate the new mode, you only need to download the latest drivers from the manufacturer's website.

According to user reviews, updating the driver for the GeForce 8800 GT is not difficult. Although the graphics card's web page only links to files for Windows Vista and XP, a search from the main page will find what you need. For the nVidia GeForce 8800 GT under Windows 7-10, the GeForce 342.01 driver (292 MB) is the one to install.

Connectivity

The output connectors of the nVidia GeForce 8800 GT are quite standard: two dual-link DVI-I ports with HDCP support, suitable for both analog and digital monitor and TV interfaces, and a 7-pin analog video port providing the usual composite and component output. The DVI connectors can be used with DVI-to-VGA and DVI-to-HDMI adapters, so any connection option is possible. However, nVidia still leaves audio support over HDMI optional for third parties: there is no audio processor inside the VP2, so audio is routed through the board's S/PDIF connector. This is disappointing, as the thin and quiet card would be perfect for gaming home theaters.

The GeForce 8800 GT is the first graphics card compatible with PCI Express 2.0, which means the bus can move data at up to 16 GB/s in total, twice as fast as the previous standard. While this can be useful for workstations and intensive computing, it won't matter much to the casual gamer. In any case, the standard is fully compatible with all previous versions of PCIe, so there is nothing to worry about.

nVidia partner companies offer overclocked versions of the GeForce 8800 GT as well as game packs.

BioShock by 2K Games

BioShock was one of the best games that existed at the time the graphics card was released. It's a "genetically engineered" first-person shooter set in the underwater city of Rapture, created on the bottom of the Atlantic Ocean by a man named Andrew Ryan as part of his 1930s art deco dream come true. 2K Boston and 2K Australia have licensed and used Epic Games' Unreal Engine 3 to achieve the best effect, and also applied some DirectX 10 features. All of this is controlled through an option in the game's graphics control panel.

BioShock's setting forced developers to use a lot of water shaders. DirectX 10 technology helped to improve the ripples when characters move through the water, and pixel shaders were massively used to create wet objects and surfaces. In addition, the DX10 version of the game uses a depth buffer to create "soft" particle effects where they interact with their environment and look more realistic.

The nVidia GeForce 8800 GT, whose characteristics let it show itself at its best in BioShock, is only slightly behind the GTX at 1680x1050. As the resolution increases, the gap between the cards grows, but not by a large margin. The reason is probably that the game did not support transparency anti-aliasing, so the 8800 GTX's massive memory bandwidth advantage becomes moot.

According to user reviews, the 8800 GT also performs quite well with SLI enabled. Although its capabilities do not quite match the GTX's, the Radeon HD 2900 XT with 512 MB of memory in a CrossFire configuration does compete with it. Perhaps even more interesting is the fact that at 1920x1200 the 8800 GT is almost as fast as the 640MB GTS!

Crysis Single Player Demo by Electronic Arts

This game will literally make your graphics card cry! Its big surprise was the graphics, which surpassed everything seen in computer games before it. The built-in GPU benchmark runs much faster than actual gameplay: about 25 fps in the performance test is enough for a comfortable frame rate, and unlike other games, a low frame rate still looks fairly smooth here.

The nVidia GeForce 8800 GT, whose performance in Crysis allows sufficient frame rates at 1680x1050 with high detail under DirectX 10, is not as fast as the GTX, but is noticeably faster than the Radeon HD 2900 XT and 8800 GTS 640MB. The GTS 320MB struggles with Crysis and needs most settings lowered to medium to get above 25 fps, even at 1280x1024.

Performance

As expected, the 8800 GTX remains unbeatable, but overall the GeForce 8800 GT outperforms the GTS in most tests. Only at the highest resolutions and anti-aliasing settings does the GT's reduced memory bandwidth give way and the GTS pull ahead at times. However, considering the price difference and its other advantages, the 8800 GT is the better buy anyway. Conversely, the 8800 GTX versus 8800 GT comparison confirms every time why the former costs so much: while other models slow down significantly as the pixel count rises and transparency anti-aliasing and anisotropic filtering are enabled, the 8800 GTX continues to show excellent results. In particular, Team Fortress 2 at 1920x1200 with 8xAA and 16xAF runs twice as fast on the 8800 GTX as on the GT. For the most part, though, the GeForce 8800 GT performs well, if you disregard the incredibly low frame rates in Crysis.

Conclusion

Although the performance of the GeForce 8800 GT does not reach that of the series leader, the 8800 GTX, it provides similar performance at a fraction of the price, and also includes many additional features. Add the small size and quiet operation, and the model seems simply phenomenal.

Update: we decided to supplement the initial review with additional theoretical information, comparison tables, as well as test results from the American THG laboratory, where the "younger" GeForce 8800 GTS card also participated. In the updated article you will also find quality tests.

GeForce 8800 GTX is head and shoulders above the competition.

You've probably heard of DirectX 10 and the wonders the new API promises over DX9. On the Internet, you can find screenshots of games that are still in development. But until now, there were no video cards with DX10 support on the market. And nVidia was the first to fix this shortcoming. Let's welcome the release of DirectX 10 graphics cards in the form of nVidia GeForce 8800 GTX and 8800 GTS!

A single unified architecture will be able to squeeze more out of shader units, as they can now be used more efficiently than with a fixed layout. A new era in computer graphics is opened by the GeForce 8800 GTX with 128 unified shader units and the GeForce 8800 GTS with 96 such units. The days of pixel pipelines are finally over. But let's take a closer look at the new cards.

The package is marked G80. The new GPU promises to deliver twice the performance of the GeForce 7900 GTX (G71). 681 million transistors translate into a huge die area, but when we asked about it, nVidia CEO Jen-Hsun Huang replied: "If my engineers said they could double the performance by doubling the die area, I wouldn't hesitate!"

Experience shows that doubling the area does not double the performance, but NVIDIA seems to have found the right balance between technological advances and silicon implementation.

The GeForce 8800 GTX and 8800 GTS fully comply with the DX10 and Shader Model 4.0 standard, various data storage and transmission standards, support geometry shaders and stream output (Stream Out). How did nVidia implement all this?

For starters, Nvidia has moved away from the fixed design that the industry has been using for the past 20 years in favor of a unified shader core.


Previously, we showed similar slides illustrating the trend of increasing the power of pixel shaders. Nvidia is well aware of this trend and is moving towards balancing computing needs by implementing unified shaders through which data flows. This gives maximum efficiency and productivity.

NVIDIA states: "The GeForce 8800 design team was well aware that high-end 3D DirectX 10 games would require a lot of hardware power to compute shaders. While DirectX 10 mandates a unified instruction set, the standard does not require a unified GPU shader design. But GeForce 8800 engineers believed that it is the unified shader architecture of the GPU that will effectively distribute the load of DirectX 10 shader programs, improving the architectural efficiency of the GPU and properly distributing the available power."

GeForce 8800 GTX | 128 SIMD stream processors



The processor core runs at 575 MHz in the GeForce 8800 GTX and at 500 MHz in the GeForce 8800 GTS. While the rest of the core runs at 575 MHz (or 500 MHz), the shader core uses its own clock generator: 1350 MHz on the GeForce 8800 GTX and 1200 MHz on the 8800 GTS.

Each shader element in the core is called a streaming processor. The GeForce 8800 GTX uses 16 blocks of eight such elements, giving 128 stream processors in total. As with the ATi R580 and R580+ designs, where blocks of pixel shader units can be added or removed, Nvidia plans to scale the number of blocks in future products; that is exactly what we see with the 96 stream processors of the GeForce 8800 GTS.



Click on the picture to enlarge.

GeForce 8800 GTX | specification comparison table

Nvidia was previously unable to run full-screen anti-aliasing together with HDR lighting, but that is history now. Each raster operations unit (ROP) supports framebuffer blending, so multisample anti-aliasing can be used with both FP16 and FP32 render targets. Under D3D10, the ROPs support color and Z acceleration, up to eight multiple render targets, and new compression technologies.

The GeForce 8800 GTX can sample 64 texels per clock, which at 575 MHz gives 36.8 billion texels per second (GeForce 8800 GTS: 32 billion/s). The GeForce 8800 GTX has 24 raster operations units (ROPs), and at 575 MHz the peak pixel fill rate is 13.8 gigapixels/s. The GeForce 8800 GTS version has 20 ROPs and a peak fill rate of 10 gigapixels/s at 500 MHz.
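The fill-rate figures above are simple products of unit counts and clock speed. A quick sketch (unit counts and clocks are the review's numbers; the formulas are the standard ones):

```python
# Peak fill-rate arithmetic behind the figures quoted above.
def texel_rate_gps(texels_per_clock, core_mhz):
    """Billions of texels per second = texture samples/clock x clock (MHz) / 1000."""
    return texels_per_clock * core_mhz / 1000

def pixel_rate_gps(rops, core_mhz):
    """Gigapixels per second = ROPs x clock (MHz) / 1000."""
    return rops * core_mhz / 1000

print(texel_rate_gps(64, 575))  # 8800 GTX: 36.8
print(pixel_rate_gps(24, 575))  # 8800 GTX: 13.8
print(pixel_rate_gps(20, 500))  # 8800 GTS: 10.0
```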

nVidia GeForce Specifications
8800 GTX | 8800 GTS | 7950 GX2 | 7900 GTX | 7800 GTX 512 | 7800 GTX
Process technology (nm) 90 90 90 90 110 110
Core G80 G80 G71 G71 G70 G70
Number of GPUs 1 1 2 1 1 1
Transistors per core (millions) 681 681 278 278 302 302
Shader/vertex unit clock (MHz) 1350 1200 500 700 550 470
Core clock (MHz) 575 500 500 650 550 430
Memory clock (MHz) 900 800 600 800 850 600
Effective memory clock (MHz) 1800 1600 1200 1600 1700 1200
Vertex units (unified stream processors on G80) 128 96 16 8 8 8
Pixel units (unified stream processors on G80) 128 96 48 24 24 24
ROPs 24 20 32 16 16 16
Memory bus width (bits) 384 320 256 256 256 256
Memory (MB) 768 640 512 512 512 256
Memory bandwidth (GB/s) 86.4 64 38.4 51.2 54.4 38.4
Vertices/s (millions) 10,800 7,200 2,000 1,400 1,100 940
Pixel fill rate (ROPs x clock, Gpixels/s) 13.8 10 16 10.4 8.8 6.88
Texture fill rate (pixel pipelines x clock, Gtexels/s) 36.8 32 24 15.6 13.2 10.32
RAMDAC (MHz) 400 400 400 400 400 400
Bus PCI Express PCI Express PCI Express PCI Express PCI Express PCI Express

Pay attention to the memory bus width. As the diagram on the previous page shows, the GeForce 8800 GTX GPU uses six memory partitions, each with a 64-bit interface, for a total width of 384 bits. 768 MB of GDDR3 memory is connected to this subsystem, which is built on a high-speed crossbar switch, as in the GeForce 7x GPUs. The crossbar supports DDR1, DDR2, DDR3, GDDR3 and GDDR4 memory.

The GeForce 8800 GTX uses GDDR3 memory at 900 MHz by default (the GTS version runs at 800 MHz). With a width of 384 bits (48 bytes) and a frequency of 900 MHz (1800 MHz effective DDR frequency), the throughput is a whopping 86.4 GB/s. And 768 MB of memory allows you to store much more complex models and textures, with higher resolution and quality.
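The bandwidth figure follows directly from bus width and effective memory clock. A small sketch using the numbers above (the GTS line assumes its 320-bit bus and 800 MHz / 1600 MHz effective memory quoted in this article):

```python
# Memory bandwidth = (bus width in bytes) x (effective clock in MHz) / 1000 GB/s.
def bandwidth_gb_s(bus_bits, effective_mhz):
    # 384 bits = 48 bytes per transfer; MHz x bytes / 1000 -> GB/s
    return (bus_bits // 8) * effective_mhz / 1000

print(bandwidth_gb_s(384, 1800))  # 8800 GTX: 86.4
print(bandwidth_gb_s(320, 1600))  # 8800 GTS: 64.0
```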

GeForce 8800 GTX | Nvidia knocks out ATI


Click on the picture to enlarge.

We have good news and bad news. The good news: the cards are faster than anything else, very quiet, and packed with so much interesting technology that there is not even software to exploit it yet. The bad news: they are not on sale yet. Well, there is always something wrong with new hardware. Sparkle sells such cards for 635 euros. We are already getting used to such prices for top-end hardware.

The board is 27 centimeters long, so it will not fit in every case. If your computer has hard drives mounted directly behind the PCIe slots, installing a GeForce 8800 GTX will most likely be difficult. Of course, the drives can always be moved to a 5.25-inch bay with an adapter, but you must admit the problem itself is hardly pleasant.


Click on the picture to enlarge.

The technical implementation is no laughing matter: this is the best piece of hardware you can give your PC for New Year. Why has the GeForce 8800 GTX drawn so much attention from the Internet community? It's simple: record performance. In Half-Life 2: Episode One, the frame rate of the GeForce 8800 GTX at 1600x1200 is a full 60 percent higher than that of the top Radeon X1000 family cards (X1900 XTX and X1950 XTX).

Oblivion runs incredibly smoothly at all levels. More precisely, with HDR rendering enabled in Oblivion, the speed is at least 30 fps. In Titan Quest, you can't see less than 55 fps. Sometimes you wonder if the benchmark is hanging, or maybe something happened to the levels. Enabling full-screen anti-aliasing and anisotropic filtering does not affect the GeForce 8800 at all.

This is the fastest video card of all models released in 2006. Only the Radeon X1950 XTX in CrossFire mode catches up with the 8800 GTX in places. So if you have been asking which card runs Gothic 3, Dark Messiah and Oblivion without slowdowns, here is the answer: the GeForce 8800 GTX.

GeForce 8800 GTX | Two power sockets

Power is supplied through two sockets on top of the board. Both are necessary: if you unplug the cable from the left one, 3D performance drops sharply. Want to drive your neighbors crazy? Unplug the right one, and the board emits a piercing squeal your car alarm would envy, while refusing to start at all. Note that nVidia recommends pairing the GeForce 8800 GTX with a power supply of at least 450 watts, capable of delivering 30 A on the 12 V line.


On the GeForce 8800 GTX, both power sockets must be connected. Click on the picture to enlarge.

The two power sockets are explained simply. According to the PCI Express specification, a single PCIe slot can deliver no more than 75 watts. Our test system consumes about 180 watts in 2D mode alone, a whopping 40 watts more than with a Radeon X1900 XTX or X1950 XTX. In 3D mode the system draws about 309 watts, while the same Radeon X1900/1950 XTX configurations consume 285 to 315 watts. What the GeForce 8800 needs so much energy for in plain Windows, we do not understand.
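The power budget behind the two plugs can be sketched with the standard PCIe figures: 75 W from the slot plus 75 W per 6-pin auxiliary plug. These ratings come from the PCI Express specification, not from the review's own measurements:

```python
# Rough power budget for a PCIe graphics card (spec ratings, assumptions noted).
SLOT_W = 75   # max draw through the PCIe slot itself
PLUG_W = 75   # rating of one 6-pin PCIe auxiliary plug

def available_power(n_plugs):
    """Total watts the card may legally draw with n auxiliary plugs connected."""
    return SLOT_W + n_plugs * PLUG_W

print(available_power(1))  # 150 W: not enough headroom for this board
print(available_power(2))  # 225 W: why the 8800 GTX needs both sockets
```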

Two more connectors are reserved for SLI mode. According to nVidia's documentation, SLI requires only one plug; the second is not used yet. Theoretically, with two connectors you could link more than two boards in a multi-board system. The second connector may also be tied to the growing popularity of hardware physics calculation: perhaps another video card will be connected through it to handle the physics in a game engine. Or maybe it points to Quad SLI on four boards, or something similar.


An additional connector is now reserved for SLI. But with the current version of the driver, you can use only one. Click on the picture to enlarge.

GeForce 8800 GTX | Quiet cooling system

The GeForce 8800 is equipped with a very quiet 80 mm blower cooler. Like on the Radeon X1950 XTX, it sits at the very end of the board to push cool air across the entire surface of the GeForce 8800 and out. A special grille at the end of the board releases hot air not only outside, through the opening in the second slot, but also downward, straight into the case. Overall the system is simple, but there are some questionable points.


Warm air is expelled through the hole in the second slot, but some of it gets back into the case through the grille on the side of the GeForce 8800 GTX. Click on the picture to enlarge.

If the PCIe slots in your computer are close together and two boards in SLI end up with only a small gap between them, the temperature in that spot will be quite high, since the bottom card is additionally heated by the top one through that same side grille of the cooling system. What happens with three cards is better not to think about: you get an excellent household space heater. In cold weather you could work next to an open window.

When the board is installed alone, the cooling system is impressive and works to the fullest. Like the GeForce 7900 GTX boards, it also works quietly. During the entire six-hour test run, at a constant high load, the board was not heard even once. Even if the board is fully loaded with work, the cooler at medium speeds will cope with heat removal. If you put your ear close to the back of the computer, you will only hear a slight noise, a kind of soft rustling.


The 80mm cooler is quiet and never goes full blast. The board's cooling system occupies two slots. Click on the picture to enlarge.

The special ForceWare 96.94 driver that nVidia prepared for the GeForce 8800 GTX does not report temperature monitoring data. Before this release you could choose between the classic and new interfaces, but the 96.94 press build contains only the new settings panel. If you try to open the frequency and temperature settings, the driver sends you to the nVidia website to download the nTune utility, where these functions are configured. We downloaded the 30 MB archive and installed it. On first start, we got a complete freeze of the computer and Windows.

After installing nTune, selecting the frequency and temperature adjustment in the settings panel opens a special information page listing the motherboard settings. You will not find any actual settings there, that is, any information about frequency and temperature. So we measured temperatures the classical way, with an infrared thermometer. Under full load the card reached 71 degrees Celsius; in 2D mode it stayed between 52 and 54 degrees.

We can only hope that nVidia will release a standard version of ForceWare for the GeForce 8800. The classic configuration interface is sometimes more convenient, it displays temperature information, and coolbits can be used to adjust frequencies. The new driver paired with nTune weighs in at about 100 megabytes and is split across a considerable number of tabs and windows. Working with it is not always convenient.


The GeForce 8800 GTX chip has as many as 681 million transistors, it is produced using 90 nanometer technology at the TSMC factory. Click on the picture to enlarge.

The G80 chip of the GeForce 8800 GTX contains 681 million transistors, twice as many as the Conroe core of Intel Core 2 Duo processors or the GeForce 7 chip. The GPU runs at 575 MHz, and the 384-bit memory interface serves 768 megabytes of high-speed GDDR3 clocked at 900 MHz.

For comparison: GeForce 7900 GTX memory runs at 800 MHz, and GeForce 7950 GT memory at 700 MHz. The Radeon X1950 XTX uses 1000 MHz GDDR4 memory. The GeForce 8800 GTS has a 500 MHz core and 640 MB of memory clocked at 800 MHz.

The test results show that enabling full-screen anti-aliasing and anisotropic filtering finally has almost no effect on performance. In resource-hungry games like Oblivion you used to have to keep an eye on these settings; now everything can be turned up to the maximum. Previous-generation nVidia cards ran such games smoothly only at resolutions up to 1024x768, with HDR rendering via Shader Model 3.0 pixel shaders consuming enormous resources. The new cards are so powerful that 4xAA and 8xAF remain perfectly playable at up to 1600x1200. The G80 chip supports anti-aliasing settings up to 16x and anisotropic filtering up to 16x.


GeForce 8800 GTX supports 16x anti-aliasing and anisotropic filtering.

Compared to single ATi cards, the GeForce 8800 GTX has no competitors. The new nVidia can finally combine HDR rendering via Shader Model 3.0 with anti-aliasing. HDR rendering produces extreme reflections and glare, simulating the blinding effect of stepping from a dark room into bright light. Unfortunately, many older games, such as Half-Life 2: Episode One, Need for Speed Most Wanted, SpellForce 2 and Dark Messiah, use only SM 2.0 shaders for their HDR effects, while newer titles like Gothic 3 or Neverwinter Nights 2 fall back on the older Bloom method, as in Black & White 2. And although Neverwinter Nights 2 can be configured to support HDR rendering, the developer is cautious with these features so that players with ordinary hardware still get decent frame rates. Oblivion does it properly, offering both Bloom and outstanding SM 3.0 HDR rendering effects.

Shader Model 4.0 is supported as well, and the most important innovation is the reworked rendering pipeline: it is no longer split into pixel and vertex shaders. The new shader core processes all data types, vertex, pixel, geometry and even physics, and this does not hurt performance: Oblivion runs almost twice as fast as on a pixel-optimized Radeon X1900 XTX or X1950 XTX.

What the video card offers in terms of DirectX 10 cannot be tested yet: Windows Vista, DirectX 10 and games for it do not exist yet. On paper, however, everything looks more than decent. Geometry shaders support displacement mapping, which will allow even more realistic rendering of things like stereoscopic effects, volumetric objects and corrugated surfaces. Stream Output will enable even better shader effects for particles and physics. The Quantum Effects technology handles the calculation of smoke, fog, fire and explosions, taking that work off the CPU. All this together should translate into significantly more shader and physics effects in future games. How it will all pan out in practice, in which games and in what form, the future will show.

GeForce 8800 GTX | Boards in the test

Video cards on nVidia
Video card and chip code name | Memory | HDR-R | Vert./Pix. shader versions | GPU frequency | Memory frequency
nVidia GeForce 8800 GTX G80 768MB GDDR3 Yes 4.0 575 MHz 1800 MHz
Asus + Gigabyte GeForce 7900 GTX SLI G71 512MB GDDR3 Yes 3.0/3.0 650 MHz 1600 MHz
Gigabyte GeForce 7900 GTX G71 512MB GDDR3 Yes 3.0/3.0 650 MHz 1600 MHz
nVidia GeForce 7950 GT G71 512MB GDDR3 Yes 3.0/3.0 550 MHz 1400 MHz
Asus GeForce 7900 GT Top G71 256MB GDDR3 Yes 3.0/3.0 520 MHz 1440 MHz
nVidia GeForce 7900GS G71 256MB GDDR3 Yes 3.0/3.0 450 MHz 1320 MHz
Asus GeForce 7800 GTX EX G70 256MB GDDR3 Yes 3.0/3.0 430 MHz 1200 MHz
Gigabyte GeForce 7800 GT G70 256MB GDDR3 Yes 3.0/3.0 400 MHz 1000 MHz
Asus GeForce 7600 GT G73 256MB GDDR3 Yes 3.0/3.0 560 MHz 1400 MHz
nVidia GeForce 6800 GT NV45 256MB GDDR3 Yes 3.0/3.0 350 MHz 1000 MHz
Gainward GeForce 7800 GS+ AGP G71 512MB GDDR3 Yes 3.0/3.0 450 MHz 1250 MHz

The following table shows the ATi cards that took part in our testing.

Video cards on ATi
Video card and chip code name Memory HDR-R Vers./Pix. shaders GPU frequency Memory frequency
Club 3D + Club 3D Radeon X1950 XTX CF R580+ 512MB GDDR4 Yes 3.0/3.0 648 MHz 1998 MHz
Club 3D Radeon X1950 XTX R580+ 512MB GDDR4 Yes 3.0/3.0 648 MHz 1998 MHz
HIS + HIS Radeon X1900 XTX CF R580 512MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
Gigabyte Radeon X1900 XTX R580 512MB GDDR3 Yes 3.0/3.0 648 MHz 1548 MHz
PowerColor Radeon X1900XT R580 512MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
ATI Radeon X1900XT R580 256MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
Sapphire Radeon X1900 GT R580 256MB GDDR3 Yes 3.0/3.0 574 MHz 1188 MHz
HIS Radeon X1650 Pro Turbo RV535 256MB GDDR3 Yes 3.0/3.0 621 MHz 1386 MHz
Gecube Radeon X1300XT RV530 256MB GDDR3 Yes 3.0/3.0 560 MHz 1386 MHz

GeForce 8800 GTX | Test configuration

For testing we used three reference systems, all based on identical components: a dual-core AMD Athlon 64 FX-60 processor at 2.61 GHz, 2 GB of Mushkin HP 3200 2-3-2 memory, and two 120 GB Hitachi hard drives in RAID 0. The difference was in the motherboards. Single nVidia cards and SLI pairs were tested on an Asus A8N32-SLI Deluxe. Video cards in CrossFire mode (marked CF in the graphs below) ran on the same computer with a reference ATi motherboard based on the RD580 chipset. Finally, AGP video cards were tested in the same configuration but on an Asus A8V Deluxe motherboard. The configuration data is summarized in the table.

For all nVidia graphics cards (including SLI) and single ATi cards
CPU Dual-core AMD Athlon 64 FX-60 2.61 GHz
Bus frequency 200 MHz
Motherboard Asus A8N32-SLI Deluxe
Chipset nVidia nForce4
Memory Mushkin 2x1024 MB HP 3200 2-3-2
HDD Hitachi 2 x 120 GB SATA, 8 MB Cache
DVD Gigabyte GO-D1600C
LAN controller Marvell
Sound controller Realtek AC97
Power Supply Silverstone SST-ST56ZF 560 W
For tests of ATi video cards in CrossFire mode
CPU Dual-core AMD Athlon 64 FX-60 2.61 GHz
Bus frequency 200 MHz
Motherboard Reference ATi
Chipset ATI RD580
Memory Mushkin 2x1024 MB HP 3200 2-3-2
LAN controller Marvell
Sound controller AC97
For tests of AGP video cards
CPU Dual-core AMD Athlon 64 FX-60 2.61 GHz
Bus frequency 200 MHz
Motherboard Asus A8V Deluxe
Chipset VIA K8T800 Pro
Memory Mushkin 2x1024 MB HP 3200 2-3-2
LAN controller Marvell
Sound controller Realtek AC97

On computers for testing single video cards and nVidia cards in SLI mode, we used Windows XP Professional with SP1a. CrossFire boards and AGP video cards were tested on systems with Windows XP Professional SP2 installed. Driver and software versions are summarized in the following table.

Drivers and configuration
ATi video cards: ATI Catalyst 6.6; X1900 XTX, X1950 + CrossFire, X1650 + CrossFire, X1300 XT + CrossFire, CrossFire X1900, CrossFire X1600 XT: ATI Catalyst 6.7 (equivalent to Catalyst 6.8); CrossFire X1600 Pro, CrossFire X1300 Pro, CrossFire X1300: ATI Catalyst 6.8
nVidia video cards: nVidia ForceWare 91.31; 7900 GS: nVidia ForceWare 91.47; 7950 GT: nVidia ForceWare 91.47 (Special); 8800 GTX: nVidia ForceWare 96.94 (Special)
Operating system Single cards and SLI: Windows XP Pro SP1a; ATi CrossFire and AGP graphics cards: Windows XP Pro SP2
DirectX Version 9.0c
Chipset driver nVidia Nforce4 6.86, AGP VIA Hyperion Pro V509A

GeForce 8800 GTX | Test results

We received the reference board at THG directly from nVidia, along with a special ForceWare 96.94 driver prepared exclusively for the press. The GeForce 8800 GTX is a card that supports DirectX 10 and Shader Model 4.0, and its performance in DirectX 9 applications with Pixel Shader 2.0 or 3.0 is staggering.

Enabling anti-aliasing and anisotropic filtering barely reduces performance. In Half-Life 2: Episode One, the GeForce 8800 GTX simply cannot be slowed down: at 1600x1200 the chip is 60 percent faster than the Radeon X1950 XTX, and in Oblivion performance is double that of the Radeon X1900 XTX or X1950 XTX. In Prey, the card at 1600x1200 is a whopping 55 percent faster than the Radeon X1950 XTX. In Titan Quest, the frame rate does not change no matter what resolution you set, holding steady at 55 FPS.

In Half-Life 2: Episode One tests with HDR rendering, the board's results are impressive, but at low resolutions it loses to the Radeon X1950 XTX and to boards in CrossFire mode, staying roughly level with SLI solutions based on the GeForce 7900 GTX. Note that at low resolutions the video card is not the limiting factor. The higher we turn up the settings, the more interesting the result.

With anti-aliasing and anisotropic filtering enabled, the picture begins to change. All boards lose some performance, but the GeForce 8800 GTX drops only slightly, by a mere 10 fps on average, while the dual ATi Radeon X1950 XTX in CrossFire mode loses as much as 20 fps.

As soon as we step over the 1280x1024 resolution with anti-aliasing and anisotropic filtering turned on, the single GeForce 8800 GTX becomes the clear leader. The figures exceed those of the Radeon X1950 XTX by almost 35 fps. This is a significant difference.

It gets better. At 1600x1200 with anti-aliasing and anisotropic filtering, the gap over all the other boards becomes enormous: twice the GeForce 7900 GTX SLI and only slightly behind the CrossFire Radeon X1950 XTX. Now that is impressive!

Finally, let's look at how FPS declines as resolution and image quality increase. The GeForce 8800 GTX shows only a small performance drop: from plain 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering, the difference is just over 20 fps, while the former top solutions from ATi and nVidia are left far behind.

The game Hard Truck: Apocalypse is demanding both on the video card and on the CPU. This explains the virtually identical performance at 1024x768 with simple trilinear filtering and full-screen anti-aliasing turned off.

As soon as you switch to 4xAA and 8x anisotropic filtering, the results start to diverge. The "younger" cards lose a lot of performance, while the leaders seem not to notice the improved picture quality at all.

At 1280x960 the difference increases even more, but the GeForce 8800 GTX demonstrates the same results. It is clearly seen that the Athlon 64 FX-60 is not capable of bringing this video card to its knees.

At 1600x1200, every single-board solution drops toward unplayable frame rates. But the GeForce 8800 GTX delivers 51 fps as if nothing had happened.

Consider the performance decline as the settings increase. The CrossFire Radeon X1950 XTX and GeForce 7900 GTX stay close behind, while the older-generation cards have long been on their knees begging for mercy.

In Oblivion, a game that pushes the graphics card to the limit, the picture is initially depressing for all boards except the Radeon X1950 XTX in CrossFire and the GeForce 8800 GTX. We collected statistics both in open outdoor locations and in indoor scenes. Outdoors, the GeForce 8800 GTX stands next to, or slightly behind, the dual Radeon X1950 XTX.






But when the resolution reaches 1600x1200, the GeForce 8800 GTX pulls far ahead. The gap is especially visible in the indoor levels.


Look at the performance decline as resolution and quality increase; the picture needs no comment. In indoor locations, the speed is rock-solid.


In Prey, the card sits between the single ATi Radeon X1950 XTX boards and the same boards in CrossFire mode. And the higher the resolution, the better the GeForce 8800 GTX looks.




Comparing the GeForce 8800 GTX with single-board solutions from ATi and nVidia is useless. The gap in high resolutions is huge, and at 1024x768 with anti-aliasing it is impressive.

In Rise of Nations: Rise of Legends, the graphics card is the undisputed leader. Calculated as a percentage, the gap between the CrossFire Radeon X1950 XTX and the GeForce 8800 GTX is very, very large; counted in fps, the difference is less striking, but still significant.




Notice how the speed decreases as the resolution increases. At all settings, the GeForce 8800 GTX is a leader not only in comparison with single boards, but also with SLI/CrossFire solutions.

In Titan Quest, nVidia cards perform at their best. At the same time, fps does not change from 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering.




The picture of what is happening is well illustrated by the following graph. The performance of the GeForce 8800 GTX is at the same level regardless of the settings.

In 3DMark06 the card performs well with both SM 2.0 and SM 3.0 shaders. Note how slight the performance penalty is when both anisotropic filtering and anti-aliasing are enabled.


Increasing the resolution is not scary either. The card keeps pace with SLI and CrossFire solutions and is well ahead of all previous single-card leaders.


To give you a better idea of gaming performance, we have rearranged the graphs: no comparisons here, only the pure results of one video card. Note that the performance of the GeForce 8800 GTX barely changes from resolution to resolution. In every game, the limiting factor is the AMD Athlon 64 FX-60 processor, which is simply not fast enough. With the release of much faster chips, the card will perform even better in the same games. We suspect even the latest-generation Core 2 Quad cannot push the GeForce 8800 GTX to its limit.




So, having finished with the test results, let's try to rank the cards' efficiency. To do this, we collect the results of all gaming tests and weigh them against the price of each solution. We use the recommended prices, that is, without the markups of specific stores. Of course, the cards will be very expensive at first, and many stores will build excess profit into the price. But prices will drop, and you will probably be able to get a GeForce 8800 GTX for a more reasonable price fairly soon.

As we can see, GeForce 8800 GTX outperforms almost all solutions, including dual CrossFire and SLI. In absolute terms, the GeForce 8800 GTX is very fast. But what about the price?

The price matches: the manufacturer asks 635 euros. That is a lot, but two Radeon X1900 XTX boards in CrossFire mode will cost you more, 700 euros, and two Radeon X1950 XTX or an SLI pair of GeForce 7900 GTX as much as 750 euros. Given that in some tests a single GeForce 8800 GTX beats these solutions, and takes up less room in the case, there is something to think about.

Finally, let's divide fps by money. We see that this figure beats both SLI and CrossFire. Of course, each fps costs more than on the GeForce 7800 GTX EX, and noticeably more than on the Radeon X1300 XT, but the board's performance justifies it. A very effective solution in terms of price/performance ratio.
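The fps-per-money metric used here is easy to reproduce. In the sketch below the card names are real, but the fps and price values are illustrative placeholders, not the review's measured averages:

```python
# Price/performance in the spirit of the review: average fps per 100 euros.
# NOTE: fps and prices below are made-up placeholders for illustration only.
def fps_per_100_euro(avg_fps, price_eur):
    return round(avg_fps / price_eur * 100, 2)

cards = {
    "GeForce 8800 GTX (single)": (70, 635),   # hypothetical average fps, EUR
    "Radeon X1950 XTX CrossFire": (75, 750),  # hypothetical average fps, EUR
}
for name, (fps, price) in cards.items():
    print(f"{name}: {fps_per_100_euro(fps, price)} fps per 100 EUR")
```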

We decided to supplement our review with the test results of the American laboratory THG, where the GeForce 8800 GTS also participated. Please note that due to differences in the test configuration, you should not directly compare the results above with the results of the American laboratory.


The GeForce 8800 GTX is longer than the Radeon X1950 XTX and most other cards on the market; the 8800 GTS is somewhat shorter.

Like our other graphics card tests in 2006, we tested on an AMD Athlon 64 FX-60 platform. We will also show the results of multi-GPU configurations, and evaluate how the new video cards behave when performance is CPU-limited (low resolution and picture quality).

System hardware
Processors AMD Athlon 64 FX-60, 2.6GHz, 1.0GHz HTT, 1MB L2 Cache
Platform nVidia: Asus A8N32-SLI Premium, nVidia nForce4 SLI, BIOS 1205
Memory Corsair CMX1024-4400Pro, 2x 1024MB DDR400 (CL3.0-4-4-8)
HDD Western Digital Raptor, WD1500ADFD, 150 GB, 10,000 rpm, 16 MB cache, SATA150
Network Integrated nForce4 Gigabit Ethernet
Video cards ATi Radeon X1950XTX 512MB GDDR4, 650MHz core, 1000MHz memory (2.00GHz DDR)
nvidia cards:
nVidia GeForce 8800GTX 768MB GDDR3, 575MHz core, 1.350GHz stream processors, 900MHz memory (1.80GHz DDR)
XFX GeForce 8800GTS 640 MB GDDR3, 500 MHz core, 1.200 GHz stream processors, 800 MHz memory (1.60 GHz DDR)
nVidia GeForce 7900GTX 512MB GDDR3, 675MHz core, 820MHz memory (1.64GHz DDR)
Power Supply PC Power & Cooling Turbo-Cool 1000W
CPU cooler Zalman CNPS9700 LED
System software and drivers
OS Microsoft Windows XP Professional 5.10.2600, Service Pack 2
DirectX Version 9.0c (4.09.0000.0904)
Graphics drivers ATi - Catalyst 6.10 WHQL
nVidia-ForceWare 96.94 Beta

During the first run of 3DMark, we ran tests at all resolutions, but with FSAA and anisotropic filtering turned off. In the second run, we enabled the 4xAA and 8xAF image enhancement options.

nVidia in 3DMark05 is clearly in first place. The GeForce 8800 GTX performs the same at 2048x1536 as the ATi Radeon X1950 XTX at the default 1024x768. Impressive.

Doom 3 is usually dominated by nVidia cards, whose designs suit this game well. But ATi recently managed to "take" this game with its new cards.

Here, for the first time, we run into the limits of CPU processing power: at low resolution the result caps at around 126 frames per second. The ATi card could deliver more frames per second on this system configuration; the reason lies in the drivers. ATi releases drivers that load the CPU less, leaving it more headroom to feed the graphics subsystem.

Overall, the winners are the new 8800 cards. Looking at the results at all resolutions, the new DX10 cards outperform the Radeon X1950 XTX starting at 1280x1024 and up.

GeForce 8800 GTX and GTS | F.E.A.R.

In F.E.A.R., nVidia cards usually lead. But again, the lower CPU load of the ATi drivers is noticeable. Of course, the results will differ on a faster platform, but if your computer is not cutting-edge, this test shows clearly how the G80 cards will behave on it. Apart from the 1024x768 test, however, the G80 simply crushes the Radeon X1950 XTX. The GTX is a monster: no matter what load we throw at the GeForce 8800 GTX, it always delivers over 40 frames per second.


Click on the picture to enlarge.

The second screenshot (below) is taken on an 8800 GTX with the same settings.



Click on the picture to enlarge.

The nVidia picture is far superior in quality to the ATi screenshot. It looks like Nvidia is back in the lead in this regard. Before us is another advantage that nVidia cards based on the G80 chip have.


Here is a table of new quality enhancements on G80 cards.

In addition to the new DX10 graphics cards, Nvidia has also revealed several features that will be available on the G80 cards. And the first of them is a patented image quality improvement technology called Coverage Sampled Antialiasing (CSAA).

The new FSAA mode uses an area of 16 subsamples. According to nVidia, it can compress "redundant color and depth information into the memory footprint and bandwidth of four or eight multisamples". This quality level works more efficiently by reducing the amount of data stored per sample. If CSAA does not work with a given game, the driver falls back to traditional anti-aliasing methods.
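A back-of-the-envelope sketch of why 16 coverage samples cost far less than 16 full multisamples: CSAA stores full color/Z for only 4 (or 8) samples plus compact coverage information for all 16. The per-sample byte counts below are simplified assumptions (4-byte color plus 4-byte depth, 1 bit per coverage sample), not nVidia's exact on-chip format:

```python
# Simplified framebuffer cost per pixel: full samples vs. cheap coverage samples.
def per_pixel_bytes(stored_samples, coverage_samples,
                    sample_bytes=8, coverage_bits=1):
    # stored samples carry full color+depth; coverage samples carry ~1 bit each
    return stored_samples * sample_bytes + coverage_samples * coverage_bits / 8

print(per_pixel_bytes(16, 0))   # hypothetical true 16x MSAA: 128.0 bytes/pixel
print(per_pixel_bytes(4, 16))   # 16x CSAA storing 4 full samples: 34.0 bytes/pixel
```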



Before we end this review, let's talk about two more aspects of these video cards that have long been in development and will only grow in importance. The first is video playback. During the reign of the GeForce 7, ATi's Radeon X1900 cards led in video playback quality. But the situation has changed with the advent of unified shaders and a dedicated PureVideo core.

Thanks to smart algorithms and 128 compute units, the GeForce 8800 GTX managed to score 128 out of 130 in HQV. In the near future, we plan to release a more detailed article regarding picture quality, so stay tuned to our website.

Finally, a very strong point of the G80 is what Nvidia calls CUDA. For years, scientists and enthusiasts have been looking for ways to squeeze more performance out of powerful parallel processors. Not everyone can afford a Beowulf cluster, of course, so various ways have been proposed for ordinary mortals to use a graphics card for general computation.

The problem is this: the GPU is good at parallel computation but copes poorly with branching, which is where the CPU comes in handy. Moreover, until now, using a graphics card for computation meant programming shaders the way game developers do. NVIDIA decided to take the lead once again by introducing the Compute Unified Device Architecture, or CUDA.


This is how CUDA can work for fluid simulation.

nVidia has released a C compiler whose resulting programs scale with the GPU's computing power (e.g. 96 stream processors on the 8800 GTS or 128 on the 8800 GTX). Programmers can now create programs that scale across both CPU and GPU resources. CUDA will certainly appeal to distributed-computing projects. But it can be used not only for crunching work units: it can also simulate effects such as volumetric fluids, cloth and hair. Via CUDA, physics calculations and potentially other aspects of a game can be offloaded to the GPU.
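As a rough illustration of the "scales with computing power" point, peak programmable shader throughput can be estimated as stream processors times shader clock times operations per clock. A minimal sketch (function name is ours), assuming 2 flops per stream processor per clock (one multiply-add), a common but simplified figure for G8x; real throughput depends on the instruction mix:

```python
# Rough peak shader throughput: stream processors x shader clock x flops/clock.
# Assumes 2 flops per SP per clock (one multiply-add); treat results as upper
# bounds, not measured performance.
def peak_gflops(stream_processors: int, shader_mhz: int,
                flops_per_clock: int = 2) -> float:
    return stream_processors * shader_mhz * flops_per_clock / 1000.0

print(peak_gflops(96, 1188))   # old 8800 GTS: ~228.1 GFLOPS
print(peak_gflops(128, 1350))  # 8800 GTX:     345.6 GFLOPS
```

The same CUDA program simply sees more processors on the GTX, which is where the scaling comes from.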


Developers will be provided with a full SDK.

GeForce 8800 GTX and GTS | Conclusion

Those upgrading now from a GeForce 6 will see almost a threefold increase in performance. It doesn't matter when DirectX 10 games arrive, or that Shader Model 4.0 is still to come: already today the GeForce 8800 GTX is the fastest chip. Games like Oblivion, Gothic 3 and Dark Messiah seemed to have been waiting for the G80 chip and cards. Playing without stutter is possible again, and the GeForce 8800 GTX has enough power for all the latest games.

The cooling system is quiet: the 80mm cooler on the reference card is barely audible, and even at full load the fan speed stays low. One wonders how ATI will respond. In any case, nVidia did a hell of a good job and released a genuinely powerful piece of hardware.

Disadvantages: The board is 27 centimeters long and takes up the space of two PCI Express slots. The power supply must be at least 450 watts (12V, 30A). For the GeForce 8800 GTS, the minimum will be a 400 watt PSU with 30 amps on the 12 volt bus.

Following long tradition, nVidia's cards are already available in online stores. The international recommended price is $599 for the GeForce 8800 GTX and $449 for the GeForce 8800 GTS. DX10 games should appear soon, and just as importantly, you get a better picture in existing games today.


This is what a supermodel rendered with DX10 might look like.

GeForce 8800 GTX and GTS | Editor's opinion

Personally, I'm impressed with nVidia's implementation of DX10/D3D10. Watching Crysis and numerous demos run in real time is impressive. CUDA turns a video card into something more than just a frame renderer: programs can now use not only the CPU's resources but the full parallel power of the GPU's unified shader core. I can't wait to see these solutions in practice.

But the G80 leaves much to be desired. What? Of course, new games. Gentlemen developers, be so kind as to release DX10 games as soon as possible.

GeForce 8800 GTX | Photo gallery

For more than a year after the release of video cards based on NVIDIA GeForce 8800 chips, the situation on the graphics accelerator market was extremely unfavorable for the end customer. An overclocker willing to pay a tidy sum for a top-end video card simply had no alternative: the competitor from ATI (AMD) appeared later and, in the end, could not compete with the GeForce 8800 GTX, let alone the later GeForce 8800 Ultra. NVIDIA's marketers understood perfectly well that with no competition there was no need to cut the prices of top-end cards. As a result, throughout this period the prices of the GeForce 8800 GTX and Ultra stayed at the same very high level, and only a few could afford such cards.

However, the upper price segment has never been the defining priority for makers of graphics chips and video cards. Leadership in this class is certainly prestigious for any company, but economically the mid-range is the most profitable. Yet, as recent tests of the mid-range contenders AMD Radeon HD 3850 and 3870 showed, their performance is unsatisfactory for modern games and essentially unacceptable in quality modes. The NVIDIA GeForce 8800 GT is faster than this pair, but also falls short of comfortable frame rates in DirectX 10 games. What comes next, if you are able to pay extra? Until yesterday, essentially nothing, since there was literally a price abyss between the GT and the GTX.

But technical progress does not stand still: the new NVIDIA G92 chip, manufactured on a 65-nm process, allowed the company not only to attract overclockers with the quite successful GeForce 8800 GT, but also, yesterday, December 11 at 17:00 Moscow time, to announce a new product, the GeForce 8800 GTS 512 MB. Despite the unassuming name, the new graphics accelerator differs significantly from the regular GeForce 8800 GTS. In today's material we will get acquainted with one of the first GeForce 8800 GTS 512 MB cards to reach the Russian market, check its thermals and overclocking potential and, of course, study the newcomer's performance.


1. Specifications of the video cards participating in the testing

The technical characteristics of the newcomer are presented in the following table, in comparison with other members of the NVIDIA GeForce 8800 family:

Name of technical characteristic  | 8800 GT         | 8800 GTS        | 8800 GTS 512 MB | 8800 GTX / Ultra
GPU                               | G92 (TSMC)      | G80 (TSMC)      | G92 (TSMC)      | G80 (TSMC)
Process technology, nm            | 65 (low-k)      | 90 (low-k)      | 65 (low-k)      | 90 (low-k)
Core area, sq. mm                 | 330             | 484             | 330             | 484
Number of transistors, mln        | 754             | 681             | 754             | 681
GPU frequency, MHz (shader)       | 600 (1512)      | 513 (1188)      | 650 (1625)      | 575 / 612 (1350 / 1500)
Effective memory frequency, MHz   | 1800            | 1584            | 1940            | 1800 / 2160
Memory size, MB                   | 256 / 512       | 320 / 640       | 512             | 768
Supported memory type             | GDDR3           | GDDR3           | GDDR3           | GDDR3
Memory bus width, bit             | 256 (4x64)      | 320             | 256 (4x64)      | 384
Interface                         | PCI-E x16 v2.0  | PCI-E x16 v1.x  | PCI-E x16 v2.0  | PCI-E x16 v1.x
Unified shader processors, pcs.   | 112             | 96              | 128             | 128
Texture units, pcs.               | 56 (28)         | 24              | 64 (32)         | 32
Rasterization units (ROPs), pcs.  | 16              | 20              | 16              | 24
Pixel/vertex shader support       | 4.0 / 4.0       | 4.0 / 4.0       | 4.0 / 4.0       | 4.0 / 4.0
Memory bandwidth, GB/s            | ~57.6           | ~61.9           | ~62.1           | ~86.4 / ~103.7
Theoretical fill rate, Gpix/s     | ~9.6            | ~10.3           | ~10.4           | ~13.8 / ~14.7
Peak texture sampling, Gtex/s     | ~33.6           | ~24.0           | ~41.6           | ~36.8 / ~39.2
Peak 3D power draw, W             | ~106            | n/a             | n/a             | ~180
PSU requirement, W                | ~400            | ~400            | ~400            | ~450 / ~550
Reference dimensions (LxHxT), mm  | 220 x 100 x 15  | 228 x 100 x 39  | 220 x 100 x 32  | 270 x 100 x 38
Outputs                           | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (Dual Link), TV-Out, HDTV-Out
Additionally                      | SLI support     | SLI support     | SLI support     | SLI support
Recommended price, USD            | 199 / 249       | 299~349         | 349~399         | 499~599 / 699
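Several rows of the table are derived quantities: bandwidth follows from effective memory clock times bus width, fill rate from ROP count times core clock, and texture rate from texture unit count times core clock. A minimal sketch that recomputes them (function names are ours; 1 GB/s is taken as 1000 MB/s, and the table's own rounding differs slightly in one or two entries):

```python
# Sanity-check the derived rows of the spec table above.
def bandwidth_gbs(effective_mhz: int, bus_bits: int) -> float:
    # bits per transfer / 8 = bytes per transfer, times transfer rate in MHz
    return effective_mhz * bus_bits / 8 / 1000

def fillrate_gpix(rops: int, core_mhz: int) -> float:
    # one pixel per ROP per core clock
    return rops * core_mhz / 1000

def texrate_gtex(tmus: int, core_mhz: int) -> float:
    # one texel per texture unit per core clock
    return tmus * core_mhz / 1000

print(bandwidth_gbs(1940, 256))  # 8800 GTS 512: 62.08 -> table's ~62.1 GB/s
print(fillrate_gpix(16, 650))    # 8800 GTS 512: 10.4 Gpix/s
print(texrate_gtex(64, 650))     # 8800 GTS 512: 41.6 Gtex/s
```

The same formulas reproduce the GTX's ~86.4 GB/s (1800 MHz x 384 bit) and ~13.8 Gpix/s (24 ROPs x 575 MHz).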

2. Review of BFG GeForce 8800 GTS 512 MB OC (BFGR88512GTSE)

The latest video card from a company well known to overclockers comes in a very compact box, decorated in dark colors.

Comparative testing of four video cards: GeForce 8800GTS 512 and 8800GT

Let's take a look at GeForce 8800GTS 512 boards and compare them with the cheaper GeForce 8800GT and the veteran GeForce 8800GTX. Along the way, we bring up a new test bench and collect DX10 driver flaws.

With the release of the new GeForce 8800GTS 512 series, NVIDIA has significantly strengthened its position. The new product replaced the more expensive, hotter and bulkier GeForce 8800GTX, and its only real drawback compared to its predecessor is the narrower 256-bit memory bus (versus the GTX's 384-bit bus). The newcomer has seen not only cuts but also improvements: the number of texture units has been increased from 32 to 64, which partly compensates for the simplifications; the clock frequencies were raised relative to the predecessor; and the video memory is easily expanded to 1 GB simply by installing larger chips, which some manufacturers have already begun to do. But although the GeForce 8800GTS 512 replaced the GeForce 8800GTX, its main competitor is not its predecessor but its closest relative, the GeForce 8800GT, and the whole point is the latter's lower price. The GeForce 8800GTS 512 and GeForce 8800GT differ little from each other, since the GeForce 8800GT is a cut-down version of the GeForce 8800GTS 512 that, oddly enough, reached the market before the full-fledged version. Both cards carry 512 MB of video memory and, as today's study showed, the same memory chips. The main differences lie in the GPU: in the GT version some of its functional blocks are disabled. More details are given in the table below:

As you can see, the GeForce 8800GT differs from its older sister in the number of universal processors reduced to 112 and the number of texture units reduced to 56. Initially, the cards also differ in clock speeds, but this does not matter for our today's review, since almost all cards have been factory overclocked. Let's find out how the differences on paper are reflected in reality.

Leadtek 8800GTS 512

The designers from Leadtek chose a bright orange color to draw attention to their video card, and they were absolutely right: the novelty will not go unnoticed.
The front of the box shows a scene from a fictional "shooter", beneath which are the card's technical specifications and a note about the bonus: the full version of Neverwinter Nights 2.
The reverse side of the box lists the card's characteristics, the package contents and standard information from NVIDIA. The package includes:
  • S-video > S-video + component out splitter;
  • DVI > D-sub adapter;
  • CD with drivers;
  • CD with Power DVD 7;

The Leadtek 8800GTS 512 is based on the reference design familiar from GeForce 8800GT boards. Outwardly, the newcomer is distinguished by a "two-story" cooling system which, unlike its predecessor's, exhausts hot air out of the computer case. The advantages of such a solution are obvious, and the reason for the improved cooler is most likely not that the "new" chip runs hotter, but that for bigger money the buyer has every right to a better product. After all, to be honest, the GeForce 8800GT's reference cooler does not do its job particularly well.
The reverse sides of the GeForce 8800GTS 512 and GeForce 8800GT look almost the same and differ in that the 8800GTS 512 version has all the elements mounted. However, we will be able to see the differences later on the example of the Leadtek 8800GT video card, but for now let's get under the hood of the new product.
Having removed the cooling system, we can again confirm the boards' similarity. However, note the right side of the board, where the power subsystem sits. Where the GeForce 8800GT is empty, with only vacant pads, the Leadtek 8800GTS 512's board is densely populated with components. The GeForce 8800GTS 512 thus has a more sophisticated power subsystem than the GeForce 8800GT. This is hardly surprising: the GeForce 8800GTS 512 runs at higher frequencies and consequently makes stricter demands on power quality.
There are no external differences between the G92 chip in Leadtek 8800GTS 512 and the G92 chip in GeForce 8800GT video cards.
The new video card uses the same Qimonda chips with a 1.0 ns access time as in the GeForce 8800GT. A set of eight chips forms 512 MB of video memory. The nominal frequency for such chips is 2000 MHz DDR, but the actual frequency set in the video card is slightly lower.
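The "1.0 ns corresponds to 2000 MHz" relation is simple arithmetic: the access time bounds the real clock, and DDR's two transfers per cycle double the effective rate. A quick sketch (function name is ours):

```python
# A memory chip's access time bounds its clock: 1 / t_access cycles per second,
# doubled because DDR transfers data on both clock edges.
def ddr_effective_mhz(access_time_ns: float) -> float:
    real_clock_mhz = 1000.0 / access_time_ns  # 1.0 ns -> 1000 MHz real clock
    return 2 * real_clock_mhz                 # two transfers per cycle

print(ddr_effective_mhz(1.0))  # 2000.0 MHz effective, as for these Qimonda chips
print(ddr_effective_mhz(0.8))  # ~2500 MHz for faster 0.8 ns chips
```

This is why running 1.0 ns chips at 1944 MHz, as here, leaves a little headroom below their 2000 MHz rating.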
The cooling system for the video card is aluminum with a copper plate. This combination of two materials has been used for a long time and allows you to achieve the required performance at a lower weight and at a lower price.
The processing of the copper "core" is at a satisfactory level, but no more.
After removing the casing from the cooling system, a stunning picture appears before us: as many as three heat pipes are engaged in heat removal from the copper base, which go to different parts of the radiator made of aluminum plates. Such a scheme serves to evenly distribute heat, and the large dimensions of the radiator should have the best effect on the quality of cooling, which cannot be said about the reference cooling system of GeForce 8800GT. There are also three heat pipes, but their dimensions are noticeably smaller, as are the dimensions of the radiator itself.

Differences, overclocking and efficiency of the cooling system


The differences from the GeForce 8800GT lie in the increased number of universal processors from 112 to 128, as well as the operating frequencies of the entire GPU.
In Leadtek 8800GTS 512, the frequencies correspond to the recommended ones and are equal to 650/1625 MHz for the graphics processor and 1944 MHz for the video memory.

Now - about the heating of the video card, which we will check using the Oblivion game with maximum settings.


The video card Leadtek 8800GTS 512 warmed up from 55 degrees at rest to 71 degrees, the noise from the fan was almost inaudible. However, this was not enough for overclocking, and with the help of the same Riva Tuner, we increased the fan speed to 50% of the possible maximum.
After that, the temperature of the GPU did not rise above 64 degrees, while the noise level remained at a low level. The video card Leadtek 8800GTS 512 was overclocked to 756/1890 MHz for the GPU and 2100 MHz for the video memory. Such high frequencies were unavailable for the GeForce 8800GT, apparently due to the simplified power supply system.

Well, let's get acquainted with the next participant in our testing today - the ASUS EN8800GTS TOP video card.

ASUS EN8800GTS TOP


Looking at the packaging of powerful ASUS video cards, you might get the feeling that it holds not a video card but, say, a motherboard: in our case the box is noticeably larger than that of the first participant in today's test. The large front face fits a big image of the brand's archer girl and a sizeable chart claiming a 7% speed advantage over the "regular" GeForce 8800GTS 512. The "TOP" suffix in the card's name indicates factory overclocking. A minus of the packaging is that it is not obvious the card belongs to the GeForce 8800GTS 512 series, but these are trifles. At first, the sparseness of information on the box is surprising, but the truth is revealed later, quite literally by itself.
Pick the box up by its handle and, at the first breath of a breeze, it opens like a book. The information under the cover is entirely devoted to ASUS's proprietary utilities, in particular ASUS Gamer OSD, which can now not only change brightness/contrast/color in real time, but also show the FPS value, record video and take screenshots. The second utility, Smart Doctor, monitors the card's supply voltages and frequencies and also allows you to overclock it. Notably, ASUS's utility can change both GPU frequencies, the core and the shader domain, which brings it very close to the famous Riva Tuner.
The reverse side of the box contains a bit of everything, in particular, a brief description of the Video Security utility, designed to use a computer as a "smart" online video surveillance system.
The complete set of the card is executed according to the principle of "nothing more":
  • adapter for powering PCI-express cards;
  • adapter S-video > component out;
  • DVI > D-sub adapter;
  • bag for 16 discs;
  • CD with drivers;
  • CD with documentation;
  • short instruction for installing a video card.

Externally, the video card is almost an exact copy of Leadtek 8800GTS 512, and this is not surprising: both cards are based on the reference design and, most likely, produced at the same factory by order of NVIDIA itself, and only then sent to Leadtek and ASUS. To put it simply, today a card from Leadtek could well become a card from ASUS, and vice versa.
It is clear that the reverse side of the video card also does not differ from that of Leadtek 8800GTS 512, except that they have different branded stickers.
Under the cooling system is also nothing unusual. The power system on the right side of the board is fully assembled, in the center is the G92 GPU with 128 active stream processors and eight memory chips, totaling 512 MB.
The memory chips are manufactured by Qimonda and have an access time of 1.0 ns, which corresponds to a frequency of 2000 MHz.
The appearance of the GPU does not reveal its noble origin, just like in Leadtek 8800GTS 512.
The cooling system of the ASUS EN8800GTS TOP is exactly the same as the Leadtek 8800GTS 512's: a copper "core" is embedded in the aluminum heatsink to remove heat from the GPU.
The polishing quality of the copper core is satisfactory, as with its predecessor.
The heat from the copper core is distributed over the aluminum fins using three copper heat pipes. We have already seen the effectiveness of this solution on the example of the first card.

Rated frequencies and overclocking

As we have already said, the TOP prefix after the name of the video card indicates its factory overclocking. The nominal frequencies of the novelty are 740/1780 MHz for the GPU (against 650/1625 MHz for Leadtek) and 2072 MHz for video memory (against 1944 MHz for Leadtek). Note that for memory chips with 1.0 ns access time, the nominal clock frequency is 2000 MHz.
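The box's 7% figure refers to in-game speed; the clock gains themselves are larger. A quick sketch (helper name is ours) computing the TOP's factory gains over the reference clocks quoted above; frame rates usually scale sub-linearly with clocks, which would be consistent with the smaller figure on the box:

```python
# Percentage gain of ASUS's factory overclock over the reference
# GeForce 8800 GTS 512 clocks quoted above.
def gain_percent(overclocked: int, reference: int) -> float:
    return (overclocked - reference) / reference * 100

print(round(gain_percent(740, 650), 1))    # core:   13.8 %
print(round(gain_percent(1780, 1625), 1))  # shader:  9.5 %
print(round(gain_percent(2072, 1944), 1))  # memory:  6.6 %
```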

We managed to overclock the card to the same frequencies as the Leadtek 8800GTS 512: 756/1890 MHz for the GPU and 2100 MHz for the video memory at a fan speed of 50% of the maximum.

Well, now let's go down a step and get acquainted with two video cards of the GeForce 8800GT class.

Leadtek 8800GT

The Leadtek 8800GT is a typical representative of the GeForce 8800GT series and, frankly, differs little from the majority. But since GeForce 8800GT cards are cheaper than the "advanced" GeForce 8800GTS 512, they are no less interesting.
The Leadtek 8800GT box is almost the same as the more expensive 8800GTS 512's; the differences are a slimmer box, no carrying handle and, of course, the name of the card. The "extreme" inscription after the card's name indicates factory overclocking.
The back side of the box contains brief information about the video card, its advantages and a list of equipment. By the way, in our case, there was no Neverwinter Nights 2 game and instructions for installing a video card.
The new package includes:
  • adapter for powering PCI-express cards;
  • S-video > S-video + component out splitter;
  • DVI > D-sub adapter;
  • CD with drivers;
  • CD with Power DVD 7;
  • CD with the full version of the game Neverwinter Nights 2;
  • brief instructions for installing a video card.

The Leadtek 8800GT video card is made according to the reference design and differs only in the sticker on the cooling system cover.
The reverse side of the video card does not stand out either, however, after getting acquainted with the GeForce 8800GTS 512 video card, the missing row of chip capacitors on the left of the board attracts attention.
The cooling system is made according to the reference design and is well known to us from previous reviews.
Examining the printed circuit board, the absence of components on the right side of the card catches the eye; as we saw earlier, they are mounted on the 8800GTS 512 version. Otherwise it is quite an ordinary board with a G92 graphics processor cut down to 112 stream processors and eight memory chips totaling 512 MB.
Like the previous participants in today's testing, the memory chips of Leadtek 8800GT are manufactured by Qimonda and have an access time of 1.0 ns, which corresponds to 2000 MHz.

Rated frequencies and overclocking

As already mentioned, the Leadtek 8800GT comes factory overclocked: its nominal frequencies are 678/1700 MHz for the GPU and 2000 MHz for the video memory. However, despite the considerable factory overclock, the card did not do well in manual overclocking: only 713/1782 MHz for the GPU and 2100 MHz for the video memory. Recall that the participants of previous reviews reached 740/1800 MHz for the GPU and 2000-2100 MHz for the memory. Note also that we achieved this result at the maximum fan speed because, as we have said, the GeForce 8800GT's reference cooler does not do its job particularly well.

Now let's move on to the next participant of today's testing.

Palit 8800GT sonic


The face of the Palit 8800GT sonic is a fighting frog in a spectacular pose. Silly, but very funny! Still, our life is full of nonsense, and being reminded of it does no harm. Turning from fun to business: note the lower right corner, where a sticker states the card's frequencies and other characteristics. The newcomer's frequencies are almost the same as the GeForce 8800GTS 512's: 650/1625 MHz for the GPU and 1900 MHz for the video memory, only 44 MHz lower than the 8800GTS 512.
The reverse side of the box does not contain anything remarkable, because everything interesting is located on the front side.
The new package includes:
  • adapter for powering PCI-express cards;
  • adapter S-video > component out;
  • S-video > RCA (composite) adapter;
  • DVI > D-sub adapter;
  • DVI > HDMI adapter;
  • CD with drivers;
  • CD with the full version of the game Tomb Raider: Legend;
  • brief instructions for installing a video card.
It should be noted that this is the first GeForce 8800GT-class card with a DVI > HDMI adapter to visit our test lab; previously, only some cards of the AMD Radeon family came with such an adapter.
And here is the first surprise! The Palit 8800GT sonic video card is based on a printed circuit board of its own design and is equipped with a proprietary cooling system.
The reverse side of the video card also has differences, but it is still difficult for us to judge the pros and cons of the new design. But we can fully judge the installation of video card components and its quality.
Since the standoffs between the GPU heatsink and the board are shorter than the gap between them, and the heatsink is fastened with screws without any damping pads, the board itself and the graphics chip substrate are noticeably bowed. Unfortunately, this can damage them; the weak point is not the strength of the board's laminate but the traces, which can crack under tension. It is by no means certain that this will happen, but the manufacturer should pay more attention to how cooling systems are attached to its video cards.
The cooling system is made of painted aluminum and consists of three parts - for the GPU, video memory and power subsystem. The base of the heatsink for the GPU does not shine with any special processing, and a solid gray mass is used as a thermal interface.
Changes in the design of the printed circuit board affected the power subsystem, small elements were replaced with larger ones, their layout changed. As for the rest, we have before us the well-known GeForce 8800GT with the G92 graphic processor and eight video memory chips, totaling 512 MB.
Like the rest of today's testers, the memory chips are manufactured by Qimonda and have an access time of 1.0 ns.

Cooling efficiency and overclocking

We will test the effectiveness of Palit 8800GT sonic's proprietary cooling system using, as always, the game Oblivion at maximum settings.


The video card warmed up from 51 to 61 degrees, which, in general, is a very good result. However, the fan speed increased noticeably, as a result of which the already not quiet cooling system became clearly audible against the general background. Therefore, it is difficult to recommend a video card from Palit to lovers of silence.

Despite changes in the power subsystem and improved cooling, the Palit 8800GT sonic video card overclocked to the usual frequencies of 734/1782 MHz for the GPU and 2000 MHz for the video memory.

So we have finished getting acquainted with the participants of today's testing, and therefore we will move on to reviewing the test results.

Testing and Conclusions

Today's testing is unusual not only in that we compare four video cards, but also in that we ran it on a different test bench than the one you are familiar with, configured as follows:

The change of test platform is due to the fact that we initially planned to test the Leadtek 8800GTS 512 and ASUS EN8800GTS TOP in SLI mode but, unfortunately, the ASUS card did not survive our abuse to the end of the tests, and the idea collapsed. We therefore decided to postpone SLI testing to a separate article once we have the necessary hardware in hand, and to limit ourselves to single-card tests for now. We will be comparing seven video cards, one of which is a GeForce 8800GTS 512 overclocked to 756/1890/2100 MHz. For comparison, we added a GeForce 8800GT and a GeForce 8800GTX running at NVIDIA's recommended frequencies. To make it easier to navigate, here is a table with the clock frequencies of all test participants:

Video card name                                                | GPU core / shader, MHz | Effective memory, MHz
Leadtek 8800GTS 512                                            | 650 / 1625             | 1944
ASUS EN8800GTS TOP                                             | 740 / 1780             | 2072
Leadtek 8800GT                                                 | 678 / 1674             | 2000
Palit 8800GT                                                   | 650 / 1625             | 1900
Overclocked GeForce 8800GTS 512
("8800GTS 512 756/1890/2100" on the charts)                    | 756 / 1890             | 2100
GeForce 8800GT ("8800GT" on the charts)                        | 600 / 1500             | 1800
GeForce 8800GTX ("8800GTX" on the charts)                      | 575 / 1350             | 1800

We used the ForceWare 169.21 and ForceWare 169.25 drivers for Windows XP and Windows Vista, respectively. We will traditionally start our acquaintance with the test results with 3DMark tests:
From the 3DMark results you can of course see who is stronger and who is weaker, but the differences are so small that there are no clear leaders. Still, it is worth noting that the most expensive participant, the GeForce 8800GTX, took the last places. For the full picture we need the game tests, which as before we ran with 4x anti-aliasing and 16x anisotropic filtering.
In Call of Duty 4, it is striking that the Leadtek 8800GT is almost on a par with the Leadtek 8800GTS 512, and the ASUS EN8800GTS TOP is not far behind the overclocked GeForce 8800GTS 512, while the GeForce 8800GT brings up the rear. The winner was the GeForce 8800GTX, apparently thanks to its wider (compared to the other participants) memory bus.
In Call of Juarez under Windows XP, the Leadtek 8800GTS 512 is almost on a par with the GeForce 8800GTX, which its wider memory bus no longer saves. Note that the Leadtek 8800GT does not lag behind them and at 1024x768 even outperforms them, thanks to its higher frequencies compared to the other two cards. The leaders are the ASUS card and the overclocked GeForce 8800GTS 512, while the penultimate place again goes to the Palit card, just ahead of the GeForce 8800GT.
In Call of Juarez under Windows Vista there were problems at 1600x1200: large drops in speed and, in places, very severe stuttering. We assume the problem is a shortage of video memory in such a heavy mode; whether that is so we will check in the next review using an ASUS 8800GT with 1 GB of video memory. Note right away that the GeForce 8800GTX had no such problems. Judging by the two lower resolutions, the balance of power has not changed compared to Windows XP, except that the GeForce 8800GTX reminded us of its noble origin, though it did not take the lead.
In Crysis under Windows XP the balance shifts slightly, but in essence everything stays the same: the Leadtek 8800GTS 512 and Leadtek 8800GT are roughly level, the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512 lead, and last place goes to the GeForce 8800GT. Note also that as the resolution grows, the gap between the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX narrows thanks to the latter's wider memory bus. Still, high clock speeds prevail, and yesterday's champion is left out of the running.
The Windows Vista problem at 1600x1200 did not spare Crysis either, bypassing only the GeForce 8800GTX. As in Call of Juarez, the other cards showed erratic speed and, in places, very severe performance drops, sometimes below one frame per second. Judging by the two lower resolutions, this time the Leadtek 8800GTS 512 outperformed its younger sister, taking third place. The first places went to the ASUS EN8800GTS TOP, the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX, which finally took the lead at 1280x1024.
In Need for Speed Pro Street Racing the leader is the GeForce 8800GTX, far ahead at 1024x768. It is followed by the Leadtek 8800GTS 512, then the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512, with the last places going to the GeForce 8800GT and Palit 8800GT sonic. Since the GeForce 8800GTX leads, we can conclude that the game depends heavily on video memory bandwidth. That would also explain why the overclocked GeForce 8800GTS 512 cards turned out slower than the non-overclocked one: apparently the memory latencies grew as its clock frequency was raised.
In Need for Speed: Carbon we see a familiar picture: the Leadtek 8800GTS 512 and Leadtek 8800GT are roughly on a par, the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP take first place, and the GeForce 8800GT comes last. The GeForce 8800GTX looks decent, but nothing more.
In Oblivion, what draws attention is that at 1024x768 the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP took the last places. We assumed the cause was memory latencies increased by the higher clock frequency, and we were right: after lowering the memory frequency of the overclocked GeForce 8800GTS 512 back to nominal, it delivered over 100 frames per second. As the resolution grows, the situation returns to normal, and the former outsiders become the leaders. It is also noteworthy that the Leadtek 8800GT outperforms the Leadtek 8800GTS 512, most likely thanks to its higher shader unit frequency.
Prey turned out to be undemanding for all the video cards, and they lined up according to their clock frequencies. The GeForce 8800GTX behaved somewhat differently, but that is understandable: it has a wider memory bus, and the game depends heavily on memory bandwidth.
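Why the 384-bit bus of the GeForce 8800GTX matters in bandwidth-bound games can be seen from a back-of-the-envelope calculation: peak memory bandwidth is simply bus width (in bytes) multiplied by the effective memory transfer rate. The sketch below uses the official reference clocks of the cards, not the factory-overclocked frequencies of the specific boards tested here:

```python
def memory_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    bytes_per_transfer = bus_width_bits / 8           # bus width in bytes
    transfers_per_second = effective_clock_mhz * 1e6  # effective (DDR) rate
    return bytes_per_transfer * transfers_per_second / 1e9

# Official reference specifications (effective GDDR3 data rates):
cards = {
    "GeForce 8800 GTX":     (384, 1800),  # 384-bit bus, 900 MHz (1800 MT/s)
    "GeForce 8800 GTS 512": (256, 1940),  # 256-bit bus, 970 MHz (1940 MT/s)
    "GeForce 8800 GT":      (256, 1800),  # 256-bit bus, 900 MHz (1800 MT/s)
}

for name, (bus, clock) in cards.items():
    print(f"{name}: {memory_bandwidth_gbs(bus, clock):.1f} GB/s")
# GeForce 8800 GTX: 86.4 GB/s
# GeForce 8800 GTS 512: 62.1 GB/s
# GeForce 8800 GT: 57.6 GB/s
```

So even with its faster memory chips, the GeForce 8800GTS 512 has roughly 40% less peak bandwidth than the GeForce 8800GTX, which explains why the older card pulls ahead precisely in bandwidth-limited titles.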

Conclusions

The purpose of today's testing was to find out how much these video cards differ from one another, and whether the high price of the "advanced" GeForce 8800GTS 512 is justified. The GeForce 8800GTS 512 outperforms the GeForce 8800GT on specifications, including the number of active functional blocks inside the GPU. The obvious advantages of the new GeForce 8800GTS 512 cards are a high-quality, quiet cooling system and a higher overclocking potential than the GeForce 8800GT. The card from ASUS deserves special attention: thanks to its factory overclock it occupies a leading position. Of course, you can overclock a card yourself, and most likely any GeForce 8800GTS 512 will reach the frequencies of the ASUS card. On the whole, we would note once again that the new family of video cards based on the G92 graphics chip has turned out very successful and may well replace the recent leader, the GeForce 8800GTX.

Pros and cons of individual video cards:

Leadtek 8800GTS 512

Pros:
  • good overclocking potential;
  • good bundle;
  • attractive, convenient packaging.
Cons:
  • none noted.

ASUS EN8800GTS TOP

Pros:
  • factory overclock;
  • high-quality cooling system;
  • good overclocking potential.
Cons:
  • oversized, unwieldy packaging.

Leadtek 8800GT

Pros:
  • factory overclock;
  • decent kit.
Cons:
  • none noted.

Palit 8800GT sonic

Pros:
  • factory overclock;
  • alternative cooling system;
  • decent kit.
Cons:
  • noticeably warped PCB around the GPU;
  • noticeable fan noise.