
Everything You Need to Know About GDDR Memory

We invariably refer to the video memory in modern videocards as GDDR, differentiating it only by version (GDDR2, GDDR3, GDDR4, and now GDDR5), but the technology’s full acronym is actually GDDR SDRAM, which stands for Graphics Double Data Rate Synchronous Dynamic Random Access Memory.

“Double data rate” describes the memory’s capacity for double-pumping data: Transfers occur on both the rising and falling edges of the clock signal. This endows memory clocked at 800MHz with an effective data-transfer rate of 1.6GHz. “Synchronous” refers to the memory’s ability to operate in time with the computer’s system bus. This allows the memory to accept a new instruction without having to wait for a previous instruction to be processed, a practice known as instruction pipelining.
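The double-pumping arithmetic is simple enough to sketch. Here is a minimal illustration of the relationship described above, using the article's 800MHz example (the function name is ours, for illustration only):

```python
def effective_transfer_rate(clock_mhz: float) -> float:
    """Double data rate: one transfer on the rising edge and one on the
    falling edge of each clock cycle, so the effective transfer rate is
    twice the base memory clock."""
    return clock_mhz * 2

# Memory clocked at 800MHz transfers data at an effective 1600MHz (1.6GHz)
print(effective_transfer_rate(800))  # 1600.0
```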

GDDR2 memory was never a popular solution among GPU manufacturers: The technology required 2.5 volts to power its input buffers and core logic (i.e., VDD voltage), the same requirement as the original GDDR. GDDR2 operated at much higher clock speeds than its predecessor, however, and those higher speeds produced a tremendous amount of heat. The fact that GDDR2’s VDDQ voltage requirement (the electricity needed to power the memory’s output buffers) was only 1.8 volts didn’t compensate for this problem.

GDDR Memory Features Compared

Survival of the Fittest 

GDDR3—an open standard developed by ATI in conjunction with the standards organization JEDEC Solid State Technology Association—is the most widely used graphics memory technology today. Ironically, Nvidia introduced the first graphics processors designed to use GDDR3: the GeForce FX 5700 Ultra, followed by the GeForce 6800 Ultra. ATI didn’t deploy a GDDR3 solution until it shipped the Radeon X800.

GDDR3 improved on previous GDDR designs by supporting higher clock speeds while requiring less power. These chips consume less electricity, so they produce less heat and can rely on simpler cooling hardware (GDDR3’s VDD and VDDQ voltage requirements are both 1.8 volts). GDDR3 also has separate read and write data strobes, which contributes to a much faster read-to-write ratio (meaning the turnaround from a read operation to a write operation occurs much more quickly) than GDDR2 supported. GDDR3 chips have a hardware reset feature that can wipe their memory clean to start receiving new data should such an operation be necessary.

ATI and Nvidia (in conjunction with JEDEC) both had a hand in establishing the specification for the next generation of graphics memory, GDDR4, but Nvidia has so far decided not to use the new technology in any of its reference designs. ATI, meanwhile, incorporated the new memory first in its Radeon X1950 XTX cards and subsequently in several models of its Radeon HD 2000, 3000, and 4000 series.

Evolutionary Dead End

GDDR4’s improvements over GDDR3 were mostly incremental. It seemed to offer a power advantage in that it could operate with just 1.5 volts, compared to GDDR3’s 1.8 volts. Board designers, however, quickly discovered that they needed 1.8 volts anyway to ensure stability at higher clock rates.

Two other GDDR4 enhancements are more significant in that they increase the memory’s overall performance: The new memory doubled the size of GDDR3’s prefetch scheme from 4 bits to 8 bits, and its burst length was locked at 8 bits (GDDR3 supports either 4- or 8-bit burst lengths). Prefetch enables the memory chip to anticipate the need for data and grab it before the GPU asks for it, reducing the time the processor has to wait. Burst length defines the amount of data sent in burst mode, a process in which data is transmitted without waiting for input from another device, such as the GPU.
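The prefetch depth is what ties the relatively slow internal memory array to the fast I/O pins: each internal access fetches several bits per pin, which are then streamed out back-to-back. A simplified sketch of that relationship (the 500MHz core clock is an illustrative figure, not taken from any particular part):

```python
def io_data_rate(core_clock_mhz: float, prefetch_bits: int) -> float:
    """Per-pin I/O data rate in Mb/s: each internal array access fetches
    `prefetch_bits` per pin, which are serialized onto the bus, so the
    pin rate is the core (array) clock times the prefetch depth."""
    return core_clock_mhz * prefetch_bits

# The same 500MHz internal array with a 4-bit prefetch (GDDR3)
# versus an 8-bit prefetch (GDDR4):
print(io_data_rate(500, 4))  # 2000.0 Mb/s per pin
print(io_data_rate(500, 8))  # 4000.0 Mb/s per pin -- double the throughput
```

This doubling at an unchanged core clock is the same mechanism behind GDDR5's bandwidth claim later in the article.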

GDDR4’s 8-bit burst length might be one reason Nvidia ultimately passed on this type of memory: Nvidia’s processors support only 4-bit burst lengths. With ATI (now AMD) being the only major customer for GDDR4, just two manufacturers—Samsung and Hynix—decided to manufacture it. This circumstance has kept the price of the memory relatively high.

Successful Mutation?

GDDR5 is the next major development in graphics, and as with GDDR4, AMD’s ATI division has already paired it with its higher-end GPU: the Radeon HD 4870. Nvidia continues to hang back, professing satisfaction with the performance of GDDR3.

GDDR5 operates at just 1.5 volts, which should make the memory run cooler—a feature that could aid overclocking, reduce manufacturing costs, and extend battery life if used in a notebook PC. The new memory’s prefetch and burst length remain the same as GDDR4’s: 8 bits on both counts.

GDDR5 technology supports densities ranging from 512Mb to 2Gb, so it would require just four 2Gb chips to create a 1GB frame buffer (here again, however, real-world parts are currently limited to 512Mb and 1Gb). Boasting a raw theoretical data rate ranging from 3.6Gb/s to 6Gb/s per pin (although we won’t see that upper limit for several years), GDDR5 promises to deliver twice the memory bandwidth of GDDR3 running at the same clock frequency.

More practically, that high data rate also enables a GPU manufacturer to achieve nearly the same memory bandwidth with an economical 256-bit interface as it would by building a much more expensive 512-bit bus into its GPU.
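The math behind that trade-off is straightforward: aggregate bandwidth is per-pin data rate times bus width. A quick sketch with illustrative figures (these per-pin rates are chosen to demonstrate the point, not drawn from specific products):

```python
def bandwidth_gbs(per_pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Aggregate memory bandwidth in GB/s: per-pin data rate (Gb/s)
    multiplied by bus width (bits), divided by 8 bits per byte."""
    return per_pin_rate_gbps * bus_width_bits / 8

# GDDR5 at its 3.6Gb/s entry-level pin rate on an economical 256-bit bus...
print(bandwidth_gbs(3.6, 256))  # 115.2 GB/s
# ...matches slower memory at 1.8Gb/s per pin on a costly 512-bit bus.
print(bandwidth_gbs(1.8, 512))  # 115.2 GB/s
```

Doubling the per-pin rate lets the GPU designer halve the bus width—and with it the pin count and board routing complexity—without giving up bandwidth.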

Nvidia’s professed ambivalence toward GDDR5 hasn’t stopped a third major memory manufacturer—Qimonda—from joining Hynix and Samsung in the market for GDDR5 memory. Hmm, is anyone taking bets that Nvidia’s next-generation GPU will tap GDDR5?  
