Posted by r2d2 03/24/2009 @ 09:15

Tags : rambus, memory chips, semiconductors, technology

News headlines
Hynix Must Post $250 Million to Appeal Rambus Case - Bloomberg
By Joel Rosenblatt May 26 (Bloomberg) -- Hynix Semiconductor Inc., the world's second-largest memory-chip maker, must post a bond of $250 million to appeal a final legal judgment siding with Rambus Inc. in the companies' eight-year patent-infringement...
Rambus Hopes the Industry Adopts Its Technologies for Future ... - X-bit Labs
by Anton Shilov Rambus, a leading designer of memory and interface technologies, on Tuesday unveiled a list of its new technologies that should enable dynamic random access memory at beyond 3.20GHz clock-speeds next decade....
Rambus unfolds DRAM innovations beyond DDR3 - EE Herald
Rambus has released new techniques in improving DRAM speed, power consumption and performance to take them beyond current DDR3 data rate limits to 3200Mbps. 1. FlexPhase Technology - introduced in the XDR memory architecture, can enable higher data...
US FTC Drops Rambus Antitrust Case - PC World
The US Federal Trade Commission has officially dropped its antitrust case against memory maker Rambus, after losing a US Supreme Court case in February. The FTC announced late Thursday that it has dismissed the case, in which the agency accused Rambus...
STOCKS NEWS US-Rambus jumps after FTC dismissal - Reuters
Rambus shares climbed nearly 11 percent to $11.93. The PHLX Semiconductor index .SOXX rose 2.9 percent...
Small Cap Network Issues Analysis of Technology Stocks RMBS - TransWorldNews (press release)
Supreme Court nominee Sonia Sotomayor would be positive for patent-rich companies like Rambus Inc., American Superconductor Corporation, SanDisk Corp., and Macrovision Solutions Corp. The Small Cap Network (smallcapnetwork.com), a leading online...
FTC: Rambus Did Not Obtain Patents Illegally. - X-bit Labs
by Anton Shilov Rambus, a developer of memory and interconnection technologies, on Monday announced that the Federal Trade Commission (FTC) has issued an order dismissing the remainder of its case against Rambus. This follows a recent denial of the...
DigitalGlobe, Jack in the Box, MGM Mirage, Rambus - MarketWatch
By MarketWatch An earlier version of this story gave the incorrect day of the Genworth announcement. It was Wednesday. US-listed shares of Alcatel-Lucent (ALU)(FR:ALU) rose 3.2%. WestLB cut its rating on the telecom-equipment expert to reduce,...
Rambus Roars; Stock-Sale Plan Fells Forest City - Wall Street Journal
Rambus gained 1.52, or 14%, to 12.30, as the Federal Trade Commission dismissed the remainder of its antitrust case against the Los Altos, Calif., developer of technology for memory chips. On the earnings front, Aegean Marine Petroleum Network (NYSE)...
5 Cold Stocks Heating Up - Motley Fool
DRAM maker Rambus has generated a lot of ire for the industry patents it holds, and while some investors like onepremise have long chafed at its profound power, really, why shouldn't the company protect its intellectual property? Rambus, like PCI and...


Plaques on a wall at the Rambus headquarters in Los Altos, California, each marking a U.S. patent issued to the company.

Rambus Incorporated (NASDAQ: RMBS), founded in 1990, is a provider of high-speed interface technology. The company became particularly well known for its aggressive intellectual-property litigation practices following the introduction of DDR SDRAM memory.

Rambus, a California company, was incorporated in 1990 and re-incorporated in Delaware in 1997. The company was listed on NASDAQ in 1997 under the ticker symbol RMBS. As of February 2006, Rambus derived the majority of its annual revenue by licensing patents for chip interfaces to its customers.

Companies such as AMD, Elpida, Infineon, Intel, Matsushita, NECEL, Qimonda, Renesas, Sony, and Toshiba have taken licenses to Rambus patents for use in their own products.

Rambus' share price has ranged from a high of nearly $150 in 2000 to a low of approximately $3 in 2002; the stock underwent a 4:1 split on June 15, 2000.

As a company with no chip production facilities of its own, Rambus conducts business by filing patents and then licensing technologies. For example, Nintendo licensed Rambus memory for the Nintendo 64, as did Sony for use in the PlayStation 2. However, the most famous agreement was with Intel Corporation in 1996, under which Intel became obligated to use RDRAM as the primary memory technology for all Intel platforms until 2002.

In exchange for this, Intel was given a cut of Rambus's royalties, which Intel management anticipated would be a lucrative source of high-margin revenue. In reality, the RDRAM standard did not prove popular, and motherboard manufacturers simply bought chipsets that supported SDRAM technology from VIA Technologies rather than the more expensive RDRAM chipsets from Intel. Ironically, one of Rambus's most enduring achievements was thus to facilitate the rise of VIA Technologies by creating a lucrative market vacuum.

In addition to Intel, SiS also licensed RDRAM, which was used in the SiS R658 chipset. However, it was never popular. The proposed SiS R659, which would have supported four channels of 16-bit 1200 MHz RDRAM, was only ever available as a prototype.

As RDRAM was overtaken in the market, Rambus developed new high-speed memory interfaces and has continued to license them. Rambus has targeted the graphics card industry and licensed its technology to Sony for incorporation into the Cell processor as implemented in the PlayStation 3. It also developed PCI-E interfaces, and in 2006 it licensed its XDR DRAM memory controller to Toshiba.

The first PC motherboards with support for RDRAM debuted in 1999. They supported PC800 RDRAM, which operated at 400 MHz but transferred data on both the rising and falling edges of the clock for an effective rate of 800 MHz, and delivered 1600 MB/s of bandwidth over a 16-bit bus using a 184-pin RIMM form factor. This was significantly faster than the previous standard, PC133 SDRAM, which operated at 133 MHz and delivered 1066 MB/s of bandwidth over a 64-bit bus using a 168-pin DIMM form factor.
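The bandwidth figures quoted here follow directly from clock rate, transfers per clock, and bus width. A small sketch of the arithmetic (the function name is illustrative; PC133's clock is nominally 133.33 MHz, which is why marketing quoted 1066 MB/s rather than the 1064 computed below):

```python
def peak_bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak bandwidth in MB/s: millions of transfers per second times bytes per transfer."""
    return clock_mhz * transfers_per_clock * (bus_width_bits // 8)

# PC800 RDRAM: 400 MHz clock, data on both clock edges, 16-bit channel
print(peak_bandwidth_mb_s(400, 2, 16))   # 1600 MB/s
# PC133 SDRAM: 133 MHz clock, single data rate, 64-bit bus
print(peak_bandwidth_mb_s(133, 1, 64))   # 1064 MB/s (marketed as 1066)
```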

Some downsides of RDRAM technology, however, included significantly increased latency, heat output, manufacturing complexity, and cost. PC800 RDRAM operated with a latency of 45 ns, compared to only 7.5 ns for PC133 SDRAM. RDRAM memory chips also put out significantly more heat than SDRAM chips, necessitating heatsinks on all RIMM devices. RDRAM also placed additional high-speed interface circuitry on each memory chip, significantly increasing manufacturing complexity compared to SDRAM, which relied on a single memory controller located on the northbridge chipset. RDRAM was also two to three times the price of PC133 SDRAM due to manufacturing costs, license fees, and other market factors. DDR SDRAM, introduced in 2000, operated at an effective clock speed of 266 MHz and delivered 2100 MB/s over a 64-bit bus using a 184-pin DIMM form factor.

With the introduction of the i840 chipset, Intel added support for dual-channel PC800 RDRAM, doubling bandwidth to 3200 MB/s by increasing the bus width to 32-bit. This was followed in 2002 by the i850E chipset, which introduced PC1066 RDRAM, increasing total dual-channel bandwidth to 4200 MB/s. Also in 2002, Intel released the E7205 Granite Bay chipset, which introduced dual-channel DDR support for a total bandwidth of 4200 MB/s, but at a much lower latency than competing RDRAM. In 2003, Intel released the i875P chipset, and along with it dual-channel DDR400. With a total bandwidth of 6400 MB/s, it marked the end of RDRAM as a technology with competitive performance.
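The dual-channel totals are simply the per-channel figures scaled by the channel count; a brief check of the arithmetic for two of the chipsets above (nominal rates, illustrative function name):

```python
def total_bandwidth_mb_s(clock_mhz, transfers_per_clock, width_bits, channels):
    """Aggregate peak bandwidth across all memory channels, in MB/s."""
    return clock_mhz * transfers_per_clock * (width_bits // 8) * channels

# i840: dual-channel PC800 RDRAM (400 MHz, both edges, 16-bit per channel)
print(total_bandwidth_mb_s(400, 2, 16, 2))   # 3200 MB/s
# i875P: dual-channel DDR400 (200 MHz, both edges, 64-bit per channel)
print(total_bandwidth_mb_s(200, 2, 64, 2))   # 6400 MB/s
```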

Rambus survived the obsolescence of RDRAM and moved to support DDR and DDR2 in the area of video card technology and in particular, PCI-E. Rambus also developed and licensed its XDR RAM technology.

In the early 1990s, Rambus was invited to join JEDEC. Rambus had been trying to interest memory manufacturers in licensing its proprietary memory interface, and numerous companies had signed non-disclosure agreements to view Rambus's technical data. During the later Infineon v. Rambus trial, Infineon memos from a meeting with representatives of other manufacturers surfaced, including the line “One day all computers will be built this way, but hopefully without the royalties going to Rambus”, and continuing with a strategy discussion for reducing or eliminating the royalties to be paid to Rambus. As Rambus continued its participation in JEDEC, it became apparent that the company was not prepared to agree to JEDEC’s patent policy, which requires owners of patents included in a standard to license that technology under terms that are ‘reasonable and non-discriminatory’, and Rambus withdrew from the organization in 1995. Memos from Rambus at that time showed it was tailoring new patent applications to cover features of SDRAM then under discussion. This was public knowledge (JEDEC meetings were not considered secret) and perfectly legal for patent owners who have patented the underlying innovations, but it was seen as evidence of bad faith by the jury in the first Infineon v. Rambus trial. The Federal Court of Appeals rejected this theory of bad faith in its decision overturning the fraud verdict Infineon obtained in the first trial (see below).

In 2000, Rambus began filing lawsuits against the largest memory manufacturers, claiming that they owned SDRAM and DDR technology. Seven manufacturers, including Samsung, quickly settled with Rambus and agreed to pay royalties on SDRAM and DDR memory. When Rambus sued Infineon Technologies, however, Micron and Hynix joined forces with Infineon to fight the lawsuit, countersuing with claims of fraud. This trio of memory manufacturers became known as “The Three Amigos”. In May 2001, Rambus was found guilty of fraud for having claimed that they owned SDRAM and DDR technology, and all infringement claims against memory manufacturers were dismissed. In January 2003, the Federal Court of Appeals overturned the fraud verdict of the jury trial in Virginia under Judge Payne, issued a new claims construction, and remanded the case back to Virginia for re-trial on infringement. In October 2003, the US Supreme Court refused to hear the case. Thus, the case returned to Virginia per the Federal Court of Appeals ruling.

In January 2005, Rambus filed four more lawsuits against memory-chip makers Hynix Semiconductor, Nanya Technology, Inotera Memories, and Infineon Technologies, claiming that DDR2, GDDR2, and GDDR3 chips contain Rambus technology. In March 2005, however, Rambus's patent-infringement claim against Infineon was dismissed: Rambus had been accused of shredding key documents prior to the court hearings, and the judge agreed. The dismissal sent Rambus to the settlement table with Infineon, which agreed to pay Rambus quarterly license fees of $5.9m; in return, both companies ceased all litigation against each other. The agreement runs from November 2005 to November 2007; after that date, if Rambus has enough other agreements in place, Infineon may make extra payments of up to $100m. Currently, cases involving Micron and Hynix remain in court. In June 2005, Rambus also sued one of its strongest proponents, Samsung, the world's largest memory manufacturer, and terminated Samsung's license. Samsung had promoted Rambus's RDRAM and currently remains a licensee of Rambus's XDR memory.

In May 2002, the United States Federal Trade Commission (FTC) opened an investigation into Rambus at the behest of the memory manufacturers, and in June 2002 it formally accused the company of anti-competitive behavior. Specifically, the FTC complaint asserted that through the use of patent continuations and divisionals, Rambus pursued a strategy of expanding the scope of its patent claims to encompass the emerging SDRAM standard. The antitrust allegations went to trial in the summer of 2003. The FTC's chief administrative-law judge, Stephen J. McGuire, dismissed the claims against Rambus in 2004, saying, according to the company, that the memory industry had no reasonable alternatives to Rambus technology and was aware of the potential scope of Rambus's patent rights. Soon after, FTC investigators filed a brief to appeal that ruling.

In 2004, Infineon pled guilty to price-fixing in an attempt to manipulate the market spot-price of all major DRAM types. They later paid a fine of $160 million. Hynix and Samsung followed suit in 2005 and paid $185 million and $300 million respectively. Elpida is the most recent company to plead guilty and paid a fine of $85 million, the lowest of all memory manufacturers. It is widely believed that the evidence collected during the FTC's investigation of Rambus led directly to the guilty pleas.

On August 2, 2006, the Federal Trade Commission overturned McGuire's ruling, stating that Rambus illegally monopolized the memory industry under section 2 of the Sherman Antitrust Act, and also practiced deception that violated section 5 of the Federal Trade Commission Act.

On February 5, 2007, the U.S. Federal Trade Commission issued a ruling limiting the maximum royalties that Rambus may demand from manufacturers of dynamic random access memory (DRAM): 0.5% for DDR SDRAM for three years from the date the Commission's order was issued, and zero thereafter, while the maximum royalty for SDRAM was set at 0.25%. The Commission stated that setting the SDRAM rate at half the DDR SDRAM rate reflected the fact that DDR SDRAM utilizes four of the relevant Rambus technologies, while SDRAM uses only two. In addition to collecting fees for DRAM chips, Rambus will also be able to receive royalties of 0.5% for SDRAM and 1.0% for DDR SDRAM memory controllers or other non-memory-chip components. However, the ruling did not prohibit Rambus from collecting royalties on products based on (G)DDR2 SDRAM and other post-DDR JEDEC memory standards. Rambus has appealed the FTC opinion and remedy and awaits a court date for the appeal.

On July 30, 2007, the European Commission launched antitrust investigations against Rambus, taking the view that Rambus engaged in intentional deceptive conduct in the context of the standard-setting process, for example by not disclosing the existence of the patents which it later claimed were relevant to the adopted standard. This type of behaviour is known as a "patent ambush". Against this background, the Commission provisionally considered that Rambus breached the EC Treaty's rules on abuse of a dominant market position (Article 82 EC Treaty) by subsequently claiming unreasonable royalties for the use of those patents. The Commission's preliminary view is that without its "patent ambush", Rambus would not have been able to charge the royalty rates it currently does.

On March 26, 2008, a jury of the U.S. District Court in San Jose determined that Rambus acted properly while a member of the standard-setting organization JEDEC during its participation in the early 1990s, finding that the memory manufacturers did not meet their burden of proving antitrust and fraud claims.

On April 22, 2008, the DC Court of Appeals overturned the FTC's reversal of McGuire's 2004 ruling, saying that the FTC had not established that Rambus had harmed competition.

On April 29, 2008, the Court of Appeals for the Federal Circuit vacated the order of the United States District Court for the Eastern District of Virginia and held that the case with Samsung should be dismissed: Judge Robert E. Payne's findings critical of Rambus concerned a case that had already been settled, and thus had no legal standing.

On January 9, 2009, a Delaware federal judge ruled that Rambus could not enforce its patents against Micron Technology Inc., finding a "clear and convincing" showing of bad faith and holding that Rambus's destruction of key related documents nullified its right to enforce those patents against Micron.

On February 23, 2009, the U.S. Supreme Court rejected the bids by the FTC to impose royalty sanctions on Rambus via anti-trust penalties.


Dynamic random access memory

Common DRAM packages.  From top to bottom: DIP, SIPP, SIMM 30 pin, SIMM 72 pin, DIMM (168-pin), DDR DIMM (184-pin).

Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.

The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Unlike Flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when the power supply is removed.

In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell using a transistor gate and a tunnel-diode latch. They later replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, consisting of 80 transistors, 64 resistors, and 4 diodes. In 1966, DRAM was invented by Dr. Robert Dennard at the IBM Thomas J. Watson Research Center; he was awarded U.S. patent number 3,387,286 in 1968. Capacitors had been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube, and the Selectron tube.

The Toshiba "Toscal" BC-1411 electronic calculator, which went into production in November 1965, uses a form of dynamic RAM built from discrete components.

In 1969, Honeywell asked Intel to make a DRAM using a 3-transistor cell that they had developed. This became the Intel 1102 (1024x1) in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design (in secrecy, to avoid conflict with Honeywell). This became the first commercially available DRAM memory, the Intel 1103 (1024x1), in October 1970 (despite initial problems with low yield until the fifth revision of the masks).

DRAM is usually arranged in a square array of one capacitor and transistor per cell. The illustrations to the right show a simple example with only 4 by 4 cells (modern DRAM can be thousands of cells in length/width).

The long lines connecting each row are known as word lines. Each column is actually composed of two bit lines, each one connected to every other storage cell in the column. (The illustration to the right does not include this important detail.) They are generally known as the + and − bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit lines. That is, the first inverter is connected from the + bit line to the − bit line, and the second is connected from the − bit line to the + bit line. This is an example of positive feedback, and the arrangement is only stable with one bit line high and one bit line low.

To write to memory, the row is opened and a given column's sense amplifier is temporarily forced to the desired state, so it drives the bit line which charges the capacitor to the desired value. Due to the positive feedback, the amplifier will then hold it stable even after the forcing is removed. During a write to a particular cell, the entire row is read out, one value changed, and then the entire row is written back in, as illustrated in the figure to the right.
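The read-modify-write behaviour described above can be sketched as a toy model. This is purely illustrative: a real array latches the row in analog sense amplifiers rather than Python lists, but the shape of the operation is the same.

```python
class ToyDRAM:
    """Toy model of a DRAM array: reads are destructive at the row level,
    so writing one bit means reading the whole row, changing one value,
    and writing the whole row back."""
    def __init__(self, rows, cols):
        self.array = [[0] * cols for _ in range(rows)]

    def open_row(self, row):
        # Sensing destroys the stored charge; the row lives in the latch now.
        latched = self.array[row][:]
        self.array[row] = [None] * len(latched)  # cells are now invalid
        return latched

    def write_bit(self, row, col, value):
        latched = self.open_row(row)   # entire row read out
        latched[col] = value           # one value changed
        self.array[row] = latched      # entire row written back

d = ToyDRAM(4, 4)                      # the 4-by-4 example from the text
d.write_bit(2, 3, 1)
print(d.array[2])                      # [0, 0, 0, 1]
```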

For asynchronous DRAM, the generally quoted number is the /RAS access time: the time to read a random bit from a precharged DRAM array. The time to read additional bits from an open page is much less.

When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in 5 clock cycles, and additional reads within the same page every 2 clock cycles. This was generally described as "5-2-2-2" timing, as bursts of 4 reads within a page were common.
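The "5-2-2-2" figure can be reproduced by rounding each access time up to whole clock cycles. The ~20 ns page-mode cycle time used below is an assumed value consistent with the quoted timing, not a figure from the text:

```python
import math

def cycles(time_ns, clock_mhz):
    """Round an access time up to a whole number of clock cycles."""
    period_ns = 1000 / clock_mhz
    return math.ceil(time_ns / period_ns)

# 50 ns DRAM driven by a 100 MHz (10 ns cycle) state machine:
# first access, then three page-mode accesses of ~20 ns each (assumed)
burst = [cycles(50, 100)] + [cycles(20, 100)] * 3
print("-".join(map(str, burst)))   # 5-2-2-2
```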

When describing synchronous memory, timing is also described by clock-cycle counts separated by hyphens, but the numbers have very different meanings: they represent tCL–tRCD–tRP–tRAS in multiples of the DRAM clock cycle time. Note that the DRAM clock runs at half the data transfer rate when double-data-rate signaling is used. JEDEC standard PC3200 timing is 3-4-4-8 with a 200 MHz clock, while premium-priced high-performance PC3200 DDR DRAM DIMMs might be operated at 2-2-2-5 timing.

It is worth noting that the improvement over 11 years is not that large. Minimum random access time has improved from tRAC = 50 ns to tRCD + tCL = 23.5 ns, and even the premium 20 ns variety is only 2.5× better. CAS latency has improved even less, from tCAC = 13 ns to 10 ns. However, the DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1600 Mword/s), while the EDO DRAM can output one word per tPC = 20 ns (50 Mword/s).
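The nanosecond latencies derive directly from the cycle counts and the clock period; a quick sketch using the PC3200 timings quoted earlier:

```python
def random_access_ns(tRCD, tCL, clock_mhz):
    """Minimum random-access latency: (tRCD + tCL) clock periods, in ns."""
    return (tRCD + tCL) * 1000 / clock_mhz

# JEDEC PC3200 at a 200 MHz clock, 3-4-4-8 timing (tCL=3, tRCD=4)
print(random_access_ns(4, 3, 200))   # 35.0 ns
# premium 2-2-2-5 parts reach the "20 ns variety" mentioned above
print(random_access_ns(2, 2, 200))   # 20.0 ns
```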

Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. It was initially thought that this was mainly due to alpha particles emitted by contaminants in chip packaging material, but research has shown that the majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic-ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read and write them. There is some concern that as DRAM density increases further, and thus the components on DRAM chips get smaller, while operating voltages continue to fall, DRAM chips will be affected by such radiation more frequently, since lower-energy particles will be able to change a memory cell's state. On the other hand, smaller cells make smaller targets, and moves to technologies such as SOI may make individual cells less susceptible, counteracting or even reversing this trend.

This problem can be mitigated by using DRAM modules that include extra memory bits and memory controllers that exploit these bits. These extra bits are used to record parity or to use an error-correcting code (ECC). Parity allows the detection of a single-bit error (actually, any odd number of wrong bits). The most common error correcting code, Hamming code, allows a single-bit error to be corrected and (in the usual configuration, with an extra parity bit) double-bit errors to be detected.
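To make the single-error-correction idea concrete, here is the classic Hamming(7,4) code, a miniature cousin of the codes used in ECC memory. Real ECC modules typically protect a 64-bit word with 8 check bits; this small example is for illustration only.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with parity at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome points at the flipped bit."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1           # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]    # extract the data bits

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                       # simulate a single-bit soft error
print(hamming74_correct(codeword))     # [1, 0, 1, 1] -- corrected
```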

Error detection and correction in computer systems seems to go in and out of fashion. Seymour Cray famously said "parity is for farmers" when asked why he left this out of the CDC 6600. He included parity in the CDC 7600, and reputedly said "I learned that a lot of farmers buy computers." The original IBM PC and all PCs until the early 1990s used parity checking. Later ones mostly did not. Wider memory buses make parity and especially ECC more affordable. Many current microprocessor memory controllers, including almost all AMD 64-bit offerings, support ECC, but many motherboards and in particular those using low-end chipsets do not.

Error detection and correction depends on an expectation of the kinds of errors that occur. Implicitly, we have assumed that the failure of each bit in a word of memory is independent and hence that two simultaneous errors are improbable. This used to be the case when memory chips were one bit wide (typical in the first half of the 1980s). Now many bits are in the same chip. This weakness does not seem to be widely addressed; one exception is Chipkill.

Tests give widely varying error rates, but about 10⁻¹² upsets per bit-hour is typical, roughly one bit error per month per gigabyte of memory.

In most computers used for serious scientific or financial computing and as servers, ECC is the rule rather than the exception, as can be seen by examining manufacturers' specifications.

For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game consoles (such as the PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories and data buffers in hard disks, normally use static RAM (SRAM).

While the fundamental DRAM cell and array have maintained the same basic structure (and performance) for many years, there have been many different interfaces for communicating with DRAM chips. When one speaks of "DRAM types", one is generally referring to the interface used.

The classic asynchronous interface provides direct control of internal timing. When /RAS is driven low, a /CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and /RAS must not be returned high until the storage cells have been refreshed. When /RAS is driven high, it must be held high long enough for precharging to complete.

VRAM is a dual-ported variant of DRAM which was once commonly used to store the frame-buffer in some graphics adaptors.

It was invented by F. Dill and R. Matick at IBM Research in 1980, with a patent issued in 1985 (US Patent 4,541,075). The first commercial use of VRAM was in the high resolution graphics adapter introduced in 1986 by IBM with the PC/RT system.

VRAM has two sets of data output pins, and thus two ports that can be used simultaneously. The first port, the DRAM port, is accessed by the host computer in a manner very similar to traditional DRAM. The second port, the video port, is typically read-only and is dedicated to providing a high bandwidth data channel for the graphics chipset.

Typical DRAM arrays normally access a full row of bits (i.e. a word line), up to 1024 bits at a time, but use only one or a few of these for actual data, discarding the remainder. Since DRAM cells are read destructively, each bit accessed must be sensed and re-written, so typically 1024 sense amplifiers are used. VRAM operates by not discarding the excess bits that must be accessed, but putting them to full use in a simple way. If each horizontal scan line of a display is mapped to a full word, then upon reading one word and latching all 1024 bits into a separate row buffer, these bits can subsequently be serially streamed to the display circuitry. This leaves the DRAM array free for other accesses (reads or writes) for many cycles, until the row buffer is almost depleted. A complete DRAM read cycle is only required to fill the row buffer, leaving most DRAM cycles available for normal accesses.

Such operation is described in the paper "All points addressable raster display memory" by R. Matick, D. Ling, S. Gupta, and F. Dill, IBM Journal of R&D, Vol. 28, No. 4, July 1984, pp. 379–393. To use the video port, the controller first uses the DRAM port to select the row of the memory array that is to be displayed. The VRAM then copies that entire row to an internal row-buffer which is a shift-register. The controller can then continue to use the DRAM port for drawing objects on the display. Meanwhile, the controller feeds a clock called the shift clock (SCLK) to the VRAM's video port. Each SCLK pulse causes the VRAM to deliver the next datum, in strict address order, from the shift-register to the video port. For simplicity, the graphics adapter is usually designed so that the contents of a row, and therefore the contents of the shift-register, corresponds to a complete horizontal line on the display.
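The dual-port operation described above can be sketched as a toy model (illustrative only; a real VRAM latches a full row, e.g. 1024 bits, into its shift register, and the SCLK is a hardware signal rather than a method call):

```python
class ToyVRAM:
    """Toy dual-port VRAM: a row copied into an internal shift register
    can be streamed out serially while the DRAM port stays free."""
    def __init__(self, rows):
        self.rows = rows               # the DRAM array (list of rows)
        self.shift_register = []

    def load_row(self, index):
        # One full DRAM read cycle: latch the whole row for the video port.
        self.shift_register = list(self.rows[index])

    def sclk(self):
        # Video port: each shift-clock pulse delivers the next datum in order.
        return self.shift_register.pop(0)

v = ToyVRAM([[10, 20, 30], [40, 50, 60]])
v.load_row(1)
print([v.sclk() for _ in range(3)])    # [40, 50, 60]
```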

In the late 1990s, standard DRAM technologies (e.g. SDRAM) became cheap, dense, and high performance enough to completely displace VRAM, even though it was only single-ported and some memory bits were wasted.

Fast page mode DRAM is also called FPM DRAM, Page mode DRAM, Fast page mode memory, or Page mode memory.

In page mode, a row of the DRAM can be kept "open" by holding /RAS low while performing multiple reads or writes with separate pulses of /CAS, so that successive reads or writes within the row do not suffer the delay of precharging and accessing the row. This increases the performance of the system when reading or writing bursts of data.

Static column is a variant of page mode in which the column address does not need to be strobed in, but rather, the address inputs may be changed with /CAS held low, and the data output will be updated accordingly a few nanoseconds later.

Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of /CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth /CAS edges; they are generated internally starting with the address supplied for the first /CAS edge.

Classic asynchronous DRAM is refreshed by opening each row in turn. This can be done by supplying a row address and pulsing /RAS low; it is not necessary to perform any /CAS cycles. An external counter is needed to iterate over the row addresses in turn.

For convenience, the counter was quickly incorporated into RAM chips themselves. If the /CAS line is driven low before /RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as /CAS-before-/RAS (CBR) refresh.
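The internal counter's behaviour can be sketched in a few lines (a toy model; a real part refreshes one row per CBR cycle and wraps at the row count):

```python
class RefreshCounter:
    """Toy model of the internal row counter used for /CAS-before-/RAS
    refresh: each CBR cycle refreshes the next row in sequence."""
    def __init__(self, num_rows):
        self.num_rows = num_rows
        self.next_row = 0
        self.refreshed = []            # log of refreshed rows, for the demo

    def cbr_refresh(self):
        self.refreshed.append(self.next_row)
        self.next_row = (self.next_row + 1) % self.num_rows  # wrap around

ctr = RefreshCounter(4)
for _ in range(6):                     # six CBR cycles on a 4-row array
    ctr.cbr_refresh()
print(ctr.refreshed)                   # [0, 1, 2, 3, 0, 1]
```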

This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.

EDO DRAM is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance. It was 5% faster than Fast Page Mode DRAM, which it began to replace in 1993.

To be precise, EDO DRAM begins data output on the falling edge of /CAS, but does not stop the output when /CAS rises again. It holds the output valid (thus extending the data output time) until either /RAS is deasserted, or a new /CAS falling edge selects a different column address.

Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle. Otherwise, each sequential RAM access within the same page takes two clock cycles instead of three, once the page has been selected. EDO's performance and capabilities allowed it to somewhat replace the then-slow L2 caches of PCs. It created an opportunity to reduce the immense performance loss associated with a lack of L2 cache, while making systems cheaper to build. This was also good for notebooks due to difficulties with their limited form factor, and battery life limitations. An EDO system with L2 cache was tangibly faster than the older FPM/L2 combination.

Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.

EDO was sometimes referred to as Hyper Page Mode.

Much equipment taking 72-pin SIMMs could use either FPM or EDO. Problems were possible, particularly when mixing FPM and EDO. Early Hewlett-Packard printers had FPM RAM built in; some, but not all, models worked if additional EDO SIMMs were added.

An evolution of EDO, Burst EDO DRAM could process four memory addresses in one burst, for a maximum of 5-1-1-1, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two components. During a memory-read operation, the first component accessed the data from the memory array to the output stage (second latch); the second component drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO.
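The burst-timing comparison can be tallied directly. The FPM figure below is a typical value, not one quoted in the text; the EDO and BEDO figures match the "5-2-2-2" and "5-1-1-1" timings discussed above:

```python
def burst_cycles(timing):
    """Total bus clocks to complete a 4-beat burst, e.g. (5, 1, 1, 1) -> 8."""
    return sum(timing)

fpm  = (5, 3, 3, 3)    # typical fast page mode (assumed typical figure)
edo  = (5, 2, 2, 2)    # EDO overlaps data output with the next access
bedo = (5, 1, 1, 1)    # BEDO's address counter plus pipeline stage
print(burst_cycles(fpm), burst_cycles(edo), burst_cycles(bedo))   # 14 11 8
```

Note that 11 − 8 = 3 clocks is exactly the "additional three clocks over optimally designed EDO" cited above.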

Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had already made a significant investment in synchronous DRAM (SDRAM), introduced at about the same time. Even though BEDO was superior to SDRAM in some respects, SDRAM quickly gained traction and displaced it, so BEDO never became popular.

Multibank DRAM (MDRAM) applies the interleaving technique used for main memory to second-level cache memory, providing a cheaper and faster alternative to SRAM. The chip splits its capacity into small banks of 256 kB and allows operations to two different banks in a single clock cycle.

This memory was primarily used in graphic cards with Tseng Labs ET6x00 chipsets, and was made by MoSys. Boards based upon this chipset often used the unusual RAM size configuration of 2.25 MiB, owing to MDRAM's ability to be implemented in various sizes more easily. This size of 2.25 MiB allowed 24-bit color at a resolution of 1024×768, a very popular display setting in the card's time.

Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
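The two SGRAM write features described above can be sketched in software. This is an illustrative model on plain integers, not SGRAM's actual command interface, which implements these operations in hardware:

```python
# Sketch of SGRAM's two graphics conveniences, modeled on Python integers.
def masked_write(old, data, mask):
    """Bit masking: write only the bit planes selected by mask,
    leaving the other bits of the old word untouched."""
    return (old & ~mask) | (data & mask)

def block_write(framebuffer, start, count, color):
    """Block write: fill a run of memory words with a single color value."""
    for i in range(start, start + count):
        framebuffer[i] = color

# Update only the low nibble of a byte:
print(bin(masked_write(0b10101010, 0b11111111, 0b00001111)))  # 0b10101111
```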

Single data rate (SDR) SDRAM is a synchronous form of DRAM that transfers data once per clock cycle.


Double data rate (DDR) SDRAM was a later development of SDRAM, used in PC memory beginning in 2000. DDR2 SDRAM was originally a minor enhancement of DDR SDRAM that mainly afforded higher clock rates and somewhat deeper pipelining, but it went on to displace DDR as the mainstream standard. DDR3 SDRAM, introduced in 2007, was in turn expected to rapidly replace both DDR and DDR2.

Some DRAM components have a "self-refresh mode". While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, not to allow operation without a separate DRAM controller as is the case with PSRAM.

An embedded variant of pseudostatic RAM is sold by MoSys under the name 1T-SRAM. It is technically DRAM, but behaves much like SRAM. It is used in the Nintendo GameCube and Wii consoles.

1T DRAM is commercialized under the name Z-RAM.

Note that the classic one-transistor/one-capacitor (1T/1C) DRAM cell is also sometimes referred to as "1T DRAM".

Reduced Latency DRAM is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth. RLDRAM is mainly designed for networking and caching applications.

Although dynamic memory is only guaranteed to retain its contents when supplied with power and refreshed every 64 ms, the memory cell capacitors will often retain their values for significantly longer, particularly at low temperatures.

Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.

This property can be used to recover "secure" data kept in memory, either by quickly rebooting the computer and dumping the contents of RAM, or by cooling the chips and transferring them to a different computer. Such an attack has been demonstrated to circumvent popular disk encryption systems, including the open-source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault.



Direct Rambus DRAM or DRDRAM (sometimes just called Rambus DRAM or RDRAM) is a type of synchronous dynamic RAM, designed by the Rambus Corporation.

The first PC motherboards with support for RDRAM debuted in 1999. They supported PC-800 RDRAM, which operated at 400 MHz and delivered 1600 MB/s of bandwidth over a 16-bit bus using a 184-pin RIMM form factor. Data is transferred on both the rising and falling edges of the clock signal, a technique known as double data rate. For marketing reasons the physical clock rate was multiplied by two (because of the DDR operation); therefore, the 400 MHz Rambus standard was named PC-800. This was significantly faster than the previous standard, PC-133 SDRAM, which operated at 133 MHz and delivered 1066 MB/s of bandwidth over a 64-bit bus using a 168-pin DIMM form factor.
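The bandwidth figures quoted above follow directly from clock rate, transfers per clock, and bus width. A small sketch, using the decimal MB/s convention of the text (note that PC-133's nominal 133 MHz is really 133.33 MHz, hence the quoted 1066 MB/s versus the 1064 computed here):

```python
# Peak-bandwidth arithmetic for the RDRAM and SDRAM figures above.
def peak_mb_s(clock_mhz, transfers_per_clock, bus_bits):
    """Peak bandwidth in MB/s: clock * transfers/clock * bytes per transfer."""
    return clock_mhz * transfers_per_clock * bus_bits // 8

print(peak_mb_s(400, 2, 16))  # PC-800 RDRAM: 400 MHz, DDR, 16-bit -> 1600
print(peak_mb_s(133, 1, 64))  # PC-133 SDRAM: 133 MHz, SDR, 64-bit -> 1064
```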

If a mainboard has a dual- or quad-channel memory subsystem, all of the memory channels must be upgraded simultaneously. Sixteen-bit modules provide one channel of memory, while 32-bit modules provide two. A dual-channel mainboard accepting 16-bit modules must therefore have RIMMs added or removed in pairs, whereas one accepting 32-bit modules can have RIMMs added or removed singly.

Rambus's RDRAM saw use in several video game consoles, beginning in 1996 with the Nintendo 64. The Nintendo console utilized 4 MB of RDRAM running with a 500 MHz clock on a 9-bit bus, providing 500 MB/s of bandwidth. RDRAM allowed the N64 to be equipped with a large amount of memory bandwidth at lower cost, since its narrow bus lets circuit board designers use simpler design techniques. The memory, however, was disliked for its high random access latencies. In the N64, the RDRAM modules are cooled by a passive heatspreader assembly.

Sony used RDRAM in the PlayStation 2, which was equipped with 32 MB of the memory in a dual-channel configuration, giving 3200 MB/s of available bandwidth. The PlayStation 3 uses 256 MB of Rambus's XDR DRAM, which can be considered a successor to RDRAM, on a 64-bit bus at 400 MHz with an octal data rate (cf. double data rate), for an effective rate of 3.2 GHz and a large 204.8 Gbit/s (25.6 GB/s) of bandwidth.

Cirrus Logic implemented RDRAM support in its Laguna graphics chips, with two members of the family: the 2D-only 5462 and the 5464, a 2D chip with 3D acceleration. RDRAM offered a cost advantage while being potentially faster than competing DRAM technologies thanks to its high bandwidth. The chips were used on the Creative Graphics Blaster MA3xx series, among others.

Compared to other contemporary standards, RDRAM showed increases in latency, heat output, manufacturing complexity, and cost. Some criticized RDRAM's larger die size, required to house the added interface, which resulted in a 10-20 percent price premium at 16-megabit densities and about a 5 percent penalty at 64 Mbit.

PC-800 RDRAM operated with a latency of 45 ns, which was more latency than other comparable DRAM technologies of the time. RDRAM memory chips also put out significantly more heat than SDRAM chips, necessitating heatspreaders on all RIMM devices. RDRAM includes a memory controller on each memory chip, significantly increasing manufacturing complexity compared to SDRAM, which used a single memory controller located on the northbridge chipset. RDRAM was also two to three times the price of PC-133 SDRAM due to a combination of high manufacturing costs and high license fees. PC-2100 DDR SDRAM, introduced in 2000, operated with a clock rate of 133 MHz and delivered 2100 MB/s over a 64-bit bus using a 184-pin DIMM form factor.

When multiple RIMMs are installed on a memory channel, the performance impact is greater than with SDRAM designs, because data from the farthest memory module must travel through all of the memory chips physically closer to the memory controller, instead of through just one or two chips as on typical SDRAM motherboards.

The design of many common Rambus memory controllers dictated that memory modules be installed in sets of two, with any remaining open memory slots filled with continuity modules (CRIMMs). These provide no extra memory and serve only to propagate the signal onward to the termination resistors on the motherboard, rather than leaving a dead end where signals would reflect.

With the introduction of the i840 (Pentium III), i850 (Pentium 4), and i860 (Pentium 4 Xeon) chipsets, Intel added support for dual-channel PC-800 RDRAM, doubling bandwidth to 3200 MB/s by increasing the bus width to 32 bits. This was followed in 2002 by the i850E chipset, which introduced PC-1066 RDRAM, increasing total dual-channel bandwidth to 4200 MB/s. Later in 2002, Intel released the E7205 "Granite Bay" chipset, which introduced dual-channel DDR support for a total bandwidth of 4200 MB/s, at a slightly lower latency than competing RDRAM.

To achieve RDRAM's 800 MHz data rate, each memory module runs on only a 16-bit bus, instead of the 64-bit bus of a contemporary SDRAM DIMM. Furthermore, not all production RDRAM modules available at the time of the Intel 820 launch could run at 800 MHz; many ran at slower clock rates.

Benchmark tests conducted in 1998 showed most everyday applications running marginally slower with RDRAM. In 1999, benchmarks comparing the Intel i840 and i820 RDRAM chipsets with the Intel i440BX SDRAM chipset led to the conclusion that RDRAM's performance gain did not justify its premium price over SDRAM, except for use in workstations. In 2002, benchmarks showed that single-channel DDR400 SDRAM modules could closely match dual-channel 1066 MHz RDRAM in everyday applications.

In November 1996, Rambus entered into a development and license contract with Intel. Intel announced to the Wintel development community that it would support only the Rambus memory interface for its microprocessors, and was granted rights to purchase one million shares of Rambus stock at $10 per share.

In 1998, Intel planned a $500 million equity investment in Micron Technology to accelerate the adoption of Direct RDRAM. Other investments included a $100 million payment to Samsung Electronics in 1999.

As a transition strategy, Intel planned to support PC-100 SDRAM DIMMs on future Intel 82x chipsets using a Memory Translator Hub (MTH). In 2000, Intel recalled Intel 820 motherboards containing the MTH because, during simultaneous switching, the MTH could produce noise that might cause the computer to hang mysteriously or reboot spontaneously. Since then, no production Intel 820 motherboards have contained an MTH.

In 2000, Intel subsidized RDRAM by bundling retail boxes of Pentium 4 CPUs with two RIMMs. Intel began phasing out these subsidies in 2001.

In 2003, Intel introduced the Intel 865 and Intel 875 chipsets, marketed as high-end replacements for the Intel 850. Furthermore, Intel's future memory roadmap no longer included Rambus.

Few DRAM manufacturers ever obtained a license to produce RDRAM, and those who did failed to make enough RIMMs to satisfy PC market demand, so RIMMs remained priced above SDRAM DIMMs even when memory prices skyrocketed during 2002. During RDRAM's decline, DDR continued to advance in performance while remaining cheaper than RDRAM. Meanwhile, a massive price war in the DDR SDRAM market allowed DDR SDRAM to be sold at or below production cost; DDR SDRAM makers were losing large amounts of money, while RDRAM suppliers earned a good profit on every module sold. Although RDRAM is still produced today, few motherboards support it. Between 2002 and 2005, RDRAM's market share never exceeded 5%.

In 2004, it was revealed that the SDRAM manufacturers Infineon, Hynix, Samsung, Micron, and Elpida had entered into a price-fixing scheme. Infineon, Hynix, Samsung, and Elpida all entered plea agreements with the US DOJ, pleading guilty to price fixing between 1999 and 2002. They paid fines totalling over $700 million, and numerous executives were sentenced to jail time.

Rambus has alleged that, as part of the conspiracy, the DRAM manufacturers acted to depress the price of DDR memory in an effort to prevent RDRAM from succeeding in the market. Those allegations are the subject of lawsuits by Rambus against the various companies.



XDR2 DRAM is a type of dynamic random access memory offered by Rambus. It was announced on July 7, 2005, and its specification was released on March 26, 2008. Rambus designed XDR2 as an evolution of, and successor to, XDR DRAM.

XDR2 DRAM is intended for use in high-end graphics cards and networking equipment.

As a fabless semiconductor company, Rambus only produces a design; it must make deals with memory manufacturers to produce XDR2 DRAM chips, and there has been a notable lack of interest in doing so.

In addition to a higher clock rate (up to 800 MHz), the XDR2 differential data lines transfer data at 16 times the system clock rate, transferring 16 bits per pin per clock cycle. This "Hex Data Rate" is twice XDR's 8× multiplier. The basic burst size has also doubled.

Unlike XDR, memory commands are also transmitted over differential point-to-point links at this high data rate. The command bus varies between 1 and 4 bits wide. Even though each bit requires 2 wires, this is still less than the 12-wire XDR request bus, but it must grow with the number of chips addressed.

There is a basic limit to how frequently data can be fetched from the currently open row: typically 200 MHz for standard SDRAM and 400-600 MHz for high-performance graphics memory. Increasing interface speeds therefore require fetching larger blocks of data to keep the interface busy without violating the internal DRAM frequency limit. At 16×800 MHz, staying within a 400 MHz column-access rate requires a burst of 32 transfers. Multiplied by a 32-bit-wide chip, this is a minimum fetch of 128 bytes, inconveniently large for many applications.
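The arithmetic in this paragraph can be written out directly. A sketch under the stated assumptions (800 MHz clock, 16× data rate, 400 MHz column-access limit, 32-bit-wide chip):

```python
# Minimum fetch size implied by the XDR2 interface speed versus the
# internal column-access rate limit, per the reasoning above.
def min_fetch_bytes(clock_mhz, data_rate_mult, column_rate_mhz, chip_width_bits):
    """Smallest burst (in bytes) that keeps the interface busy."""
    # Interface transfers completed during one column access:
    transfers = (clock_mhz * data_rate_mult) // column_rate_mhz
    return transfers * chip_width_bits // 8

# 16 x 800 MHz interface, 400 MHz column rate, 32-bit chip -> 128 bytes
print(min_fetch_bytes(800, 16, 400, 32))
```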

Typical memory chips are internally divided into 4 quadrants, with left and right halves connected to different halves of the data bus, and top or bottom halves being selected by bank number. (Thus, in a typical 8-bank DRAM, there would be 4 half-banks per quadrant.) XDR2 permits independently addressing each quadrant, so the two halves of the data bus can fetch data from different banks. Additionally, the data fetched from each half-bank is only half of what is needed to keep the data bus full; accesses to an upper half-bank must be alternated with access to a lower half-bank.

This effectively doubles the number of banks and reduces the minimum data access size by a factor of 4, albeit with the limitation that accesses must be spread uniformly across all 4 quadrants.



XDR DRAM, or extreme data rate dynamic random access memory, is a high-performance RAM interface and successor to the Rambus RDRAM it is based on, competing with the rival DDR2 SDRAM and GDDR4 technologies. XDR was designed to be effective in small, high-bandwidth consumer systems, high-performance memory applications, and high-end GPUs. It eliminates the unusually high latency that plagued early forms of RDRAM. XDR also places heavy emphasis on per-pin bandwidth, which can help control PCB production costs, because fewer lanes are needed for the same amount of bandwidth. Rambus owns the rights to the technology. XDR is used by Sony in the PlayStation 3 console.

An XDR RAM chip's high-speed signals are a differential clock input (clock from master, CFM/CFMN), a 12-bit single-ended request/command bus (RQ11..0), and a bidirectional differential data bus up to 16 bits wide (DQ15..0/DQN15..0). The request bus may be connected to several memory chips in parallel, but the data bus is point-to-point; only one RAM chip may be connected to it. To support different amounts of memory with a fixed-width memory controller, the chips have a programmable interface width: a 32-bit-wide DRAM controller may support two 16-bit chips, four chips each supplying 8 bits of data, or up to 16 chips configured with 2-bit interfaces.
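The width/chip-count trade-off can be enumerated. A sketch assuming a 32-bit controller and the per-chip widths named above (16 down to 2 bits):

```python
# Ways to populate a fixed-width XDR controller with programmable-width
# chips: narrower chips mean more of them for the same total bus width.
def configurations(controller_bits=32, chip_widths=(16, 8, 4, 2)):
    """Yield (number_of_chips, bits_per_chip) pairs filling the controller."""
    for width in chip_widths:
        yield controller_bits // width, width

for n, w in configurations():
    print(f"{n} chips x {w} bits = 32-bit data bus")
```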

In addition, each chip has a low-speed serial bus used to determine its capabilities and configure its interface. This consists of three shared inputs: a reset line (RST), a serial command input (CMD) and a serial clock (SCK), and serial data in/out lines (SDI and SDO) that are daisy-chained together and eventually connect to a single pin on the memory controller.

All single-ended lines are active-low; an asserted signal or logical 1 is represented by a low voltage.

The request bus operates at double data rate relative to the clock input. Two consecutive 12-bit transfers (beginning with the falling edge of CFM) make a 24-bit command packet.

The data bus operates at 8x the speed of the clock; a 400 MHz clock generates 3200 MT/s. All data reads and writes operate in 16-transfer bursts lasting 2 clock cycles.

There are a large number of timing constraints giving minimum times that must elapse between various commands (see Dynamic random access memory: Memory timing); the DRAM controller sending them must ensure they are all met.

Some commands contain delay fields. These delay the effect of the command by the given number of clock cycles. This permits multiple commands (to different banks) to take effect on the same clock cycle.

These operate analogously to a standard SDRAM's read or write commands, specifying a column address. Data is provided to the chip a few cycles after a write command (typically 3), and is output by the chip several cycles after a read command (typically 6). Just as with other forms of SDRAM, the DRAM controller is responsible for ensuring that the data bus is not scheduled for use in both directions at the same time. Data is always transferred in 16-transfer bursts, lasting 2 clock cycles. Thus, for a ×16 device, 256 bits (32 bytes) are transferred per burst.
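The burst-size arithmetic is simple enough to sketch, given the fixed 16-transfer bursts described above:

```python
# Bytes delivered per XDR burst: 16 transfers times the device width.
def burst_bytes(device_width_bits, transfers=16):
    """Bytes transferred in one fixed-length burst."""
    return device_width_bits * transfers // 8

print(burst_bytes(16))  # x16 device: 256 bits = 32 bytes per burst
print(burst_bytes(8))   # x8 device: 128 bits = 16 bytes per burst
```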

If the chip is using a data bus less than 16 bits wide, one or more of the sub-column address bits are used to select the portion of the column to be presented on the data bus. If the data bus is 8 bits wide, SC3 is used to identify which half of the read data to access; if the data bus is 4 bits wide, SC3 and SC2 are used, etc.
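The sub-column selection rule can be sketched as follows; the function name and structure are illustrative, not taken from the XDR specification:

```python
# How many sub-column (SC) address bits select the data-bus slice:
# each halving of the bus width below the full 16 bits consumes one
# more high SC bit (SC3, then SC2, ...), per the description above.
def sc_bits_used(bus_width_bits, full_width_bits=16):
    """Number of sub-column address bits needed to pick the bus slice."""
    n = 0
    while full_width_bits > bus_width_bits:
        full_width_bits //= 2
        n += 1
    return n

print(sc_bits_used(16))  # 0 bits: full column appears on the bus
print(sc_bits_used(8))   # 1 bit  (SC3)
print(sc_bits_used(4))   # 2 bits (SC3, SC2)
```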

Unlike conventional DRAM, there is no provision for choosing the order in which the data is supplied within a burst. Thus, it is not possible to perform critical-word-first reads.

Each byte is the 8 consecutive bits transferred across one data line during a particular clock cycle. M0 is matched to the first data bit transferred during a clock cycle, and M7 is matched to the last bit.

This convention also interferes with performing critical-word-first reads; any word must include bits from at least the first 8 bits transferred.

This command is similar to a combination of a conventional SDRAM's precharge and refresh commands. The POPx and BPx bits specify a precharge operation, while the ROPx, DELRx, and BRx bits specify a refresh operation. Each may be separately enabled. If enabled, each may have a different command delay and must be addressed to a different bank.

Precharge commands may only be sent to one bank at a time; unlike a conventional SDRAM, there is no "precharge all banks" command.

Refresh commands are also different from a conventional SDRAM. There is no "refresh all banks" command, and the refresh operation is divided into separate activate and precharge operations so the timing is determined by the memory controller. The refresh counter is also programmable by the controller.

This command performs a number of miscellaneous functions, as determined by the XOPx field. Although there are 16 possibilities, only 4 are actually used. Three subcommands start and stop output driver calibration, which must be performed periodically (every 100 ms).

The fourth subcommand places the chip in power-down mode. In this mode, it performs internal refresh and ignores the high-speed data lines. It must be woken up using the low-speed serial bus.

XDR DRAMs are probed and configured using a low-speed serial bus. The RST, SCK, and CMD signals are driven by the controller to every chip in parallel. The SDI and SDO lines are daisy-chained together, with the last SDO output connected to the controller, and the first SDI input tied high (logic 0).

On reset, each chip drives its SDO pin low (1). When reset is released, a series of SCK pulses are sent to the chips. Each chip drives its SDO output high (0) one cycle after seeing its SDI input high (0). Further, it counts the number of cycles that elapse between releasing reset and seeing its SDI input high, and copies that count to an internal chip ID register. Commands sent by the controller over the CMD line include an address which must match the chip ID field.
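The ID-assignment sequence described above can be modeled as a toy simulation. This is an illustrative sketch of the counting behavior only, not of the electrical protocol (the active-low signaling and SCK pulse train are abstracted away):

```python
# Toy model of XDR's daisy-chained chip-ID assignment: the first chip's
# SDI is tied asserted, and each chip asserts its SDO (the next chip's
# SDI) one SCK cycle after seeing its own SDI asserted.  Each chip
# records the number of cycles elapsed since reset as its chip ID.
def assign_chip_ids(n_chips):
    ids = []
    sdi_asserted_at = 0              # first chip sees SDI asserted at cycle 0
    for _ in range(n_chips):
        ids.append(sdi_asserted_at)  # chip latches the elapsed-cycle count
        sdi_asserted_at += 1         # its SDO asserts one cycle later
    return ids

print(assign_chip_ids(4))  # -> [0, 1, 2, 3]
```

Commands sent over the CMD line then carry an address that must match one of these chip ID values.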

Each command either reads or writes a single 8-bit register, using an 8-bit address. This allows up to 256 registers, but only the range 1–31 is currently assigned.


Source: Wikipedia