Posted by sonny 03/02/2009 @ 04:43

Tags : memory, components, hardware, technology


Magnetic core memory

A 16×16 cm area core memory plane of 128×128 bits, or 2048 bytes (2 KiB)

Magnetic core memory, or ferrite-core memory, is an early form of random access computer memory. It uses small magnetic ceramic rings, the cores, through which wires are threaded to store information via the polarity of the magnetic field they contain. Such memory is often just called core memory, or, informally, core.

Although computer memory long ago moved to silicon chips, memory is still occasionally called "core". This is most obvious in the naming of the core dump, which refers to the contents of memory recorded at the time of a program error.

The earliest work on core memory was carried out by the Shanghai-born American physicists An Wang and Way-Dong Woo, who created the pulse transfer controlling device in 1949. The name referred to the way that the magnetic field of the cores could be used to control the switching of current in electro-mechanical systems. Wang and Woo were working at Harvard University's Computation Laboratory at the time, but unlike MIT, Harvard was not interested in promoting inventions created in its labs. Wang instead patented the system on his own, Woo having by then taken ill.

Jay Forrester's group, working on the Whirlwind project at MIT, became aware of this work. That machine required a fast memory system for real-time flight simulator use. At first, Williams tubes (more accurately, Williams-Kilburn tubes), a storage system based on cathode ray tubes, were used, but these devices were always temperamental and unreliable.

Two key inventions led to the development of magnetic core memory in 1951, which enabled the development of computers as we know them. The first, An Wang's, was the write-after-read cycle, which solved the puzzle of how to use a storage medium in which the act of reading was also an act of erasure. The second, Jay Forrester's, was the coincident-current system, which enabled a small number of wires to control a large number of cores (see Description section below for details).

Forrester's coincident-current system required one of the wires to be run at 45 degrees to the cores, which proved impossible to wire by machine, so that core arrays had to be assembled by workers with fine motor control under microscopes. Initially, garment workers were used.

During the early 1950s, Seeburg developed the use of this coincident-current ferrite core memory storage in the 'Tormat' memory of its new range of jukeboxes. Development work was completed in 1953, and the first model to use it, the V200, was released in 1955.

By the late 1950s, industrial plants had been set up in the Far East to build core. Inside, hundreds of workers strung cores for low pay. This lowered the cost of core to the point where it had become the near-universal main-memory technology by the early 1960s, replacing both low-cost, low-performance drum memory and the high-cost, high-performance systems that used vacuum tubes, and later transistors, as memory. Certain manufacturers also employed Scandinavian seamstresses who had been laid off due to mechanization of the textile industry.

The cost of core memory declined sharply over the lifetime of the technology: costs began at roughly US$1.00 per bit and eventually approached roughly US$0.01 per bit. Core was in turn replaced by integrated silicon RAM chips in the 1970s.

Dr. Wang's patent was not granted until 1955, and by that time core was already in use. This started a long series of lawsuits, which eventually ended when IBM paid Wang several million dollars to buy the patent outright. Wang used the funds to greatly increase the size of Wang Laboratories, which he co-founded with Dr. Ge-Yao Chu, a schoolmate from China.

Core memory was part of a family of related technologies, now largely forgotten, which exploited the magnetic properties of materials to perform switching and amplification. By the 1950s vacuum-tube electronics was well-developed and very sophisticated, but tubes had a limited lifetime, used a lot of power, and their operating characteristics changed in value over their life. Magnetic devices had many of the virtues of the transistor and solid-state devices that would replace them, and saw considerable use in military applications. A notable example was the portable (truck-based) MOBIDIC computer developed by Sylvania for the United States Army Signal Corps in the late Fifties. Core memory was non-volatile: the contents of memory were not lost if the power supply was interrupted or the software crashed.

The most common form of core memory, X/Y line coincident-current, used for the main memory of a computer, consists of a large number of small ferrite (ferromagnetic ceramic) rings, the cores, held together in a grid structure (each grid called a plane), with wires woven through the holes in the cores' middles. In early systems there were four wires, X, Y, Sense and Inhibit, but later cores combined the latter two into a single Sense/Inhibit line. Each ring stores one bit (a 0 or 1). One bit in each plane could be accessed in one cycle, so each machine word in an array of words was spread over a stack of planes. Each plane would manipulate one bit of a word in parallel, allowing the full word to be read or written in one cycle.

Core relies on the hysteresis of the magnetic material used to make the rings. Only a magnetic field over a certain intensity (generated by the wires through the core) can cause the core to change its magnetic polarity. To select a memory location, one of the X and one of the Y lines are driven with half the current required to cause this change. Only the combined magnetic field generated where the X and Y lines cross is sufficient to change the state; other cores will see only half the needed field, or none at all. By driving the current through the wires in a particular direction, the resulting induced field forces the selected core's magnetic field to point in one direction or the other (north or south).
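The half-select scheme just described can be sketched in a few lines of Python. This is a purely illustrative model; the current values and function name are hypothetical, not taken from any real controller:

```python
# Illustrative sketch of coincident-current selection: a core flips only
# when the combined drive current exceeds its switching threshold.
# The numeric values are hypothetical, chosen only to show the ratio.

FULL_SELECT_mA = 800   # current needed to flip a core (hypothetical)
HALF_SELECT_mA = 400   # each of the X and Y drive lines carries half

def drive_at(core_x, core_y, driven_x, driven_y):
    """Total drive current seen by the core at (core_x, core_y) when
    X line driven_x and Y line driven_y are energized."""
    current = 0
    if core_x == driven_x:
        current += HALF_SELECT_mA
    if core_y == driven_y:
        current += HALF_SELECT_mA
    return current

# Drive X line 2 and Y line 5: only the core at their crossing flips.
assert drive_at(2, 5, 2, 5) >= FULL_SELECT_mA   # selected core: flips
assert drive_at(2, 0, 2, 5) < FULL_SELECT_mA    # half-selected: unchanged
assert drive_at(7, 7, 2, 5) == 0                # unselected: no field
```

Every other core on the driven X or Y line sees only half the switching field, which is why a single pair of drive lines can address one core out of thousands.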

Reading from core memory is somewhat complex. Basically the read operation consists of doing a "flip to 0" operation to the bit in question, that is, driving the selected X and Y lines in the direction that causes the core to flip to whatever polarity the machine considers to be zero. If the core was already in the 0 state, nothing will happen. However if the core was in the 1 state it will flip to 0. If this flip occurs, a brief current pulse is induced into the Sense line, saying, in effect, that the memory location used to hold a 1. If the pulse is not seen, that means no flip occurred, so the core must have already been in the 0 state. Note that every read forces the core in question into the 0 state, so reading is destructive, which is one of the attributes of core memory.

Writing is similar in concept, but always consists of a "flip to 1" operation, relying on the memory already having been set to the 0 state in a previous read. For the write operation, the current in the X and Y lines goes in the opposite direction as it did for the read operation. If the core in question is to hold a 1, then the operation proceeds normally and the core flips to 1. However if the core is to instead hold a zero, the same amount of current as is used on the X and Y lines is also sent into the Inhibit line, which drops the combined field from the X, Y and Inhibit lines to half of the field needed to flip the core magnetization state. This leaves the core in the 0 state.

Note that the Sense and Inhibit wires are used one after the other, never at the same time. For this reason later core systems combined the two into a single wire, and used circuitry in the memory controller to switch the duty of the wire from Sense to Inhibit.
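The destructive read and the inhibit-controlled write can be summarized in a toy simulation. The class and method names here are invented for this sketch, which models only the logical behaviour, not the electronics:

```python
# Toy model of one core plane's destructive read and inhibit-controlled
# write, following the description above. Illustrative only.

class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def read(self, x, y):
        """Drive the selected core toward 0. A sense pulse (True) means
        the core flipped, i.e. it previously held a 1. Destructive:
        every read leaves the core at 0."""
        sense_pulse = (self.bits[x][y] == 1)
        self.bits[x][y] = 0
        return sense_pulse

    def write(self, x, y, value):
        """Assumes a prior read left the core at 0. Drive toward 1; the
        Inhibit line cancels the flip when a 0 is to be stored."""
        inhibit = (value == 0)
        if not inhibit:
            self.bits[x][y] = 1

plane = CorePlane(4, 4)
plane.write(1, 2, 1)
assert plane.read(1, 2) is True    # sense pulse: the core held a 1...
assert plane.read(1, 2) is False   # ...and the read destroyed it
```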

Because core always requires a write after read, many computers included instructions that took advantage of this. These instructions would be used when the same location was going to be read, changed and then written, such as an increment operation. In this case the computer would ask the memory controller to do the read, but then signal it to pause before doing the write that would normally follow. When the instruction was complete the controller would be unpaused, and the write would occur with the new value. For certain types of operations, this effectively doubled the performance.
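A back-of-the-envelope view of why read-pause-write helps, using a hypothetical 6 µs cycle time:

```python
# Incrementing one memory word. Normally a read cycle restores the old
# value, and a second full cycle stores the new one; with read-pause-write
# the store of the new value replaces the restore, saving a whole cycle.
# The cycle time is hypothetical.

CYCLE_US = 6.0   # one full read/write core cycle, in microseconds

naive_increment = 2 * CYCLE_US    # read+restore cycle, then write cycle
read_pause_write = 1 * CYCLE_US   # destructive read, pause, write new value

assert read_pause_write == naive_increment / 2   # "effectively doubled"
```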

Word line core memory was often used to provide register memory. This form of core memory typically wove three wires through each core on the plane: word read, word write, and bit sense/write. To read or clear words, the full current is applied to one or more word read lines; this clears the selected cores, and any that flip induce voltage pulses in their bit sense/write lines. For a read, normally only one word read line would be selected; but for a clear, multiple word read lines could be selected while the bit sense/write lines are ignored. To write words, half current is applied to one or more word write lines, and half current is applied to each bit sense/write line for a bit to be set. For a write, multiple word write lines could be selected. This offered a performance advantage over X/Y line coincident-current in that multiple words could be cleared, or written with the same value, in a single cycle. A typical machine's register set usually used only one small plane of this form of core memory.

Another form of core memory called core rope memory provided read-only storage. In this case, the cores were simply used as transformers; no information was actually stored magnetically within the individual cores. An example was the Apollo Guidance Computer used for the moon landings.

The performance of early core memories can be characterized in today's terms as very roughly comparable to a clock rate of 1 MHz (equivalent to early 1980s home computers like the Apple II and Commodore 64). Early core memory systems had cycle times of about 6 µs, which had fallen to 1.2 µs by the early 1970s, and by the mid-1970s it was down to 600 ns (0.6 µs). Everything possible was done to increase access speed, including the simultaneous use of multiple grids of core, each storing one bit of a data word. For instance, a machine might use 32 grids of core with a single bit of the 32-bit word in each one, and the controller could access the entire 32-bit word in a single read/write cycle.
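The bit-per-plane organization can be illustrated with a small sketch. The helper names are invented; real controllers did this wiring in hardware, not software:

```python
# How a 32-bit word is spread across 32 planes: plane i stores bit i of
# every word, so one address cycle touches all planes in parallel.
# Illustrative only.

PLANES = 32

def word_to_plane_bits(word):
    """Bit i of the word goes to plane i, at the same (x, y) address."""
    return [(word >> i) & 1 for i in range(PLANES)]

def plane_bits_to_word(bits):
    """Reassemble the word from the bit read out of each plane."""
    return sum(bit << i for i, bit in enumerate(bits))

w = 0xDEADBEEF
assert len(word_to_plane_bits(w)) == PLANES
assert plane_bits_to_word(word_to_plane_bits(w)) == w   # round-trips
```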

Core memory is non-volatile storage – it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. These were important advantages for some applications, like first-generation industrial programmable controllers, military installations and vehicles like fighter aircraft, as well as spacecraft, and led to core being used for a number of years after semiconductor MOS memory became available (see also MOSFET). For example, the Space Shuttle flight computers initially used core memory, which preserved the contents of memory even through the Challenger's explosion and subsequent plunge into the sea in 1986.

A characteristic of core is that it is current-based, not voltage-based. The "half select current" was typically about 400 mA for later, smaller, faster cores. Earlier, larger cores required more current.

Another characteristic of core is that the hysteresis loop was temperature sensitive: the proper half-select current at one temperature is not the proper half-select current at another. Memory controllers would therefore include temperature sensors (typically a thermistor) to adjust the current levels for temperature changes. An example of this is the core memory used by Digital Equipment Corporation for their PDP-1 computer; this strategy continued through all of the follow-on core memory systems built by DEC for their PDP line of air-cooled computers. Another method of handling the temperature sensitivity was to enclose the magnetic core "stack" in a temperature-controlled oven. Examples of this are the heated-air core memory of the IBM 1620 (which could take up to 30 minutes to reach operating temperature, about 106 °F or 41 °C) and the heated oil-bath core memory of the IBM 709, IBM 7090, and IBM 7030.

In 1980, the price of a 16 kiloword (16 KW, equivalent to 32 KB) core memory board that fitted into a DEC Q-bus computer was around US$3,000. At that time, the core array and supporting electronics fit on a single printed circuit board about 25 × 20 cm in size; the core array was mounted a few millimetres above the PCB and was protected with a metal or plastic plate.
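As a rough sanity check, that 1980 price works out to about a cent per bit, consistent with the endpoint of the cost decline noted earlier:

```python
# Back-of-the-envelope check of the 1980 figure quoted above:
# a 16K-word by 16-bit board (32 KB) at about US$3,000.

words = 16 * 1024
bits = words * 16                    # 16-bit words
price_usd = 3000.0
price_per_bit = price_usd / bits     # just over one cent per bit

assert bits == 262144
assert 0.01 < price_per_bit < 0.012
```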

Diagnosing hardware problems in core memory required time-consuming diagnostic programs to be run. While a quick test checked whether every bit could hold a one and a zero, these diagnostics tested the core memory with worst-case patterns and had to run for several hours. As most computers had just a single core memory board, the diagnostics also moved themselves around in memory, making it possible to test every bit. On many occasions, errors could be resolved by gently tapping the printed circuit board with the core array on a table. This slightly changed the positions of the cores relative to the wires running through them and could fix the problem. The procedure was seldom needed, as core memory proved to be very reliable compared to other computer components of the day.


Working memory

Working memory (also referred to as short-term memory, depending on the specific theory) is a theoretical construct within cognitive psychology that refers to the structures and processes used for temporarily storing and manipulating information. There are numerous theories as to both the theoretical structure of working memory (see the "organizational map" that follows) as well as to the specific parts of the brain responsible for working memory. However, most researchers agree that the frontal cortex, parietal cortex, anterior cingulate, and parts of the basal ganglia are crucial for functioning. Much of the understanding of the neural basis of working memory has come from lesion experiments in animals and imaging experiments in humans.

The term was first used in the 1960s in the context of theories that likened the mind to a computer. Before then, what we now call working memory was referred to as short-term memory, primary memory, immediate memory, operant memory, or provisional memory. Short-term memory is the ability to remember information over a brief period of time (in the order of seconds). Most theorists today use the concept of working memory to replace or include the older concept of short-term memory, thereby marking a stronger emphasis on the notion of manipulation of information instead of passive maintenance.

The earliest mention of experiments on the neural basis of working memory can be traced back more than 100 years, to when Hitzig and Ferrier described ablation experiments on the prefrontal cortex (PFC). They concluded that the frontal cortex was important for cognitive processes rather than sensory ones. In 1935 and 1936, Jacobsen and colleagues were the first to show that lesions of the PFC impaired performance on delay-dependent tasks; in other words, the lesioned animals suffered from short-term memory loss.

Numerous models have been proposed of how working memory functions, both anatomically and cognitively. Of these, three have gained wide acceptance.

Baddeley and Hitch (1974) introduced and made popular the multicomponent model of working memory. This theory proposes that two "slave systems" are responsible for short-term maintenance of information, and a "central executive" is responsible for the supervision of information integration and for coordinating the slave systems. One slave system, the phonological loop, stores phonological information (i.e., the sound of language) and prevents its decay by continuously articulating its contents, thereby refreshing the information in a rehearsal loop. It can, for example, maintain a seven-digit telephone number for as long as one repeats the number to oneself again and again. The other slave system, the visuo-spatial sketch pad, stores visual and spatial information. It can be used, for example, for constructing and manipulating visual images, and for the representation of mental maps. The sketch pad can be further broken down into a visual subsystem (dealing with, for instance, shape, colour, and texture), and a spatial subsystem (dealing with location). The central executive (see executive system) is, among other things, responsible for directing attention to relevant information, suppressing irrelevant information and inappropriate actions, and for coordinating cognitive processes when more than one task must be done at the same time. Baddeley (2000) extended the model by adding a fourth component, the episodic buffer, which holds representations that integrate phonological, visual, and spatial information, and possibly information not covered by the slave systems (e.g., semantic information, musical information). The component is episodic because it is assumed to bind information into a unitary episodic representation. The episodic buffer resembles Tulving's concept of episodic memory, but it differs in that the episodic buffer is a temporary store.

Cowan regards working memory not as a separate system, but as a part of long-term memory. Representations in working memory are a subset of the representations in long-term memory. Working memory is organized in two embedded levels. The first level consists of long-term memory representations that are activated; there can be many of these, as there is no limit on the activation of representations in long-term memory. The second level is called the focus of attention. The focus is regarded as capacity limited and holds up to four of the activated representations. Oberauer has extended the Cowan model by adding a third component, a narrower focus of attention that holds only one chunk at a time. The one-element focus is embedded in the four-element focus and serves to select a single chunk for processing. For example, you can hold four digits in mind at the same time in Cowan's "focus of attention". Now imagine that you wish to perform some process on each of these digits, for example, adding the number two to each digit. Separate processing is required for each digit, as most individuals cannot perform several mathematical processes in parallel. Oberauer's attentional component selects one of the digits for processing, and then shifts the attentional focus to the next digit, continuing until all of the digits have been processed.
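The narrow one-chunk focus operating inside the broad four-chunk focus can be caricatured in a few lines of Python. This is an illustrative analogy only, not a cognitive model:

```python
# Sketch of Oberauer's one-chunk focus within Cowan's four-chunk focus:
# the broad focus holds up to four digits, while a narrow focus selects
# one at a time for processing (here, adding 2 to each). Illustrative.

broad_focus = [3, 7, 1, 9]           # up to four activated chunks
assert len(broad_focus) <= 4

result = []
for digit in broad_focus:            # narrow focus visits one chunk at a time
    in_narrow_focus = digit          # only this chunk is processed now
    result.append(in_narrow_focus + 2)

assert result == [5, 9, 3, 11]       # each digit processed in turn
```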

Whereas most adults can repeat about seven digits in correct order, some individuals have shown impressive enlargements of their digit span, up to 80 digits. This feat is made possible by extensive training in an encoding strategy by which the digits in a list are grouped (usually in groups of three to five) and these groups are encoded as a single unit (a chunk). To do so, one must be able to recognize the groups as some known string of digits. One person studied by K. Anders Ericsson and his colleagues, for example, used his extensive knowledge of racing times from the history of sports. Several such chunks can then be combined into a higher-order chunk, forming a hierarchy of chunks. In this way, only a small number of chunks at the highest level of the hierarchy must be retained in working memory. At retrieval, the chunks are unpacked again; that is, the chunks in working memory act as retrieval cues that point to the digits they contain. It is important to note that practicing memory skills such as these does not expand working memory capacity proper. This can be shown by using different materials: the person who could recall 80 digits was not exceptional when it came to recalling words. Ericsson and Kintsch (1995) have argued that we use skilled memory in most everyday tasks. Tasks such as reading, for instance, require one to maintain in memory much more than seven chunks: with a capacity of only seven chunks our working memory would be full after a few sentences, and we would never be able to understand the complex relations between thoughts expressed in a novel or a scientific text. We accomplish this by storing most of what we read in long-term memory, linking it together through retrieval structures. We need to hold only a few concepts in working memory, which serve as cues to retrieve everything associated with them by the retrieval structures. Anders Ericsson and Walter Kintsch refer to this set of processes as "long-term working memory".
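The chunking strategy can be sketched as follows. The lookup table stands in for the memorist's knowledge of racing times; all names and entries here are hypothetical:

```python
# Digit-span chunking: known groups of digits are replaced by single
# chunk labels, so only a few chunks occupy working memory; at retrieval
# the chunks act as cues that unpack back into the digits they contain.

known_strings = {"359": "mile time 3:59", "427": "relay split 4:27"}

def encode(digits, group=3):
    """Split the digit string into groups and label the known ones."""
    groups = [digits[i:i + group] for i in range(0, len(digits), group)]
    return [known_strings.get(g, g) for g in groups]

def decode(chunks):
    """Unpack chunk labels back into the digit groups they point to."""
    inverse = {v: k for k, v in known_strings.items()}
    return "".join(inverse.get(c, c) for c in chunks)

span = "359427"
chunks = encode(span)
assert len(chunks) == 2        # two chunks instead of six digits
assert decode(chunks) == span  # the chunks serve as retrieval cues
```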
Retrieval structures vary according to the domain of expertise, yet as suggested by Gobet they can be categorized into three typologies: generic retrieval structures, domain-knowledge retrieval structures, and episodic text structures. The first corresponds to Ericsson and Kintsch's 'classic' retrieval structure and the second to the elaborated memory structure. The first kind of structure is developed deliberately and is arbitrary (e.g., the method of loci), the second is similar to patterns and schemas, and the last takes place exclusively during text comprehension. Concerning this last typology, Kintsch, Patel and Ericsson consider that any proficient reader is able to form an episodic text structure during text comprehension, provided the text is well written and the content is familiar. Guida and colleagues, drawing on this last typology, have proposed the 'personalisation method' as a way to operationalise long-term working memory.

Working memory is generally considered to have limited capacity. The earliest quantification of the capacity limit associated with short-term memory was the "magical number seven" introduced by Miller (1956). He noticed that the memory span of young adults was around seven elements, called chunks, regardless of whether the elements were digits, letters, words, or other units. Later research revealed that span does depend on the category of chunks used (e.g., span is around seven for digits, around six for letters, and around five for words), and even on features of the chunks within a category. For instance, span is lower for long words than for short words. In general, memory span for verbal contents (digits, letters, words, etc.) strongly depends on the time it takes to speak the contents aloud, and on the lexical status of the contents (i.e., whether the contents are words known to the person or not). Several other factors also affect a person's measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan (2001) has proposed that working memory has a capacity of about four chunks in young adults (and fewer in children and older adults).

Working memory capacity can be tested by a variety of tasks. A commonly used measure is a dual-task paradigm combining a memory span measure with a concurrent processing task, sometimes referred to as "complex span". Daneman and Carpenter invented the first version of this kind of task, the "reading span", in 1980. Subjects read a number of sentences (usually between 2 and 6) and try to remember the last word of each sentence. At the end of the list of sentences, they repeat back the words in their correct order. Other tasks that don't have this dual-task nature have also been shown to be good measures of working memory capacity. The question of what features a task must have to qualify as a good measure of working memory capacity is a topic of ongoing research.

Measures of working-memory capacity are strongly related to performance in other complex cognitive tasks, such as reading comprehension and problem solving, and to measures of intelligence quotient. Some researchers have argued that working memory capacity reflects the efficiency of executive functions, most notably the ability to maintain a few task-relevant representations in the face of distracting irrelevant information. The tasks seem to reflect individual differences in the ability to focus and maintain attention, particularly when other events are serving to capture attention. These effects seem to be a function of frontal brain areas.

Others have argued that the capacity of working memory is better characterized as the ability to mentally form relations between elements, or to grasp relations in given information. This idea has been advanced, among others, by Graeme Halford, who illustrated it by our limited ability to understand statistical interactions between variables. These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, for example "If the cake is from France then it has more sugar if it is made with chocolate than if it is made with cream, but if the cake is from Italy then it has more sugar if it is made with cream than if it is made with chocolate". This statement describes a relation between three variables (country, ingredient, and amount of sugar), which is the maximum most of us can understand. The capacity limit apparent here is obviously not a memory limit - all relevant information can be seen continuously - but a limit on how many relations we can discern simultaneously.

Why is working memory capacity limited at all? If we knew the answer to this question, we would understand much better why our cognitive abilities are as limited as they are. There are several hypotheses about the nature of the capacity limit. One is that there is a limited pool of cognitive resources needed to keep representations active, and thereby available for processing, and to carry out processes. Another hypothesis is that memory traces in working memory decay within a few seconds unless refreshed through rehearsal, and because the speed of rehearsal is limited, we can maintain only a limited amount of information. Yet another idea is that representations held in working memory interfere with each other. There are several forms of interference discussed by theorists. One of the oldest ideas is that new items simply replace older ones in working memory. Another form of interference is retrieval competition. For example, when the task is to remember a list of seven words in their order, we need to start recall with the first word. While trying to retrieve the first word, the second word, which is represented in close proximity, is accidentally retrieved as well, and the two compete for being recalled. Errors in serial recall tasks are often confusions of neighboring items on a memory list (so-called transpositions), showing that retrieval competition plays a role in limiting our ability to recall lists in order, and probably also in other working memory tasks. A third form of interference assumed by some authors is feature overwriting. The idea is that each word, digit, or other item in working memory is represented as a bundle of features, and when two items share some features, one of them steals the features from the other. The more items are held in working memory, and the more their features overlap, the more each of them will be degraded by the loss of some features.

None of these hypotheses can explain the experimental data entirely. The resource hypothesis, for example, was meant to explain the trade-off between maintenance and processing: the more information that must be maintained in working memory, the slower and more error-prone concurrent processes become, and with a higher demand on concurrent processing, memory suffers. This trade-off has been investigated with tasks like the reading-span task described above. It has been found that the amount of trade-off depends on the similarity of the information to be remembered and the information to be processed. For example, remembering numbers while processing spatial information, or remembering spatial information while processing numbers, impair each other much less than when material of the same kind must be remembered and processed. Also, remembering words and processing digits, or remembering digits and processing words, is easier than remembering and processing materials of the same category. These findings are also difficult to explain under the decay hypothesis, because decay of memory representations should depend only on how long the processing task delays rehearsal or recall, not on the content of the processing task. A further problem for the decay hypothesis comes from experiments in which the recall of a list of letters was delayed, either by instructing participants to recall at a slower pace or by instructing them to say an irrelevant word once or three times between recall of each letter. Delaying recall had virtually no effect on recall accuracy. The interference hypothesis fares best at explaining why the similarity between memory contents and the contents of concurrent processing tasks affects how much they impair each other: more similar materials are more likely to be confused, leading to retrieval competition, and they have more overlapping features, leading to more feature overwriting.
One experiment directly manipulated the amount of overlap of phonological features between words to be remembered and other words to be processed. Those to-be-remembered words that had a high degree of overlap with the processed words were recalled worse, lending some support to the idea of interference through feature overwriting.

The theory most successful so far in explaining experimental data on the interaction of maintenance and processing in working memory is the "time-based resource-sharing model". This theory assumes that representations in working memory decay unless they are refreshed. Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task. When there are small time intervals in which the processing task does not require attention, this time can be used to refresh memory traces. The theory therefore predicts that the amount of forgetting depends on the temporal density of attentional demands of the processing task; this density is called "cognitive load". Cognitive load depends on two variables: the rate at which the processing task requires individual steps to be carried out, and the duration of each step. For example, if the processing task consists of adding digits, then having to add another digit every half second places a higher cognitive load on the system than having to add another digit every two seconds. Adding larger digits takes more time than adding smaller digits, so cognitive load is also higher when larger digits must be added. In a series of experiments, Barrouillet and colleagues have shown that memory for lists of letters depends on cognitive load, but not on the number of processing steps (a finding that is difficult to explain by an interference hypothesis) and not on the total time of processing (a finding difficult to explain by a simple decay hypothesis). One difficulty for the time-based resource-sharing model, however, is that the similarity between memory materials and processed materials also affects memory accuracy.
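The idea of cognitive load as a ratio can be made concrete with a small calculation. The sketch below is only a toy, treating cognitive load as the fraction of each inter-step interval occupied by an attention-demanding step; the function name and the 0.3 s step duration are invented for illustration, not taken from the model itself.

```python
# Toy illustration of "cognitive load" as the fraction of time occupied by
# attention-demanding processing steps. The function name and the 0.3 s
# step duration are invented for illustration.
def cognitive_load(step_duration_s, interval_s):
    """Fraction of each inter-step interval during which attention is
    captured by processing and unavailable for refreshing memory traces."""
    return step_duration_s / interval_s

# Adding a digit (taking about 0.3 s) every half second vs. every two seconds:
print(cognitive_load(0.3, 0.5))   # 0.6  -> high load, more forgetting
print(cognitive_load(0.3, 2.0))   # 0.15 -> low load, less forgetting
```

Under this reading, the model predicts more forgetting in the first case because attention is free to refresh memory traces for a smaller fraction of the time.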

One theory of attention-deficit hyperactivity disorder (ADHD) states that ADHD can lead to deficits in working memory. Recent studies suggest that working memory can be improved in ADHD patients by training with the Cogmed computerized program developed by Torkel Klingberg and his colleagues at the Karolinska Institutet in Sweden. A randomized controlled study found that a period of working memory training increases a range of cognitive abilities and increases IQ test scores. Consequently, this study supports previous findings suggesting that working memory underlies general intelligence. Another study by the same group has shown that, after training, measured brain activity related to working memory increased in the prefrontal cortex, an area that many researchers have associated with working memory functions. A further study has shown that training with a working memory task (the dual n-back task) improves performance on a task of fluid intelligence in healthy young adults. Improving or augmenting the brain's working memory ability may prove to be a reliable method for increasing a person's IQ scores.

The first insights into the neuronal basis of working memory came from animal research. Fuster recorded the electrical activity of neurons in the prefrontal cortex (PFC) of monkeys while they were doing a delayed matching task. In that task, the monkey sees the experimenter place a bit of food under one of two identical-looking cups. A shutter is then lowered for a variable delay period, screening the cups from the monkey’s view. After the delay, the shutter opens and the monkey is allowed to retrieve the food from under the cups. Successful retrieval on the first attempt – something the animal can achieve after some training on the task – requires holding the location of the food in memory over the delay period. Fuster found neurons in the PFC that fired mostly during the delay period, suggesting that they were involved in representing the food location while it was invisible. Later research has shown similar delay-active neurons in the posterior parietal cortex, the thalamus, the caudate, and the globus pallidus.

Localization of brain functions in humans has become much easier with the advent of brain imaging methods (PET and fMRI). This research has confirmed that areas in the PFC are involved in working memory functions. During the 1990s, much debate centered on the different functions of the ventrolateral (i.e., lower) and the dorsolateral (higher) areas of the PFC. One view was that the dorsolateral areas are responsible for spatial working memory and the ventrolateral areas for non-spatial working memory. Another view proposed a functional distinction, arguing that ventrolateral areas are mostly involved in pure maintenance of information, whereas dorsolateral areas are more involved in tasks requiring some processing of the memorized material. The debate is not entirely resolved, but most of the evidence supports the functional distinction.

Brain imaging has also revealed that working memory functions are far from limited to the PFC. A review of numerous studies shows areas of activation during working memory tasks scattered over a large part of the cortex. There is a tendency for spatial tasks to recruit more right-hemisphere areas, and for verbal and object working memory to recruit more left-hemisphere areas. The activation during verbal working memory tasks can be broken down into one component reflecting maintenance, in the left posterior parietal cortex, and a component reflecting subvocal rehearsal, in the left frontal cortex (Broca’s area, known to be involved in speech production).

There is an emerging consensus that most working memory tasks recruit a network of PFC and parietal areas. One study has shown that during a working memory task the connectivity between these areas increases. Other studies have demonstrated that these areas are necessary for working memory, and not just accidentally activated during working memory tasks, by temporarily blocking them through transcranial magnetic stimulation (TMS), thereby producing an impairment in task performance.

Most brain imaging studies of working memory have used recognition tasks, such as delayed recognition of one or several stimuli, or the n-back task, in which each new stimulus in a long series must be compared to the one presented n steps back in the series. The advantage of recognition tasks is that they require minimal movement (just pressing one of two keys), making fixation of the head in the scanner easier. Experimental research and research on individual differences in working memory, however, have largely used recall tasks (e.g., the reading span task, see above). It is not clear to what degree recognition and recall tasks reflect the same processes and the same capacity limitations.

A few brain imaging studies have been conducted with the reading span task or related tasks. Increased activation during these tasks was found in the PFC and, in several studies, also in the anterior cingulate cortex (ACC). People performing better on the task showed a larger increase in activation in these areas, and their activation was more strongly correlated over time, suggesting that neural activity in these two areas was better coordinated, possibly due to stronger connectivity.

Much has been learned over the last two decades about where in the brain working memory functions are carried out. Much less is known about how the brain accomplishes short-term maintenance and goal-directed manipulation of information. The persistent firing of certain neurons in the delay period of working memory tasks shows that the brain has a mechanism for keeping representations active without external input.

Keeping representations active, however, is not enough if the task demands maintaining more than one chunk of information. In addition, the components and features of each chunk must be bound together to prevent them from being mixed up. For example, if a red triangle and a green square must be remembered at the same time, one must make sure that “red” is bound to “triangle” and “green” is bound to “square”. One way of establishing such bindings is by having the neurons that represent features of the same chunk fire in synchrony, and those that represent features belonging to different chunks fire out of sync. In the example, neurons representing redness would fire in synchrony with neurons representing the triangular shape, but out of sync with those representing the square shape. So far, there is no direct evidence that working memory uses this binding mechanism, and other mechanisms have been proposed as well. It has been speculated that neurons involved in working memory fire in synchrony with oscillations in the theta band (4 to 8 Hz). Indeed, the power of theta frequency in the EEG increases with working memory load, and oscillations in the theta band measured over different parts of the skull become more coordinated when the person tries to remember the binding between two components of information.

There is now extensive evidence that working memory is linked to key learning outcomes in literacy and numeracy. In a screening study of over 3,000 primary-school children, 10% of those in mainstream classrooms were identified with working memory impairments. Inspection of their learning profiles indicated that two-thirds achieved standard scores below age-expected levels (<86) in reading and math. Without appropriate intervention, these children lag behind their peers. Recent research has also confirmed that working memory capacity, but not IQ, predicts learning outcomes two years later. This suggests that working memory impairments are associated with low learning outcomes and constitute a high risk factor for educational underachievement in children. In children with learning disabilities such as dyslexia, ADHD, and developmental coordination disorder, a similar pattern is evident. Common characteristics of working memory impairments in the classroom include failing to remember instructions and difficulty completing learning activities, thereby jeopardising future academic success.

Today there are hundreds of research laboratories around the world studying various aspects of working memory. Applications of working memory research include using working memory capacity to explain intelligence, success at emotion regulation, and other cognitive abilities; furthering the understanding of autism and ADHD; improving teaching methods; and creating artificial intelligence based on the human brain.


Flash memory

A flash memory cell.

Flash memory is a non-volatile computer memory that can be electrically erased and reprogrammed. It is a technology that is primarily used in memory cards and USB flash drives for general storage and transfer of data between computers and other digital products. It is a specific type of EEPROM (Electrically Erasable Programmable Read-Only Memory) that is erased and programmed in large blocks; in early flash the entire chip had to be erased at once. Flash memory costs far less than byte-programmable EEPROM and therefore has become the dominant technology wherever a significant amount of non-volatile, solid state storage is needed. Example applications include PDAs (personal digital assistants), laptop computers, digital audio players, digital cameras and mobile phones. It has also gained popularity in the game console market, where it is often used instead of EEPROMs or battery-powered SRAM for game save data.

Flash memory is non-volatile, which means that no power is needed to maintain the information stored in the chip. In addition, flash memory offers fast read access times (although not as fast as volatile DRAM memory used for main memory in PCs) and better kinetic shock resistance than hard disks. These characteristics explain the popularity of flash memory in portable devices. Another feature of flash memory is that when packaged in a "memory card," it is enormously durable, being able to withstand intense pressure, extremes of temperature, and even immersion in water.

Although technically a type of EEPROM, the term "EEPROM" is generally used to refer specifically to non-flash EEPROM which is erasable in small blocks, typically bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over old-style EEPROM when writing large amounts of data.

Flash memory (both NOR and NAND types) was invented by Dr. Fujio Masuoka while working for Toshiba circa 1980. According to Toshiba, the name "flash" was suggested by Dr. Masuoka's colleague, Mr. Shoji Ariizumi, because the erasure process of the memory contents reminded him of a flash of a camera. Dr. Masuoka presented the invention at the IEEE 1984 International Electron Devices Meeting (IEDM) held in San Francisco, California.

Intel saw the massive potential of the invention and introduced the first commercial NOR type flash chip in 1988. NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location. This makes it a suitable replacement for older ROM chips, which are used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance is 10,000 to 1,000,000 erase cycles. NOR-based flash was the basis of early flash-based removable media; CompactFlash was originally based on it, though later cards moved to less expensive NAND flash.

Toshiba announced NAND flash at the 1987 International Electron Devices Meeting. It has faster erase and write times, and requires a smaller chip area per cell, thus allowing greater storage densities and lower costs per bit than NOR flash; it also has up to ten times the endurance of NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits. This made NAND flash unsuitable as a drop-in replacement for program ROM since most microprocessors and microcontrollers required byte-level random access. In this regard NAND flash is similar to other secondary storage devices such as hard disks and optical media, and is thus very suitable for use in mass-storage devices such as memory cards. The first NAND-based removable media format was SmartMedia, and many others have followed, including MultiMediaCard, Secure Digital, Memory Stick and xD-Picture Card. A new generation of memory card formats, including RS-MMC, miniSD and microSD, and Intelligent Stick, feature extremely small form factors. For example, the microSD card has an area of just over 1.5 cm², with a thickness of less than 1 mm; microSD capacities range from 64 MB to 16 GB, as of October 2008.

Despite the need for high programming and erasing voltages, virtually all flash chips today require only a single supply voltage, and produce the high voltages via on-chip charge pumps.

One limitation of flash memory is that although it can be read or programmed a byte or a word at a time in a random access fashion, it must be erased a "block" at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations, but cannot offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the overwritten value's. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. In practice, few algorithms take advantage of this successive-write capability, and in general the entire block is erased and rewritten at once.
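The one-way behavior of flash bits described above can be sketched in a few lines. This is only a toy model, not vendor code; `program` stands in for a real chip's write operation.

```python
# Toy model of flash write semantics on a 4-bit "nibble" (illustrative
# only): programming can only clear bits (1 -> 0); only an erase sets
# them back to 1, and it does so for a whole block at once.
def program(stored, new):
    # A write can never turn a 0 back into a 1, so the stored result is
    # the bitwise AND of the old contents and the new value.
    return stored & new

nibble = 0b1111                      # freshly erased state
for value in (0b1110, 0b1010, 0b0010, 0b0000):
    nibble = program(nibble, value)  # each step only clears additional bits
print(bin(nibble))                   # all bits cleared without an erase
```

Note that attempting to write a value whose 1 bits are not already stored simply leaves those bits at 0: `program(0b0010, 0b0110)` still yields `0b0010`, which is why a full block erase is required to set bits back to 1.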

Although data structures in flash memory cannot be updated in completely general ways, the ability to clear additional bits allows members to be "removed" by marking them as invalid. This technique may need to be modified for multi-level cell devices, where one memory cell holds more than one bit.

Another limitation is that flash memory has a finite number of erase-write cycles. Most commercially available flash products are guaranteed to withstand around 100,000 write-erase cycles before wear begins to deteriorate the integrity of the storage. The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND parts), or to all blocks (as in NOR). This effect is partially offset in some chip firmware or file system drivers by counting the writes and dynamically remapping blocks in order to spread write operations between sectors; this technique is called wear levelling. Another approach is to perform write verification and remapping to spare sectors in case of write failure, a technique called bad block management (BBM). For portable consumer devices, these wearout management techniques typically extend the life of the flash memory beyond the life of the device itself, and some data loss may be acceptable in these applications. For high-reliability data storage, however, it is not advisable to use flash memory that would have to go through a large number of programming cycles. This limitation is irrelevant for 'read-only' applications such as thin clients and routers, which are programmed only once or at most a few times during their lifetime.
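The remapping idea behind wear levelling can be sketched as follows. This is a hedged, minimal model of dynamic wear levelling, not any real controller's firmware; the class and field names are invented for illustration.

```python
# Hedged sketch of dynamic wear levelling: every write of a logical block
# is redirected to the least-worn free physical block, so repeated writes
# to one logical block do not wear out a single physical block.
class WearLeveler:
    def __init__(self, n_physical):
        self.erase_counts = [0] * n_physical
        self.mapping = {}                 # logical block -> physical block

    def write(self, logical):
        # Physical blocks used by *other* logical blocks are off limits;
        # the block previously holding this data may be erased and reused.
        in_use = set(self.mapping.values()) - {self.mapping.get(logical)}
        free = [p for p in range(len(self.erase_counts)) if p not in in_use]
        target = min(free, key=lambda p: self.erase_counts[p])
        self.erase_counts[target] += 1    # erase-before-write wears the block
        self.mapping[logical] = target
        return target

wl = WearLeveler(4)
for _ in range(8):
    wl.write(0)                           # hammer a single logical block
print(wl.erase_counts)                    # wear is spread: [2, 2, 2, 2]
```

Without remapping, eight writes to one logical block would put eight erase cycles on one physical block; with it, each of the four physical blocks absorbs only two.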

The low-level interface to flash memory chips differs from those of other memory types such as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero) and random-access via externally accessible address buses.

While NOR memory provides an external address bus for read and program operations (and thus supports random access), unlocking and erasing NOR memory must proceed on a block-by-block basis. With NAND flash memory, read and programming operations must be performed a page at a time, while unlocking and erasing must happen in block-wise fashion.

Reading from NOR flash is similar to reading from random-access memory, provided the address and data bus are mapped correctly. Because of this, most microprocessors can use NOR flash memory as execute in place (XIP) memory, meaning that programs stored in NOR flash can be executed directly without the need to first copy the program into RAM. NOR flash may be programmed in a random-access manner similar to reading. Programming changes bits from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64, 128, or 256 KB.

Bad block management is a relatively new feature in NOR chips. In older NOR devices not supporting bad block management, the software or device driver controlling the memory chip must correct for blocks that wear out, or the device will cease to work reliably.

The specific commands used to lock, unlock, program, or erase NOR memories differ for each manufacturer. To avoid needing unique driver software for every device made, a special set of CFI (Common Flash Interface) commands allows the device to identify itself and its critical operating parameters.

Apart from being used as random-access ROM, NOR memories can also be used as storage devices by taking advantage of random-access programming. Some devices offer read-while-write functionality so that code continues to execute even while a program or erase operation is occurring in the background. For sequential data writes, NOR flash chips typically have slow write speeds compared with NAND flash.

NAND flash architecture was introduced by Toshiba in 1989. These memories are accessed much like block devices such as hard disks or memory cards. Each block consists of a number of pages. The pages are typically 512, 2,048, or 4,096 bytes in size. Associated with each page are a few bytes (typically 12–16) used to store an error detection and correction checksum.

While reading and programming are performed on a page basis, erasure can only be performed on a block basis. Another limitation of NAND flash is that data within a block can only be written sequentially. Number of Operations (NOP) is the number of times the sectors can be programmed. So far this number for MLC flash is always one, whereas for SLC flash it is four.
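The sequential-write constraint can be made concrete with a toy model of a single NAND block. This is illustrative only; in real parts the rule is enforced by the controller or firmware, and the class and method names here are invented.

```python
# Toy model of a NAND block enforcing the sequential-page-write rule:
# pages within a block must be programmed in order, and only a block
# erase resets the block so its pages can be written again.
class NandBlock:
    def __init__(self, pages_per_block=64):
        self.pages = [None] * pages_per_block
        self.next_page = 0                # next page that may be programmed

    def program(self, page_no, data):
        if page_no != self.next_page:
            raise ValueError("pages within a block must be written sequentially")
        self.pages[page_no] = data
        self.next_page += 1

    def erase(self):
        self.pages = [None] * len(self.pages)
        self.next_page = 0                # erasing resets the write pointer
```

Programming page 0 and then page 1 succeeds, but jumping straight to page 3 raises an error; only after `erase()` can the block be filled again from page 0.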

NAND devices also require bad block management by the device driver software, or by a separate controller chip. SD cards, for example, include controller circuitry to perform bad block management and wear leveling. When a logical block is accessed by high-level software, it is mapped to a physical block by the device driver or controller. A number of blocks on the flash chip may be set aside for storing mapping tables to deal with bad blocks, or the system may simply check each block at power-up to create a bad block map in RAM. The overall memory capacity gradually shrinks as more blocks are marked as bad.
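The logical-to-physical mapping a controller keeps in RAM can be sketched as below. Names are invented for illustration; real controllers also retire blocks that fail at run time and hold spares in reserve.

```python
# Hedged sketch of bad block management: logical blocks are laid out over
# the physical blocks, skipping those marked bad at the factory, so usable
# capacity shrinks by one block per bad block.
class BadBlockMapper:
    def __init__(self, n_physical, bad_blocks):
        self.bad = set(bad_blocks)
        good = [p for p in range(n_physical) if p not in self.bad]
        self.map = dict(enumerate(good))   # logical block -> physical block

    def physical(self, logical):
        return self.map[logical]

m = BadBlockMapper(8, bad_blocks={2, 5})
print(m.physical(2))   # logical block 2 lives in physical block 3
print(len(m.map))      # 6 usable blocks: capacity shrank by the 2 bad blocks
```

High-level software only ever sees the contiguous logical block numbers; the holes left by bad physical blocks are invisible to it.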

NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation. This ECC may correct as few as one bit error in each 2048 bits, or up to 22 bits in each 2048 bits. If the ECC cannot correct the error during read, it may still detect the error. When doing erase or program operations, the device can detect blocks that fail to program or erase and mark them bad. The data is then written to a different, good block, and the bad block map is updated.
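A toy single-error-correcting Hamming code illustrates the principle behind such ECC. Real NAND controllers use Hamming or BCH codes over much larger sectors (e.g. 2048 bits); the short codeword and function names here are invented for illustration.

```python
from functools import reduce

# Toy Hamming code: parity bits sit at power-of-two positions, and the
# "syndrome" (XOR of the 1-based positions of all set bits) is zero for a
# valid codeword and names the flipped position after a single-bit error.
def syndrome(bits):
    return reduce(lambda acc, pos: acc ^ pos,
                  (i for i, b in enumerate(bits, 1) if b), 0)

def encode(data, n=7):
    # Place data bits at the positions that are not powers of two, then
    # set the power-of-two parity positions to cancel the syndrome.
    word = [0] * (n + 1)                   # index 0 unused
    data_positions = [i for i in range(1, n + 1) if i & (i - 1)]
    for pos, bit in zip(data_positions, data):
        word[pos] = bit
    syn = syndrome(word[1:])
    p = 1
    while p <= n:
        if syn & p:
            word[p] = 1
        p <<= 1
    return word[1:]

def correct(bits):
    # A nonzero syndrome is the 1-based position of a single flipped bit.
    syn = syndrome(bits)
    if syn:
        bits[syn - 1] ^= 1
    return bits

codeword = encode([1, 0, 1, 1])        # 4 data bits -> 7-bit codeword
corrupted = codeword[:]
corrupted[4] ^= 1                      # flip one bit, as a failing cell might
print(correct(corrupted) == codeword)  # True: the error was repaired
```

Two simultaneous bit errors in one codeword would mislead this simple code, which is why controllers size their codes (and count correctable bits per sector) against the expected error rate of the flash.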

Most NAND devices are shipped from the factory with some bad blocks which are typically identified and marked according to a specified bad block marking strategy. By allowing some bad blocks, the manufacturers achieve far higher yields than would be possible if all blocks had to be verified good. This significantly reduces NAND flash costs and only slightly decreases the storage capacity of the parts.

When executing software from NAND memories, virtual memory strategies are often used: memory contents must first be paged or copied into memory-mapped RAM and executed there (leading to the common combination of NAND + RAM). A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a nonvolatile data storage area.

NAND is best suited to systems requiring high capacity data storage. This type of flash architecture offers higher densities and larger capacities at lower cost with faster erase, sequential write, and sequential read speeds, sacrificing the random-access and execute in place advantage of the NOR architecture.

The ONFI (Open NAND Flash Interface) group is supported by major NAND flash manufacturers, including Hynix, Intel, Micron Technology, and Numonyx, as well as by major manufacturers of devices incorporating NAND flash chips.

A group of vendors, including Intel, Dell, and Microsoft formed a Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group. The goal of the group is to provide standard software and hardware programming interfaces for nonvolatile memory subsystems, including the "flash cache" device connected to the PCI Express bus.

It is important to understand that the two differences between NOR and NAND flash, the structure of the cell array and the external interface, are linked by the design choices made in the development of NAND flash. An important goal of NAND flash development was to reduce the chip area required to implement a given capacity of flash memory, and thereby to reduce cost per bit and increase maximum chip capacity, so that flash memory could compete with magnetic storage devices like hard disks.

NOR and NAND flash get their names from the structure of the interconnections between memory cells. In NOR flash, cells are connected in parallel to the bit lines, allowing cells to be read and programmed individually. The parallel connection of cells resembles the parallel connection of transistors in a CMOS NOR gate. In NAND flash, cells are connected in series, resembling a NAND gate, and preventing cells from being read and programmed individually: the cells connected in series must be read in series.

When NOR flash was developed, it was envisioned as a more economical and conveniently rewritable ROM than contemporary EPROM, EAROM, and EEPROM memories. Thus random-access reading circuitry was necessary. However, it was expected that NOR flash ROM would be read much more often than written, so the write circuitry included was fairly slow and could only erase in a block-wise fashion; random-access write circuitry would add to the complexity and cost unnecessarily.

Because of the series connection and removal of wordline contacts, a large grid of NAND flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells (assuming the same CMOS process resolution, e.g. 130 nm, 90 nm, 65 nm). NAND flash's designers realized that the area of a NAND chip, and thus the cost, could be further reduced by removing the external address and data bus circuitry. Instead, external devices could communicate with NAND flash via sequentially accessed command and data registers, which would internally retrieve and output the necessary data. This design choice made random access of NAND flash memory impossible, but the goal of NAND flash was to replace hard disks, not to replace ROMs.

The write endurance of SLC floating-gate NOR flash is typically equal to or greater than that of NAND flash, while MLC NOR and NAND flash have similar endurance capabilities.

Because of the particular characteristics of flash memory, it is best used either with a controller to perform wear levelling and error correction or with specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is that when the flash store is to be updated, the file system writes a new copy of the changed data to a fresh block, remaps the file pointers, and then erases the old block when it has time.
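The out-of-place update just described can be sketched as follows. This is a minimal toy, not any real flash file system's API; the class and method names are invented.

```python
# Minimal sketch of a flash file system's out-of-place update: write the
# changed data to a fresh erased block, repoint the file at it, and queue
# the old block for erasure later.
class FlashFS:
    def __init__(self, n_blocks):
        self.blocks = [None] * n_blocks   # None = erased/free block
        self.files = {}                   # file name -> block index
        self.to_erase = []                # stale blocks, erased when idle

    def write(self, name, data):
        fresh = self.blocks.index(None)   # 1) pick a fresh erased block
        self.blocks[fresh] = data         #    and write the new copy there
        old = self.files.get(name)
        self.files[name] = fresh          # 2) remap the file pointer
        if old is not None:
            self.to_erase.append(old)     # 3) erase the old block later

fs = FlashFS(4)
fs.write("a.txt", "v1")
fs.write("a.txt", "v2")                   # the update goes to a new block
print(fs.files["a.txt"], fs.to_erase)     # block 1 holds v2; block 0 is stale
```

Deferring the erase keeps the slow block-erase operation off the critical path of the write, and as a side effect spreads writes over the medium.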

In practice, flash file systems are only used for "Memory Technology Devices" ("MTD"), which are embedded flash memories that do not have a controller. Removable flash memory cards and USB flash drives have built-in controllers to perform wear levelling and error correction, so use of a specific flash file system does not add any benefit. These removable flash memory devices use the FAT file system to allow universal compatibility with computers, cameras, PDAs and other portable devices with memory card slots or ports.

Multiple chips are often arrayed to achieve higher capacities for use in consumer electronic devices such as multimedia players or GPS devices. The capacity of flash chips generally follows Moore's law because they are manufactured with many of the same integrated circuit techniques and equipment.

Consumer flash drives typically have sizes measured in powers of two (e.g. 512 MB, 8 GB). This includes SSDs marketed as hard drive replacements, even though traditional hard drives tend to use decimal units. Thus, a 64 GB SSD is actually 64 × 1024³ bytes. In reality, most users will have slightly less capacity than this available, due to the space taken by filesystem metadata.

In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 GB of data using Multi-level Cell (MLC) technology, capable of storing 2 bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world’s first 2 GB chip.

In March 2006, Samsung announced flash hard drives with a capacity of 4 GB, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8 GB chip produced using a 40 nanometer manufacturing process.

In January 2008 Sandisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards.

However, flash chips are still manufactured with low capacities such as 1 MB, e.g. for BIOS ROMs.

The maximum read speed is the figure most commonly advertised, but NAND flash memory cards are much faster at reading than writing. As a chip wears out, its erase/program operations slow down considerably, requiring more retries and bad block remapping. Transferring multiple small files, each smaller than the chip-specific block size, can lead to a much lower rate. Access latency also influences performance, but is less of an issue than with hard drive counterparts.

The speed is sometimes quoted in MB/s (megabytes per second), or as a multiple of the speed of a legacy single-speed CD-ROM, such as 60x, 100x or 150x. Here 1x is equivalent to 150 KB/s. For example, a 100x memory card transfers 150 KB/s × 100 = 15,000 KB/s ≈ 14.65 MB/s.
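The conversion can be captured in a one-line helper (the function name is invented for illustration):

```python
# The "x" rating is defined relative to a single-speed CD-ROM (150 KB/s).
def rating_to_mb_per_s(x):
    kb_per_s = x * 150            # e.g. 100x -> 15,000 KB/s
    return kb_per_s / 1024        # 1 MB = 1024 KB, matching the text's figure

print(round(rating_to_mb_per_s(100), 2))   # 14.65
print(round(rating_to_mb_per_s(150), 2))   # 21.97
```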

Serial flash is a small, low-power flash memory that uses a serial interface, typically SPI, for sequential data access. When incorporated into an embedded system, serial flash requires fewer wires on the PCB than parallel flash memories, since it transmits and receives data one bit at a time. This may permit a reduction in board space, power consumption, and total system cost.

With the increasing speed of modern CPUs, parallel flash devices are often much slower than the memory bus of the computer they are connected to. By comparison, modern SRAM offers access times below 10 ns, while DDR2 SDRAM offers access times below 20 ns. Because of this, it is often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash into RAM before execution, so that the CPU may access it at full speed. Device firmware may be stored in a serial flash device, and then copied into SDRAM or SRAM when the device is powered up. Using an external serial flash device rather than on-chip flash removes the need for significant process compromise (a process that is good for high-speed logic is generally not good for flash, and vice versa). Once it is decided to read the firmware in as one big block, it is common to add compression to allow a smaller flash chip to be used. Typical applications for serial flash include storing firmware for hard drives, Ethernet controllers, DSL modems, wireless network devices, etc.

An obvious extension of flash memory would be as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so the idea of a solid-state drive, or SSD, is attractive when considering speed, noise, power consumption, and reliability.

There remain some aspects of flash-based SSDs that make the idea unattractive. Most important, the cost per gigabyte of flash memory remains significantly higher than that of platter-based hard drives. Although this ratio is decreasing rapidly for flash memory, it is not yet clear that flash memory will catch up to the capacities and affordability offered by platter-based storage. Still, research and development is sufficiently vigorous that it is not clear that it will not happen, either.

There is also some concern that the finite number of erase/write cycles of flash memory would render flash memory unable to support an operating system. This seems to be a decreasing issue as warranties on flash-based SSDs are approaching those of current hard drives.

As of May 24, 2006, South Korean consumer-electronics manufacturer Samsung Electronics had released the first flash-memory based PCs, the Q1-SSD and Q30-SSD, both of which have 32 GB SSDs. Dell Computer introduced the Latitude D430 laptop with 32 GB flash-memory storage in July 2007, at a price significantly above that of a hard-drive-equipped version.

At CES 2007 in Las Vegas, Taiwanese memory company A-DATA showcased SSDs based on flash technology in capacities of 32 GB, 64 GB, and 128 GB. SanDisk announced an OEM 32 GB 1.8" SSD at the same show. The XO-1, developed by the One Laptop Per Child (OLPC) association, uses flash memory rather than a hard drive. As of June 2007, a South Korean company called Mtron claimed the fastest SSD, with sequential read/write speeds of 100 MB/s and 80 MB/s respectively.

Rather than entirely replacing the hard drive, hybrid techniques such as the hybrid drive and ReadyBoost attempt to combine the advantages of both technologies, using flash as a high-speed cache for files on the disk that are often referenced but rarely modified, such as application and operating-system executable files. Also, Addonics offers a PCI adapter for four CF cards, creating a RAID-capable array of solid-state storage that is much cheaper than PCI cards with hardwired flash chips.

The ASUS Eee PC uses a flash-based SSD of 2 GB to 20 GB, depending on the model. The Apple Inc. MacBook Air offers the option to replace the standard hard drive with a 128 GB solid-state drive. The Lenovo ThinkPad X300 also features a built-in 64 GB solid-state drive.

Sharkoon has developed a device that uses six SDHC cards in RAID 0 as an SSD alternative; users can combine affordable high-speed 8 GB SDHC cards to obtain results similar to or better than those of traditional SSDs, at a lower cost.
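
The RAID 0 technique behind such a device is simple striping: consecutive chunks of data are distributed round-robin across the cards, so reads and writes can proceed on all cards in parallel. The sketch below models the layout only (the stripe size, card count, and sample data are illustrative, not Sharkoon's actual parameters).

```python
import itertools

STRIPE = 4   # stripe unit in bytes; real arrays use chunks of several KiB
N_CARDS = 6  # the Sharkoon device holds six SDHC cards

def stripe_write(data, n_cards=N_CARDS, stripe=STRIPE):
    """Distribute consecutive stripe-sized chunks round-robin across cards."""
    cards = [bytearray() for _ in range(n_cards)]
    for i in range(0, len(data), stripe):
        cards[(i // stripe) % n_cards] += data[i:i + stripe]
    return cards

def stripe_read(cards, stripe=STRIPE):
    """Reassemble the data by reading one stripe from each card in turn."""
    chunks = [[bytes(c[j:j + stripe]) for j in range(0, len(c), stripe)]
              for c in cards]
    out = bytearray()
    for row in itertools.zip_longest(*chunks, fillvalue=b""):
        out += b"".join(row)
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog!"
cards = stripe_write(data)
assert stripe_read(cards) == data
```

Because RAID 0 stores no redundancy, the failure of any one card loses the whole array; the appeal is purely throughput and cost.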

One source states that the flash memory industry generated about US$9.1 billion in production and sales in 2008. Apple Inc. is the third-largest purchaser of flash memory, consuming about 13% of production by itself. Other sources put the flash memory market at more than US$20 billion in 2006, accounting for more than eight percent of the overall semiconductor market and more than 34 percent of the total semiconductor memory market.

Due to its relatively simple structure and the high demand for greater capacity, NAND flash memory is the most aggressively scaled technology among electronic devices. Heavy competition among the top few manufacturers only intensifies this scaling. Current projections show the technology reaching approximately 20 nm around 2010. While the expected shrink timeline is a factor of two every three years per the original version of Moore's law, this has recently accelerated in the case of NAND flash to a factor of two every two years.

As the feature size of flash memory cells reaches its minimum limit (currently estimated at ~20 nm), further density increases will be driven by greater levels of MLC, possibly 3-D stacking of transistors, and process improvements. Even with these advances, it may be impossible to scale flash economically to ever-smaller dimensions. Many promising new technologies (such as FeRAM, MRAM, PMC, PCM, and others) are under investigation and development as possible, more scalable replacements for flash.


Memory address

In computer science, a memory address is an identifier for a memory location, at which a computer program or a hardware device can store a piece of data and later retrieve it. Generally, an address is a binary number drawn from a finite, monotonically ordered range, and it uniquely identifies a location within the memory.

In modern byte-addressable computers, each address identifies a single byte of storage; data too large to fit in a single byte may reside in multiple bytes occupying a sequence of consecutive addresses. Some microprocessors were designed to be word-addressable, so that the addressable unit of storage was larger than a byte; both the Texas Instruments TMS9900 and the National Semiconductor IMP-16, for example, used 16-bit words.
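
The way a multi-byte value spans consecutive byte addresses can be shown with a small simulated memory. The addresses, value, and little-endian byte order here are illustrative choices, not a property of any particular machine.

```python
import struct

memory = bytearray(16)          # a tiny byte-addressable "RAM"

# Store the 32-bit value 0xDEADBEEF at address 4; it occupies the four
# consecutive byte addresses 4, 5, 6, and 7 (little-endian order here).
struct.pack_into("<I", memory, 4, 0xDEADBEEF)

assert memory[4] == 0xEF        # least significant byte at the lowest address
assert memory[7] == 0xDE        # most significant byte at the highest address
assert struct.unpack_from("<I", memory, 4)[0] == 0xDEADBEEF
```

A big-endian machine would place the same bytes in the opposite order, which is why byte order matters whenever multi-byte data crosses machine boundaries.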

In a computer program, an absolute address (sometimes called an explicit address or specific address) is a memory address that uniquely identifies a location in memory. This differs from a relative address, which is not unique and specifies a location only in relation to somewhere else (the base address).
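
The relationship between the two is just addition: absolute = base + relative. The base values below are made up for illustration.

```python
BASE = 0x4000  # hypothetical address at which a module is loaded

def absolute(relative_addr, base=BASE):
    """An absolute address is the base address plus the relative offset."""
    return base + relative_addr

# The same relative address names different absolute locations
# depending on where the module happens to be loaded.
assert absolute(0x10) == 0x4010
assert absolute(0x10, base=0x8000) == 0x8010
```

This is why relocatable code is expressed in relative addresses: the loader can pick any base without rewriting the offsets.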

Each memory location, in both ROM and RAM, holds a binary number of some sort. How it is interpreted (its type, meaning, and use) depends only on the context of the instructions that retrieve and manipulate it. Each coded item has a unique physical position, described by another number: the address of that single word, much as each house on a street has a unique number. A pointer is itself an address, stored in some other memory location.
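
A pointer, then, is just a number in one cell that names another cell. The toy word-addressable memory below makes the indirection explicit; the particular addresses and value are arbitrary.

```python
memory = [0] * 16        # each slot is one addressable word

memory[3] = 42           # the data itself lives at address 3
memory[9] = 3            # address 9 holds a pointer: the address of the data

pointer = memory[9]      # read the pointer...
value = memory[pointer]  # ...then dereference it to reach the data
assert value == 42
```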

So memory can hold data, instructions, or both; this uniformity is the essence of the von Neumann architecture. One can think of memory as just a collection of numbers, as data (text, binary numeric values), or as instructions themselves. This uniformity was introduced in the 1950s. It is usually credited to von Neumann, though some would be inclined to credit Alan Turing.

Some early programmers exploited this uniformity to save memory when it was expensive: the Manchester Mark 1 had spare space in its words (its processor ignored a small section in the middle of each word), and that space was often used as extra data storage. Self-replicating programs such as viruses also exploit the duality, treating themselves sometimes as data and sometimes as instructions. Self-modifying code is generally deprecated nowadays, as it makes testing and maintenance disproportionately difficult relative to the saving of a few bytes, and can also give incorrect results because of assumptions the compiler or processor makes about the machine's state. It is still used deliberately on occasion, with great care.

Instructions at a storage address are interpreted in context by the system's main processing unit. Data is first read into, or written from, an internal and isolated memory structure called a processor register, where the next instruction can manipulate it together with data held in other internal memory locations (or internal addresses).

Registers are storage locations built into the central processing unit itself. The ALU (arithmetic logic unit) responds to instructions and uses combinational logic to determine how to add, subtract, shift, or multiply (and so on) the contents of its data registers.

Word size is characteristic of a given computer architecture: it denotes the number of bits that a CPU can process at one time. Historically, words have been sized in multiples of four and eight bits (nibbles and bytes, respectively), so sizes of 4, 8, 12, 16, 24, 32, 48, 64, and more bits came into use with technological advances.

Very often, when referring to the word size of a modern computer, one is also describing the size of the address space on that computer. For instance, a computer said to be "32-bit" also usually allows 32-bit memory addresses; a byte-addressable 32-bit computer can address 2^32 = 4,294,967,296 bytes of memory, or 4 gibibytes (GiB). This seems logical and useful, as it allows one address to be efficiently stored in one word.

But this does not always hold. Computers often have memory addresses larger or smaller than their word size. For instance, almost all 8-bit processors, such as the 6502, supported 16-bit addresses; otherwise they would have been limited to a mere 256 bytes of memory. The 16-bit Intel 8088 had only an 8-bit external memory bus in early IBM PCs, and the 16-bit Intel 8086 supported 20-bit addressing, allowing it to access 1 MiB rather than 64 KiB of memory. Pentium-class processors have supported 36-bit physical addresses since the introduction of Physical Address Extension (PAE), while generally retaining a 32-bit word.
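
The address-space figures quoted above and below all follow from one rule: a byte-addressable machine with n-bit addresses can name 2^n bytes. A quick arithmetic check:

```python
def addressable_bytes(bits):
    """A byte-addressable machine with n-bit addresses can name 2**n bytes."""
    return 2 ** bits

assert addressable_bytes(8)  == 256            # bare 8-bit addressing
assert addressable_bytes(16) == 64 * 1024      # 64 KiB
assert addressable_bytes(20) == 1024 ** 2      # 1 MiB  (8086, 20-bit addresses)
assert addressable_bytes(32) == 4 * 1024 ** 3  # 4 GiB  ("32-bit" machines)
assert addressable_bytes(36) == 64 * 1024 ** 3 # 64 GiB (PAE)
assert addressable_bytes(64) == 16 * 1024 ** 6 # 16 EiB ("64-bit" machines)
```

Each extra address bit doubles the reachable memory, which is why even a few additional bits (as with PAE's 36 versus 32) make a large practical difference.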

The distinction between words and bytes has shifted over the years; the DEC PDP-10, for example, had 36-bit words. Knuth, in his seminal work The Art of Computer Programming, defined his MIX abstract machine in terms of bytes of deliberately unspecified size (each holding at least 64 distinct values), independent of the eight-bit byte.

A modern byte-addressable 64-bit computer, with proper OS support, is capable of addressing 2^64 bytes (or 16 exbibytes), which as of 2008 is considered practically unlimited, being far more than the total amount of RAM ever manufactured. That said, perhaps the most famous quotation attributed to Bill Gates (even if apocryphal) is that "640K ought to be enough for anybody."

Virtual memory is now universally used on desktop machines. It maps virtual addresses to physical memory by way of page tables, so the operating system can place and rearrange physical memory as it deems best without the running program ever noticing. Some literature suggests that automatic memory management such as garbage collection is on the whole efficient, but its pauses are non-deterministic, which makes it unsuitable at the moment you are firing a missile or landing on the moon.

The physical blocks of memory (typically 4 KiB pages) are mapped to virtual addresses by the virtual-memory subsystem in the kernel. The translation is now done in hardware; formerly it was done in software. The purpose of virtual memory is to abstract memory allocation, allowing physical space to be allocated however is best for the hardware (usually in fixed-size blocks) while still appearing contiguous from a program's or compiler's perspective. Virtual memory is supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. One may think of virtual memory as an alternate set of memory addresses, mapped to real addresses, that allows programs (and by extension, programmers) to read from memory as quickly as possible without requiring that memory to be specifically ordered. Programs use these contiguous virtual addresses, rather than the real, often fragmented, physical addresses, to store instructions and data. When the program executes, the virtual addresses are translated on the fly into real memory addresses. Logical address is a synonym for virtual address.
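
The translation step can be sketched as splitting an address into a page number and an offset, then remapping the page through a table. The page-table contents below are invented for illustration; real hardware uses multi-level tables and a TLB cache.

```python
PAGE_SIZE = 4096  # 4 KiB pages, as is typical

# A toy page table: virtual page number -> physical frame number.
# Note that the physical frames need not be contiguous or in order.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    """Split the address into page number and offset, then remap the page."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[vpn]   # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

# Two adjacent virtual addresses can land in widely separated frames:
assert translate(0x0FFF) == 7 * PAGE_SIZE + 0x0FFF  # last byte of page 0
assert translate(0x1000) == 3 * PAGE_SIZE           # first byte of page 1
```

The offset passes through unchanged; only the page number is remapped, which is what lets fragmented physical memory look contiguous to the program.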

Virtual memory also allows the address space (the set of addresses a program can use) to be enlarged, and thus allows computers to make use of secondary storage that looks, to programs, like main memory. For example, the virtual address space might contain twice as many addresses as main memory, with the extra addresses mapped to hard disk space in the form of a swap file (also known as a page file). Pages are copied back into main memory (called swapping) as soon as they are needed. These movements are performed in the background, invisible to programs.
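 
The mechanism relies on a "present" bit per page: an access to a non-resident page triggers a page fault, and the kernel brings the page in from disk before retrying. A toy version, with all the data structures invented for illustration:

```python
PAGE_SIZE = 4096

ram = {0: b"A" * PAGE_SIZE}            # frame number -> page contents
swap_file = {2: b"B" * PAGE_SIZE}      # swap slot -> page contents on "disk"

# Per-page entry: (present_in_ram, frame_number_or_swap_slot).
page_table = {0: (True, 0), 2: (False, 2)}
next_free_frame = 1

def read_page(vpn):
    """Return a page's contents, swapping it in from disk on a page fault."""
    global next_free_frame
    present, where = page_table[vpn]
    if not present:                    # page fault: bring the page into RAM
        frame = next_free_frame
        next_free_frame += 1
        ram[frame] = swap_file.pop(where)
        page_table[vpn] = (True, frame)
        where = frame
    return ram[where]

assert read_page(2) == b"B" * PAGE_SIZE   # first access triggers a swap-in
assert page_table[2] == (True, 1)         # page 2 is now resident in frame 1
```

A real kernel must also pick a victim page to evict when RAM is full; this sketch simply assumes a free frame is available.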


Source: Wikipedia