Computers

Posted by pompos 03/22/2009 @ 15:13

Tags : computers, technology

News headlines
HP, Compaq Notebook Computer Batteries Recalled - Consumer Affairs
Hewlett-Packard is recalling about 70000 lithium-ion batteries used in its HP and Compaq notebook computers because they can overheat, posing a fire and burn hazard. The firm and government regulators are aware of two reports of batteries that...
Brier Dudley: Not an easy time to pick a computer - Seattle Times
If you're shopping for a computer now, it may feel like purgatory. Early reviews of Windows 7 are glowing, but Microsoft's new operating system won't be available for at least three months. Meanwhile, most computers on store shelves have Windows Vista,...
Are computers transforming humanity? - Computerworld
But if primitive hand tools changed us from gatherers to hunters, and the invention of the printing press propagated literacy while downgrading the importance of the oral tradition, what individual and cultural transformations do new computer...
Medical Records: Internet-savvy Consumers Will Trade Some Privacy ... - Science Daily (press release)
"And we learned that, for the most part, patients are very comfortable with the idea of computers playing a central role in their care." In fact, she adds, patients said they not only want computers to bring them customized medical information,...
Police Blotter - Baltimore Sun
Burglary A DVD player and two laptop computers, all valued at nearly $2300, were stolen over the weekend from a house in the 2000 block of Riding Crop Way by someone who entered through a front window. Burglary An apartment in the 1600 block of...
Sugar Land Branch Library Presents Introductory Computer Class - FortBendNow
Fort Bend County Libraries' Adult Services staff at the Sugar Land Branch Library will present a free, introductory computer class, “Computers 101,” on Saturday, May 23, in the library's Tech Center. The class, which begins at 10 am, presents a basic...
Ghana: New Computer Lab for Suame Methodist JHS - AllAfrica.com
A NEW COMPUTER Science Laboratory, with the capacity to host about 30 pupils, has been constructed for the Suame Methodist Junior High School in Kumasi. The project, embarked upon by citizens of the Suame community domiciled in New York and New Jersey...
UC Davis seeing more students and faculty choosing Macs - Ars Technica
Tim Leamy, manager for the CLM, has conducted an annual survey of personal computer usage among students since 1997. According to the data that he sent to Ars, Macs accounted for 22 percent of student-owned computers at that time....
UK computer scientist hopes to unlock secrets of 2000-year-old scrolls - Kentucky.com
Brent Seales, a University of Kentucky computer science professor, specializes in reading unreadable ancient manuscripts using computer scans, pictured in Lexington, Ky., on Friday, May 15, 2009. On the screen behind him is a scan of the earliest...
SiCortex: New ENERGY STAR Ratings for Computer Servers a Step in ... - NewsBlaze
(BUSINESS WIRE) - ENERGY STAR recently announced a new ratings system for low-end computer servers. These specifications mark an industry milestone that will bring energy issues to the forefront in informing purchase decisions of computer servers...

Computer

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (1940–1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PC). Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices—for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.

The history of the modern computer begins with two separate technologies—that of automated calculation and that of programmability.

Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith, whose Tabulating Machine Company later became part of the Computing-Tabulating-Recording Company, the forerunner of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

Modern smartphones are, in a technical sense, fully programmable computers in their own right, and as of 2009 they may well be the most common form of such computers in existence.

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
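
The program referred to below is not reproduced in this text; as a hedged illustration of the same idea, a short Python sketch that performs a repetitive addition task (summing the numbers 1 through 1,000) might look like this:

# Illustrative sketch only (not the article's original listing):
# add up the numbers 1 through 1000 using a loop.
total = 0
number = 1
while number <= 1000:        # the flow of control jumps back here
    total = total + number   # the repetitive addition step
    number = number + 1
print(total)                 # prints 500500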

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.

In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.

Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" - become unresponsive to input such as mouse clicks or keystrokes - or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from—each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
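
As a hedged sketch of the stored-program idea (the opcodes below are invented for illustration, not taken from any real machine), a toy interpreter can keep its program and its data side by side in the same memory, treating both as plain numbers:

# A toy stored-program machine; the opcodes are made up for illustration.
# Memory holds both the program and its data as plain numbers.
memory = [
    1, 9,   # address 0: opcode 1 = "add the number in cell 9 to the accumulator"
    1, 10,  # address 2: add the number in cell 10
    2, 11,  # address 4: opcode 2 = "store the accumulator into cell 11"
    0, 0,   # address 6: opcode 0 = "halt"
    0,      # address 8: unused
    40, 2,  # addresses 9-10: the data being added
    0,      # address 11: where the result will be stored
]

accumulator = 0
pc = 0  # program counter: address of the next instruction
while True:
    opcode, operand = memory[pc], memory[pc + 1]
    pc += 2
    if opcode == 0:                     # HALT
        break
    elif opcode == 1:                   # ADD: accumulator += memory[operand]
        accumulator += memory[operand]
    elif opcode == 2:                   # STORE: memory[operand] = accumulator
        memory[operand] = accumulator

print(memory[11])  # prints 42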

While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers, it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.
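
In the same illustrative spirit, a trivial "assembler" is little more than a lookup table that replaces each mnemonic with its numerical opcode; the mnemonics and opcodes here are the invented ones from the sketch above, not those of any real assembly language:

# A toy assembler (mnemonics and opcodes are invented for illustration).
OPCODES = {"HALT": 0, "ADD": 1, "STORE": 2, "JUMP": 3}

def assemble(source):
    # Translate lines like "ADD 9" into pairs of numbers (machine code).
    machine_code = []
    for line in source.splitlines():
        parts = line.split()
        if not parts:
            continue
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code += [OPCODES[mnemonic], operand]
    return machine_code

print(assemble("ADD 9\nADD 10\nSTORE 11\nHALT"))
# prints [1, 9, 1, 10, 2, 11, 0, 0]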

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.

Suppose a computer is being employed to drive a traffic signal at an intersection between two streets. The computer has the following three basic instructions.

Comments are marked with a // on the left margin. Assume the streetnames are Broadway and Main.
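
The original pseudocode listing is not reproduced in this text. As a hedged reconstruction of the idea, the sketch below uses three hypothetical primitives (turning a named light on, turning it off, and waiting a number of seconds), written in Python with # comments standing in for the // comments mentioned above:

# Hedged reconstruction; turn_on, turn_off and wait are hypothetical stand-ins
# for the three basic instructions, not real hardware calls.
def turn_on(street, colour):  print(street + ": " + colour + " light on")
def turn_off(street, colour): print(street + ": " + colour + " light off")
def wait(seconds):            pass   # a real controller would pause here

def normal_cycle_pass():
    # One pass through the normal red/green/yellow sequence on both streets.
    turn_on("Broadway", "green"); turn_on("Main", "red")
    wait(60)
    turn_off("Broadway", "green"); turn_on("Broadway", "yellow")
    wait(5)
    turn_off("Broadway", "yellow"); turn_on("Broadway", "red")
    turn_off("Main", "red"); turn_on("Main", "green")
    wait(60)
    turn_off("Main", "green"); turn_on("Main", "yellow")
    wait(5)
    turn_off("Main", "yellow"); turn_off("Broadway", "red")

while True:                    # repeat the cycle forever
    normal_cycle_pass()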

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again on both streets.
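
Continuing the same hedged sketch, a second program can first check a maintenance switch (switch_is_on() below is another hypothetical stand-in) and flash red on both streets while it is set, falling back to the normal cycle otherwise:

# Continues the sketch above; reuses turn_on, turn_off, wait and
# normal_cycle_pass. switch_is_on() is a hypothetical switch test.
def switch_is_on():
    return False               # stub: a real controller would read the switch

while True:
    if switch_is_on():
        # flash-red program: both streets blink red once per second
        turn_on("Broadway", "red"); turn_on("Main", "red")
        wait(1)
        turn_off("Broadway", "red"); turn_off("Main", "red")
        wait(1)
    else:
        normal_cycle_pass()    # one pass of the normal sequence shown above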

In this manner, the traffic signal will run a flash-red program when the switch is on, and will run the normal program when the switch is off. Both of these program examples show the basic layout of a computer program in a simple, familiar context of a traffic signal. Any experienced programmer can spot many software bugs in the program, for instance, not making sure that the green light is off when the switch is set to flash red. However, to remove all possible bugs would make this program much longer and more complicated, and would be confusing to nontechnical readers: the aim of this example is a simple demonstration of how computer instructions are laid out.

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program—and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
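
For example, a machine whose ALU can only add could still be programmed to multiply by repeated addition; a minimal Python sketch of that idea:

def multiply(a, b):
    # Multiply two non-negative integers using nothing but addition,
    # the way a computer with an add-only ALU might be programmed to.
    result = 0
    for _ in range(b):         # add a to the running total, b times
        result = result + a
    return result

print(multiply(6, 7))          # prints 42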

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
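
A short illustration of these operations applied bit by bit to two small binary numbers:

a, b = 0b1100, 0b1010               # two 4-bit values
print(format(a & b, "04b"))         # AND -> 1000
print(format(a | b, "04b"))         # OR  -> 1110
print(format(a ^ b, "04b"))         # XOR -> 0110
print(format(~a & 0b1111, "04b"))   # NOT (masked to 4 bits) -> 0011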

Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
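
This description can be mimicked almost literally; a hedged sketch that models memory as a Python list indexed by cell number:

memory = [0] * 4096                          # 4096 numbered cells, all zero

memory[1357] = 123                           # "put the number 123 into cell 1357"
memory[2468] = 7
memory[1595] = memory[1357] + memory[2468]   # add cell 1357 to cell 2468 and
                                             # put the answer into cell 1595
print(memory[1595])                          # prints 130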

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
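
As a small illustration, the same eight-bit pattern can be read either as an unsigned value between 0 and 255 or as a two's complement value between -128 and +127, and larger numbers simply occupy several consecutive bytes:

pattern = 0b11111111                     # all eight bits set
print(pattern)                           # read as unsigned: 255
print(int.from_bytes(bytes([pattern]), "big", signed=True))   # two's complement: -1

print((1_000_000).to_bytes(4, "big"))    # one million stored in four bytes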

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.

One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
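
A hedged sketch of the time-slicing idea, using Python generators to stand in for programs that are repeatedly paused and resumed (real operating systems use hardware interrupts rather than this cooperative trick):

def program(name, steps):
    # A stand-in "program" that can be paused after every step.
    for i in range(steps):
        yield name + ": step " + str(i)

running = [program("editor", 3), program("music player", 3)]
while running:
    current = running.pop(0)       # pick the next program in turn
    try:
        print(next(current))       # give it one "time slice"
        running.append(current)    # then send it to the back of the queue
    except StopIteration:
        pass                       # that program has finished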

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.

Some computers may divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly specialized architectures that differ significantly from the basic stored-program architecture and from general purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area somewhere between hardware and software.

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.




Personal computer game


A personal computer game (also known as a computer game or simply PC game) is a game played on a personal computer, rather than on a video game console or arcade machine. Computer games have evolved from the simple graphics and gameplay of early titles like Spacewar!, to a wide range of more visually advanced titles.

PC games are created by one or more game developers, often in conjunction with other specialists (such as game artists) and either published independently or through a third party publisher. They may then be distributed on physical media such as DVDs and CDs, as Internet-downloadable shareware, or through online delivery services such as Direct2Drive and Steam. PC games often require specialized hardware in the user's computer in order to play, such as a specific generation of graphics processing unit or an Internet connection for online play, although these system requirements vary from game to game.

Although personal computers only became popular with the development of the microprocessor, computer gaming on mainframes and minicomputers has existed since at least the 1960s. One of the first computer games was developed in 1961, when MIT students Martin Graetz and Alan Kotok, with MIT employee Steve Russell, developed Spacewar! on a PDP-1 computer used for statistical calculations.

The first generation of PC games were often text adventures or interactive fiction, in which the player communicated with the computer by entering commands through a keyboard. The first text adventure, Adventure, was developed for the PDP-10 by Will Crowther in 1976, and expanded by Don Woods in 1977. By the 1980s, personal computers had become powerful enough to run games like Adventure, but by this time, graphics were beginning to become an important factor in games. Later games combined textual commands with basic graphics, as seen in the SSI Gold Box games such as Pool of Radiance, or in Bard's Tale.

By the mid-1970s, games were developed and distributed through hobbyist groups and gaming magazines, such as Creative Computing and later Computer Gaming World. These publications provided game code that could be typed into a computer and played, encouraging readers to submit their own software to competitions.

Microchess was one of the first games for microcomputers which was sold to the public. First sold in 1977, Microchess eventually sold over 50,000 copies on cassette tape.

As the video game market became flooded with poor-quality games created by numerous companies attempting to enter the market, and over-produced, high-profile releases such as the Atari 2600 adaptations of E.T. and Pac-Man grossly underperformed, the popularity of personal computers for education rose dramatically. In 1983, consumer interest in console video games dwindled to historical lows, as interest in computer games rose.

The effects of the crash were largely limited to the console market, as established companies such as Atari posted record losses over subsequent years. Conversely, the home computer market boomed, as sales of low-cost color computers such as the Commodore 64 rose to record highs and developers such as Electronic Arts benefited from increasing interest in the platform.

The console market experienced a resurgence in the United States with the release of the Nintendo Entertainment System. In Europe, computer gaming continued to boom for many years after.

Increasing adoption of the computer mouse, driven partly by the success of games such as the King's Quest series, and of high-resolution bitmap displays allowed the industry to include increasingly high-quality graphical interfaces in new releases. Meanwhile, the Commodore Amiga computer achieved great success in the market from its release in 1985, contributing to the rapid adoption of these new interface technologies.

Further improvements to game artwork were made possible with the introduction of the first sound cards, such as AdLib's Music Synthesizer Card, in 1987. These cards allowed IBM PC compatible computers to produce complex sounds using FM synthesis, where they had previously been limited to simple tones and beeps. However, the rise of the Creative Labs Sound Blaster card, which featured much higher sound quality due to the inclusion of a PCM channel and digital signal processor, led AdLib to file for bankruptcy in 1992.

The year before, id Software had produced one of the first first-person shooter games, Hovertank 3D, which was the company's first in their line of highly influential games in the genre. The same team went on to develop Wolfenstein 3D in 1992, which helped to popularize the genre, kick-starting what would become one of the highest-selling genres in modern times. The game was originally distributed through the shareware distribution model, allowing players to try a limited part of the game for free but requiring payment to play the rest, and represented one of the first uses of texture mapping graphics in a popular game, along with Ultima Underworld.

While leading Sega and Nintendo console systems kept their CPU speed at 3–7 MHz, the 486 PC processor ran much faster, allowing it to perform many more calculations per second. The 1993 release of Doom on the PC was a breakthrough in 3D graphics, and was soon ported to various game consoles in a general shift toward greater realism. In the same time frame, games such as Myst took advantage of the new CD-ROM delivery format to include many more assets (sound, images, video) for a richer game experience.

Many early PC games included extras such as the peril-sensitive sunglasses that shipped with The Hitchhiker's Guide to the Galaxy. These extras gradually became less common, but many games were still sold in the traditional over-sized boxes that used to hold the extra "feelies". Today, such extras are usually found only in Special Edition versions of games, such as Battlechests from Blizzard.

By 1996, the rise of Microsoft Windows and success of 3D console titles such as Super Mario 64 sparked great interest in hardware accelerated 3D graphics on the PC, and soon resulted in attempts to produce affordable solutions with the ATI Rage, Matrox Mystique and S3 ViRGE. Tomb Raider, which was released in 1996, was one of the first third-person shooter games and was praised for its revolutionary graphics. As 3D graphics libraries such as DirectX and OpenGL matured and knocked proprietary interfaces out of the market, these platforms gained greater acceptance in the market, particularly with their demonstrated benefits in games such as Unreal. However, major changes to the Microsoft Windows operating system, by then the market leader, made many older MS-DOS-based games unplayable on Windows NT, and later, Windows XP (without using an emulator, such as DOSBox).

The faster graphics accelerators and improving CPU technology resulted in increasing levels of realism in computer games. During this time, the improvements introduced with products such as ATI's Radeon R300 and NVidia's GeForce 6 Series have allowed developers to increase the complexity of modern game engines. PC gaming currently tends strongly toward improvements in 3D graphics.

Unlike the generally accepted push for improved graphical performance, the use of physics engines in computer games has become a matter of debate since the announcement and 2005 release of the AGEIA PhysX PPU (later acquired by Nvidia), ostensibly competing with middleware such as the Havok physics engine. Issues such as difficulty in ensuring consistent experiences for all players, and the uncertain benefit of first generation PhysX cards in games such as Tom Clancy's Ghost Recon Advanced Warfighter and City of Villains, prompted arguments over the value of such technology.

Similarly, many game publishers began to experiment with new forms of marketing. Chief among these alternative strategies is episodic gaming, an adaptation of the older concept of expansion packs, in which game content is provided in smaller quantities but for a proportionally lower price. Titles such as Half-Life 2: Episode One took advantage of the idea, with mixed results arising from concerns about the amount of content provided for the price.

Game development, as with console games, is generally undertaken by one or more game developers using either standardised or proprietary tools. While games could previously be developed by very small groups of people, as in the early example of Wolfenstein 3D, many popular computer games today require large development teams and budgets running into the millions of dollars.

PC games are usually built around a central piece of software, known as a game engine, that simplifies the development process and enables developers to easily port their projects between platforms. Unlike most consoles, which generally only run major engines such as Unreal Engine 3 and RenderWare due to restrictions on homebrew software, personal computers may run games developed using a larger range of software. As such, a number of alternatives to expensive engines have become available, including open source solutions such as Crystal Space, OGRE and DarkPlaces.

The multi-purpose nature of personal computers often allows users to modify the content of installed games with relative ease. Since console games are generally difficult to modify without a proprietary software development kit, and are often protected by legal and physical barriers against tampering and homebrew software, it is generally easier to modify the personal computer version of games using common, easy-to-obtain software. Users can then distribute their customised version of the game (commonly known as a mod) by any means they choose.

The inclusion of map editors such as UnrealEd with the retail versions of many games, and others that have been made available online such as GtkRadiant, allow users to create modifications for games easily, using tools that are maintained by the games' original developers. In addition, companies such as id Software have released the source code to older game engines, enabling the creation of entirely new games and major changes to existing ones.

Modding has allowed much of the community to produce game elements that would not normally be provided by the developer of the game, expanding or modifying normal gameplay to varying degrees. One notable example is the Hot Coffee mod for the PC port of Grand Theft Auto: San Andreas, which enables access to an abandoned sex minigame by simply modifying a bit of the game's data file.

Computer games are typically sold on standard storage media, such as compact discs, DVD, and floppy disks. These were originally passed on to customers through mail order services, although retail distribution has replaced it as the main distribution channel for video games due to higher sales. Different formats of floppy disks were initially the staple storage media of the 1980s and early 1990s, but have fallen out of practical use as the increasing sophistication of computer games raised the overall size of the game's data and program files.

The introduction of complex graphics engines in recent times has resulted in additional storage requirements for modern games, and thus an increasing interest in CDs and DVDs as the next compact storage media for personal computer games. The rising popularity of DVD drives in modern PCs, and the larger capacity of the new media (a single-layer DVD can hold up to 4.7 gigabytes of data, more than five times as much as a single CD), have resulted in their adoption as a format for computer game distribution. To date, CD versions are still offered for most games, while some games offer both the CD and the DVD versions.

Shareware marketing, whereby a limited or demonstration version of the full game is released to prospective buyers without charge, has been used as a method of distributing computer games since the early years of the gaming industry and was seen in the early days of Tanarus as well as many others. Shareware games generally offer only a small part of the gameplay offered in the retail product, and may be distributed with gaming magazines, in retail stores or on developers' websites free of charge.

In the early 1990s, shareware distribution was common among fledgling game companies such as Apogee Software, Epic Megagames and id Software, and remains a popular distribution method among smaller game developers. However, shareware has largely fallen out of favor among established game companies in favor of traditional retail marketing, with notable exceptions such as Big Fish Games and PopCap Games continuing to use the model today.

With the increased popularity of the Internet, online distribution of game content has become more common. Retail services such as Direct2Drive and Download.com allow users to purchase and download large games that would otherwise only be distributed on physical media, such as DVDs, as well as providing cheap distribution of shareware and demonstration games. Other services allow a subscription-based distribution model in which users pay a monthly fee to download and play as many games as they wish.

The Steam system, developed by Valve Corporation, provides an alternative to traditional online services. Instead of allowing the player to download a game and play it immediately, games are made available for "pre-load" in an encrypted form days or weeks before their actual release date. On the official release date, a relatively small component is made available to unlock the game. Steam also ensures that once bought, a game remains accessible to a customer indefinitely, while traditional mediums such as floppy disks and CD-ROMs are susceptible to unrecoverable damage and misplacement. The user does, however, depend on the Steam servers being online in order to download purchased games. According to the terms of service for Steam, Valve has no obligation to keep the servers running; therefore, if the Valve Corporation shut down, so would the servers.

The real-time strategy genre, which accounts for more than a quarter of all PC games sold, has found very little success on video game consoles, with releases such as Starcraft 64 failing in the marketplace. Strategy games tend to suffer from the design of console controllers, which do not allow fast, accurate movement.

Conversely, action games have found considerable popularity on video game consoles, making up nearly a third of all console video games sold in 2004, compared to just four percent on the computer. Sports games have also found greater support on game consoles compared to personal computers.

Modern computer games place great demand on the computer's hardware, often requiring a fast central processing unit (CPU) to function properly. CPU manufacturers historically relied mainly on increasing clock rates to improve the performance of their processors, but had begun to move steadily towards multi-core CPUs by 2005. These processors allow the computer to simultaneously process multiple tasks, called threads, allowing the use of more complex graphics, artificial intelligence and in-game physics.

Similarly, 3D games often rely on a powerful graphics processing unit (GPU), which accelerates the process of drawing complex scenes in realtime. GPUs may be an integrated part of the computer's motherboard, the most common solution in laptops, or come packaged with a discrete graphics card with a supply of dedicated Video RAM, connected to the motherboard through either an AGP or PCI-Express port. It is also possible to use multiple GPUs in a single computer, using technologies such as NVidia's Scalable Link Interface and ATI's CrossFire.

Sound cards are also available to provide improved audio in computer games. These cards provide improved 3D audio and provide audio enhancement that is generally not available with integrated alternatives, at the cost of marginally lower overall performance. The Creative Labs SoundBlaster line was for many years the de facto standard for sound cards, although its popularity dwindled as PC audio became a commodity on modern motherboards.

Physics processing units (PPUs), such as the Nvidia PhysX (formerly AGEIA PhysX) card, are also available to accelerate physics simulations in modern computer games. PPUs allow the computer to process more complex interactions among objects than is achievable using only the CPU, potentially allowing players a much greater degree of control over the world in games designed to use the card.

Virtually all personal computers use a keyboard and mouse for user input. Other common gaming peripherals are a headset for faster communication in online games, joysticks for flight simulators, steering wheels for driving games and gamepads for console-style games.

Computer games also rely on third-party software such as an operating system (OS), device drivers, libraries and more to run. Today, the vast majority of computer games are designed to run on the Microsoft Windows OS. Whereas earlier games written for MS-DOS would include code to communicate directly with hardware, today application programming interfaces (APIs) provide an interface between the game and the OS, simplifying game design. Microsoft's DirectX is an API that is widely used by today's computer games to communicate with sound and graphics hardware. OpenGL is a cross-platform API for graphics rendering that is also used. The version of the graphics card's driver installed can often affect game performance and gameplay. It is not unusual for a game company to use a third-party game engine, or third-party libraries for a game's AI or physics.

Multiplayer gaming was largely limited to local area networks (LANs) before cost-effective broadband Internet access became available, because LANs offered higher bandwidth and lower latency than the dial-up services of the time. These advantages allowed more players to join any given computer game, and LAN gaming has persisted even today because of the higher latency of many Internet connections and the costs associated with broadband Internet.

LAN gaming typically requires two or more personal computers, a router and sufficient networking cables to connect every computer on the network. Additionally, each computer must have a network card installed or integrated onto its motherboard in order to communicate with other computers on the network. Optionally, any LAN may include an external connection to the Internet.

Online multiplayer games have achieved popularity largely as a result of increasing broadband adoption among consumers. Affordable high-bandwidth Internet connections allow large numbers of players to play together, and thus have found particular use in massively multiplayer online RPGs, Tanarus and persistent online games such as World War II Online.

Although it is possible to participate in online computer games using dial-up modems, broadband internet connections are generally considered necessary in order to reduce the latency between players (commonly known as "lag"). Such connections require a broadband-compatible modem connected to the personal computer through a network interface card (generally integrated onto the computer's motherboard), optionally separated by a router. Online games require a virtual environment, generally called a "game server." These virtual servers inter-connect gamers, allowing real-time and often fast-paced action. To meet this need, game server providers (GSPs) have become increasingly popular over the last half decade. While not required for all gamers, these servers provide a customizable "home" (with additional modifications, settings, and so on), giving gamers the experience they desire. Today there are over 500,000 game servers hosted in North America alone.

Emulation software, used to run software without the original hardware, is popular for its ability to play legacy video games without the consoles or operating system for which they were designed. Console emulators such as NESticle and MAME are relatively commonplace, although the complexity of modern consoles such as the Xbox or PlayStation makes them far more difficult to emulate, even for the original manufacturers.

Most emulation software mimics a particular hardware architecture, often to an extremely high degree of accuracy. This is particularly the case with classic home computers such as the Commodore 64, whose software often depends on highly sophisticated low-level programming tricks invented by game programmers and the demoscene.

PC games have long been a source of controversy, particularly related to the violence that has become commonly associated with video gaming in general. The debate surrounds the influence of objectionable content on the social development of minors, with organisations such as the American Psychological Association concluding that video game violence increases children's aggression, a concern that prompted a further investigation by the Centers for Disease Control and Prevention in September 2006. Industry groups have responded by noting the responsibility of parents in governing their children's activities, while attempts in the United States to control the sale of objectionable games have generally been found unconstitutional.

Video game addiction is another cultural aspect of gaming to draw criticism as it can have a negative influence on health and on social relations. The problem of addiction and its health risks seems to have grown with the rise of Massively Multiplayer Online Role Playing Games (MMORPGs).




Laptop

Laptop computers are portable and can be used in many locations (former Mexican President Vicente Fox pictured).

A laptop (also known as a notebook) is a personal computer designed for mobile use that is small enough to sit on one's lap. A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, or a pointing stick) as well as a battery, into a single small and light unit. The rechargeable battery is charged from an AC/DC adapter (i.e., a "wall wart") and typically stores enough energy to run the laptop for several hours.

Laptops are usually shaped like a large notebook with thicknesses of 0.7 to 1.5 inches (18–38 mm) and dimensions ranging from 10x8 inches (27x22 cm, 13" display) to 15x11 inches (39x28 cm, 17" display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg); older laptops were usually heavier. Most laptops are designed in the flip form factor to protect the screen and the keyboard when closed. Modern 'tablet' laptops have a complex joint between the keyboard housing and the display, permitting the display panel to twist and then lay flat on the keyboard housing. They usually have a touchscreen display and some include handwriting recognition or graphics drawing capability.

Laptops were originally considered to be "a small niche market" and were thought suitable mostly for "specialized field applications" such as "the military, the Internal Revenue Service, accountants and sales representatives". Battery-powered portable computers had just 2% worldwide market share in 1986. But today, there are already more laptops than desktops in businesses, and laptops are becoming obligatory for student use and more popular for general use. According to a forecast by Intel, more laptops than desktops will be sold in the general PC market as soon as 2009.

As the personal computer became feasible in the early 1970s, the idea of a portable personal computer followed. In particular, a "personal, portable information manipulator" was imagined by Alan Kay at Xerox PARC in 1968 and described in his 1972 paper as the "Dynabook".

The IBM SCAMP (Special Computer APL Machine Portable) project was demonstrated in 1973. This prototype was based on the PALM processor (Put All Logic In Microcode).

The IBM 5100, the first commercially available portable computer, appeared in September 1975 and was based on the SCAMP prototype.

As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The Osborne 1, released in 1981, used the Zilog Z80 and weighed 23.5 pounds (10.7 kg). It had no battery, only a tiny 5" CRT screen and dual 5¼" single-density floppy drives. In the same year the first laptop-sized portable computer, the Epson HX-20, was announced. The Epson had an LCD screen, a rechargeable battery and a calculator-size printer in a 1.6 kg (4 pound) chassis. Both Tandy/Radio Shack and HP also produced portable computers of varying designs during this period.

The first laptop using the clamshell design, used today by almost all laptops, appeared in 1982. The $8,150 GRiD Compass 1100 was used by NASA and the military, among others. The Gavilan SC, released in 1983, was the first notebook marketed using the term "laptop".

Early laptops often had proprietary and incompatible system architectures, operating systems, and bundled applications, making third party hardware and software difficult and sometimes impossible to develop.

A desktop replacement computer is a laptop that provides most of the capabilities of a desktop computer, with a similar level of performance. Desktop replacements are usually larger and heavier than standard laptops. They contain more powerful components and numerous ports, and have a 15.4" or larger display. Because of their bulk, they are not as portable as other laptops and their operation time on batteries is typically shorter.

Some laptops in this class use a limited range of desktop components to provide better performance for the same price at the expense of battery life; in a few of those models, there is no battery at all, and the laptop can only be used when plugged in. These are sometimes called desknotes, a portmanteau of the words "desktop" and "notebook," though the term can also be applied to desktop replacement computers in general.

In the early 2000s, desktops were more powerful, easier to upgrade, and much cheaper than laptops, but in the last few years that gap has shrunk drastically as laptop performance has markedly increased. In the second half of 2008, laptops outsold desktops for the first time. In the U.S., PC shipments declined 10 percent in the fourth quarter of 2008, while in Asia shipments grew only 1.8 percent over the same quarter of the previous year, the worst growth recorded since PC shipment statistics began to be collected.

The names "Media Center Laptops" and "Gaming Laptops" are also used to describe this class of notebooks.

Although the term Notebook is now often used interchangeably with the term Laptop, it was originally introduced to differentiate a smaller, thinner and lighter range of devices (comparable with a traditional paper notebook) which supplanted their larger counterparts.

A subnotebook, also called an ultraportable by some vendors, is a laptop designed and marketed with an emphasis on portability (small size, low weight and long battery life) that retains the performance of a standard notebook. Subnotebooks are usually smaller and lighter than standard laptops, weighing between 0.8 and 2 kg (2 to 5 pounds); the battery life can exceed 10 hours when a large battery or an additional battery pack is installed.

To achieve the size and weight reductions, ultraportables use high resolution 13" and smaller screens (down to 6.4"), have relatively few ports, employ expensive components designed for minimal size and best power efficiency, and utilize advanced materials and construction methods. Some subnotebooks achieve a further portability improvement by omitting an optical/removable media drive; in this case they may be paired with a docking station that contains the drive and optionally more ports or an additional battery.

The term "subnotebook" is usually reserved to laptops that run general-purpose desktop operating systems such as Windows, Linux or Mac OS X, rather than specialized software such as Windows CE, Palm OS or Internet Tablet OS.

Netbooks are laptops that are lightweight, economical, energy-efficient and especially suited for wireless communication and Internet access. Hence the name netbook (as "the device excels in web-based computing performance") rather than notebook, which pertains to size.

Especially suited for web browsing and e-mailing, netbooks "rely heavily on the Internet for remote access to web-based applications" and are targeted increasingly at cloud computing users who rely on servers and require a less powerful client computer. While the devices range in size from below 5 inches to over 12, most are between 7 and 11 inches and weigh between 2 and 3 pounds.

Netbooks run a range of lightweight operating systems, including Linux and Windows XP, rather than more resource-intensive operating systems like Windows Vista, as they have less processing power than traditional laptops.

A rugged (or ruggedized) laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures and wet or dusty environments. Rugged laptops are usually designed from scratch, rather than adapted from regular consumer laptop models. Rugged notebooks are bulkier, heavier, and much more expensive than regular laptops, and thus are seldom seen in regular consumer use.

Design features found in rugged laptops include rubber sheeting under the keyboard keys; sealed port and connector covers; passive cooling; super-bright displays easily readable in daylight; cases and frames made of magnesium alloy, or fitted with a magnesium-alloy rollcage, that are much stronger than the plastic found in commercial laptops; and solid-state storage devices or shock-mounted hard disk drives that withstand constant vibration. Rugged laptops are commonly used by public safety services (police, fire and medical emergency), the military, utilities, field service technicians, and construction, mining and oil drilling personnel. Rugged laptops are usually sold to organizations rather than individuals, and are rarely marketed via retail channels.

The basic components of laptops are similar in function to their desktop counterparts, but are miniaturized, adapted to mobile use, and designed for low power consumption. Because of the additional requirements, laptop components have worse performance than desktop parts of comparable price. Furthermore, the design bounds on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components.

A docking station is a relatively bulky laptop accessory that contains multiple ports, expansion slots and bays for fixed or removable drives. A laptop connects and disconnects easily to a docking station, typically through a single large proprietary connector. A port replicator is a simplified docking station that only provides connections from the laptop to input/output ports. Both docking stations and port replicators are intended to be used at a permanent working place (a desk) to offer instant connection to multiple input/output devices and to extend a laptop's capabilities.

Docking stations became a common laptop accessory in the early 1990s. The most common use was in corporate computing environments where the company had standardized on a common network card, and that same card was placed into the docking station. These stations were very large and quite expensive. As the need for additional storage and expansion slots became less critical because of the high integration inside the laptop, the port replicator gained popularity. The port replicator was a cheaper, often passive device that simply mated to the connectors on the back of the notebook and allowed the user to quickly connect the laptop so that a monitor, keyboard, printer and other devices were instantly attached. As higher-speed ports such as USB and FireWire became common, the connection of a port replicator to a laptop came to be accomplished by a small cable connected to one of the USB or FireWire ports on the notebook. Wireless port replicators are available as well.

A recent variant of the port replicator is the combined power/display/USB hub cable found in the new Apple Cinema Display.

Some laptop components (optical drives, hard drives, memory and internal expansion cards) are relatively standardized, and it is possible to upgrade or replace them in many laptops as long as the new part is of the same type. Subtle incompatibilities and variations in dimensions, however, are not uncommon. Depending on the manufacturer and model, a laptop may range from having several standard, easily customizable and upgradeable parts to a proprietary design that can't be reconfigured at all.

In general, components other than the four categories listed above are not intended to be replaceable, and thus rarely follow a standard. In particular, motherboards, locations of ports, and the design and placement of internal components are usually make- and model-specific. Those parts are neither interchangeable with parts from other manufacturers nor upgradeable; if broken or damaged, they must be substituted with an exact replacement part. Users unfamiliar with these constraints are the most affected by incompatibilities, especially if they attempt to connect their laptops to incompatible hardware or power adapters.

Intel, Asus, Compal, Quanta and other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards.

While the performance of mainstream desktops and laptops is comparable, laptops are significantly more expensive than desktop PCs at the same performance level. The upper limits of performance of laptops are a little bit lower, and "bleeding-edge" features usually appear first in desktops and only then, as the underlying technology matures, are adapted to laptops.

However, for Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even netbook-class laptops are generally fast enough. Standard laptops are sufficiently powerful for high-resolution movie playback, 3D gaming and video editing and encoding. Number-crunching software (databases, math, engineering, financial, etc.) is the area where the laptops are at the biggest disadvantage.

Upgradeability of laptops is very limited compared to desktops, which are thoroughly standardized. In general, hard drives and memory can be upgraded easily. Optical drives and internal expansion cards may be upgraded if they follow an industry standard, but all other internal components, including the CPU and graphics, are not intended to be upgradeable.

The reasons for limited upgradeability are both technical and economic. There is no industry-wide standard form factor for laptops; each major laptop manufacturer pursues its own proprietary design and construction, with the result that laptops are difficult to upgrade and have high repair costs. With few exceptions, laptop components can rarely be swapped between laptops of competing manufacturers, or even between laptops from the different product-lines of the same manufacturer.

Some upgrades can be performed by adding external devices, either via USB or in an expansion card format such as PC Card: sound cards, network adapters, hard and optical drives, and numerous other peripherals are available. But those upgrades usually impair the laptop's portability, because they add cables and boxes to the setup and often have to be disconnected and reconnected when the laptop is moved.

Because of their small and flat keyboard and trackpad pointing devices, prolonged use of laptops can cause repetitive strain injury. Usage of separate, external ergonomic keyboards and pointing devices is recommended to prevent injury when working for long periods of time; they can be connected to a laptop easily by USB or via a docking station. Some health standards require ergonomic keyboards at workplaces.

The integrated screen often causes users to hunch over for a better view, which can cause neck or spinal injuries. A larger and higher-quality external screen can be connected to almost any laptop to alleviate this and to provide additional screen real estate for more productive work.

A study by State University of New York researchers found that heat generated from laptops can raise the temperature of the scrotum when balancing the computer on one's lap, potentially putting sperm count at risk. The small study, which included little more than two dozen men aged 21 to 35, found that the sitting position required to balance a laptop can raise scrotum temperature by as much as 2.1 °C (3.8 °F). Heat from the laptop itself can raise the temperature by another 0.7 °C (1.4 °F), bringing the potential total increase to 2.8 °C (5.2 °F). However, further research is needed to determine whether this directly affects sterility in men.

A common practical solution to this problem is to place the laptop on a table or desk. Another solution is a laptop cooling pad: a thin, hard plastic case, usually USB-powered and housing one to three cooling fans, that is designed to sit under the laptop, keeping it cool to the touch and carrying heat away from it. Several companies make these coolers.

Heat from using a laptop on the lap can also cause skin discoloration on the thighs.

Due to their portability, laptops are subject to more wear and physical damage than desktops. Components such as screen hinges, latches, power jacks and power cords deteriorate gradually due to ordinary use. A liquid spill onto the keyboard, a rather minor mishap with a desktop system, can damage the internals of a laptop and result in a costly repair. One study found that a laptop is 3 times more likely to break during the first year of use than a desktop.

Original external components are expensive; a replacement AC adapter, for example, could cost $75. Other parts are inexpensive (a power jack can cost a few dollars), but their replacement may require extensive disassembly and reassembly of the laptop by a technician. Other inexpensive but fragile parts often cannot be purchased separately from larger, more expensive components. The repair cost of a failed motherboard or LCD panel may exceed the value of a used laptop.

Laptops rely on extremely compact cooling systems involving a fan and heat sink that can fail due to eventual clogging by accumulated airborne dust and debris. Most laptops do not have any sort of removable dust collection filter over the air intake for these cooling systems, resulting in a system that gradually runs hotter and louder as the years pass. Eventually the laptop starts to overheat even at idle load levels. This dust is usually stuck inside where casual cleaning and vacuuming cannot remove it. Instead, a complete disassembly is needed to clean the laptop.

Battery life of laptops is limited; the capacity drops with time, necessitating an eventual replacement after a few years.

Being expensive, common and portable, laptops are prized targets for theft. The cost of the stolen business or personal data and of the resulting problems (identity theft, credit card fraud, breach of privacy laws) can be many times the value of the stolen laptop itself. Therefore, both physical protection of laptops and the safeguarding of data contained on them are of the highest importance.

Most laptops have a Kensington security slot which is used to tether the computer to a desk or other immovable object with a security cable and lock. In addition to this, modern operating systems and third-party software offer disk encryption functionality that renders the data on the laptop's hard drive unreadable without a key or a passphrase.

There are several categories of portable computing devices that can run on batteries but are not usually classified as laptops: portable computers, keyboardless tablet PCs, Internet tablets, PDAs, Ultra Mobile PCs (UMPCs) and smartphones.

A portable computer is a general-purpose computer that can be easily moved from place to place, but cannot be used while in transit, usually because it requires some "setting-up" and an AC power source. The most famous example is the Osborne 1. Such machines are also called "transportable" or "luggable" PCs.

A tablet PC that lacks a keyboard (also known as a non-convertible tablet PC) is shaped like a slate or a paper notebook and features a touchscreen with a stylus and handwriting recognition software. Tablets may not be best suited for applications requiring a physical keyboard for typing, but are otherwise capable of carrying out most tasks that an ordinary laptop can perform.

An Internet tablet is an Internet appliance in tablet form. Unlike a tablet PC, an Internet tablet does not have much computing power and its application suite is limited; it cannot replace a general-purpose computer. Internet tablets typically feature an MP3 and video player, a web browser, a chat application and a picture viewer.

A personal digital assistant (PDA) is a small, usually pocket-sized, computer with limited functionality. It is intended to supplement and to synchronize with a desktop computer, giving access to contacts, address book, notes, e-mail and other features.

An Ultra Mobile PC is a full-featured, PDA-sized computer running a general-purpose operating system.

A smartphone is a PDA with integrated cellphone functionality. Current smartphones have a wide range of features and installable applications.

Boundaries that separate these categories are blurry at times. For example, the OQO UMPC is also a PDA-sized tablet PC; the Apple eMate had the clamshell form factor of a laptop, but ran PDA software. The HP Omnibook line of laptops included some devices small enough to be called Ultra Mobile PCs. The hardware of the Nokia 770 internet tablet is essentially the same as that of a PDA such as the Zaurus 6000; the only reason it's not called a PDA is that it doesn't have PIM software. On the other hand, both the 770 and the Zaurus can run some desktop Linux software, usually with modifications.

There is a multitude of laptop brands and manufacturers; several major brands offer notebooks across the various classes described above.

The major brands usually offer good service and support, including well-executed documentation and driver downloads that will remain available for many years after a particular laptop model is no longer produced. Capitalizing on service, support and brand image, laptops from major brands are more expensive than laptops by smaller brands and ODMs.

Some brands specialize in a particular class of laptops, such as gaming laptops (Alienware), netbooks (Asus Eee PC) and laptops for children (OLPC).

Many brands, including the major ones, do not design and do not manufacture their laptops. Instead, a small number of Original Design Manufacturers (ODMs) design new models of laptops, and the brands choose the models to be included in their lineup. In 2006, 7 major ODMs manufactured 7 of every 10 laptops in the world, with the largest one (Quanta Computer) having 30% world market share. Therefore, there often are identical models available both from a major label and from a low-profile ODM in-house brand.

An estimated 145.9 million notebooks were sold in 2008, and the number is forecast to grow to 177.7 million in 2009. The third quarter of 2008 was the first time notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units.




Central processing unit

Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.

A central processing unit (CPU) or processor is an electronic circuit that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. The term itself and its initialism have been in use in the computer industry at least since the early 1960s (Weik 1961). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same.

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.

Prior to the advent of machines that resemble today's CPUs, computers such as the ENIAC had to be physically rewired in order to perform different tasks. These machines are often referred to as "fixed-program computers," since they had to be physically reconfigured in order to run a different program. Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.

The idea of a stored-program computer was already present during ENIAC's design, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949 (von Neumann 1945). EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the large amount of time and effort it took to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.

While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him such as Konrad Zuse had suggested similar ideas. Additionally, the so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.

Being digital devices, all CPUs deal with discrete states and therefore require some kind of switching elements to differentiate between and change these states. Prior to commercial acceptance of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational and eventually stop functioning altogether. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failing component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.

Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely (Weik 1961:238). In the end, tube based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.

The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components.

During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, and then thousands.

In 1964, IBM introduced its System/360 computer architecture which was used in a series of computers that could run the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs (Amdahl et al. 1964). The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In the same year (1964), Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line that originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits (Digital Equipment Corporation 1975).

Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability as well as the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.

The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs. Since the introduction of the first microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term "CPU" is now applied almost exclusively to microprocessors.

Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.

While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.

The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.

The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. In other words, the program counter keeps track of the CPU's place in the current program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units. Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).

The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.

After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.

The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.

After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "Classic RISC pipeline," which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
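As a concrete (and deliberately simplified) illustration of the fetch, decode, execute and writeback steps described above, the Python sketch below models a hypothetical accumulator machine. The instruction names, the single accumulator register and the zero flag are invented for illustration and do not correspond to any real ISA.

    # Toy stored-program machine: each instruction is (opcode, operand).
    # Invented instruction set for illustration; real ISAs encode this in binary.
    program = [
        ("LOAD", 5),    # acc <- 5
        ("ADD", 7),     # acc <- acc + 7
        ("STORE", 0),   # mem[0] <- acc   (writeback to memory)
        ("JUMPZ", 6),   # if zero flag set, jump to instruction 6 (not taken here)
        ("SUB", 12),    # acc <- acc - 12, which sets the zero flag
        ("JUMPZ", 6),   # taken this time
        ("HALT", 0),
    ]

    memory = [0] * 16
    acc, pc, zero_flag = 0, 0, False

    while True:
        opcode, operand = program[pc]   # fetch (decode is trivial: tuples are pre-split)
        pc += 1                         # point at the next-in-sequence instruction
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "SUB":
            acc -= operand
        elif opcode == "STORE":
            memory[operand] = acc       # writeback
        elif opcode == "JUMPZ":
            if zero_flag:
                pc = operand            # a jump manipulates the program counter
        elif opcode == "HALT":
            break
        zero_flag = (acc == 0)          # flags record the outcome of operations

    print(acc, memory[0], pc)           # 0 12 7

The flags-and-conditional-jump pattern shown at the end is what gives rise to loops and conditional execution in real programs.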

The way a CPU represents numbers is a design choice that affects the most basic ways in which the device functions. Some early digital computers used an electrical model of the common decimal (base ten) numeral system to represent numbers internally. A few other computers have used more exotic numeral systems like ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.

Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a bit refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "word size", "bit width", "data path width", or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits (each digit having two possible values), that is, 2⁸, or 256, discrete numbers. In effect, integer size sets a hardware limit on the range of integers the software run by the CPU can utilize.

Integer range can also affect the number of locations in memory the CPU can address (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2³² octets, or 4 GiB. This is a very simple view of CPU address space, and many designs use more complex addressing methods like paging in order to locate more memory than their integer range would allow with a flat address space.
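A quick back-of-the-envelope check of those figures, assuming a flat address space with one octet per address:

    # Word size determines how many distinct values (and flat addresses) fit in n bits.
    def distinct_values(bits):
        return 2 ** bits

    print(distinct_values(8))                    # 256 values representable by an 8-bit word
    print(distinct_values(32))                   # 4294967296 addressable octets with 32-bit addresses
    print(distinct_values(32) / 2 ** 30, "GiB")  # 4.0 GiB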

Higher levels of integer range require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and general expense. It is not at all uncommon, therefore, to see 4- or 8-bit microcontrollers used in modern applications, even though CPUs with much higher range (such as 16, 32, 64, even 128-bit) are available. The simpler microcontrollers are usually cheaper, use less power, and therefore dissipate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra range (most often the additional address space) are more significant and often affect design choices. To gain some of the advantages afforded by both lower and higher bit lengths, many CPUs are designed with different bit widths for different portions of the device. For example, the IBM System/370 used a CPU that was primarily 32 bit, but it used 128-bit precision inside its floating point units to facilitate greater accuracy and range in floating point numbers (Amdahl et al. 1964). Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating point capability is required.

Most CPUs, and indeed most sequential logic devices, are synchronous in nature. That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals can move in various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.

This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below).
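For example, with made-up figures, if the slowest signal path in the CPU needs 2 nanoseconds to settle, the clock period must be at least that long, and designers typically add margin on top:

    # Hypothetical numbers: the clock period must exceed the worst-case propagation delay.
    worst_case_delay_s = 2e-9          # 2 ns through the slowest branch of logic
    safety_margin = 1.25               # headroom above the worst case

    clock_period_s = worst_case_delay_s * safety_margin
    max_clock_hz = 1.0 / clock_period_s

    print(max_clock_hz / 1e6, "MHz")   # 400.0 MHz

Shortening the slowest path (or splitting it across pipeline stages) is therefore the main lever for raising the clock rate.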

However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided in order to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue as clock rates increase dramatically is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does heat dissipation, causing the CPU to require more effective cooling solutions.

One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM compliant AMULET and the MIPS R3000 compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers (Garside et al. 1999).

The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time.

This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock). However, the performance is nearly always subscalar (less than one instruction per cycle).

Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques. Instruction-level parallelism (ILP) seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the utilization of on-die execution resources), and thread-level parallelism (TLP) aims to increase the number of threads (effectively individual programs) that a CPU can execute simultaneously. Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application.

One of the simplest methods used to accomplish increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is the simplest form of a technique known as instruction pipelining, and is utilized in almost all modern general-purpose CPUs. Pipelining allows more than one instruction to be executed at any given time by breaking down the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired.

Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. To cope with this, additional care must be taken to check for these sorts of conditions and delay a portion of the instruction pipeline if this occurs. Naturally, accomplishing this requires additional circuitry, so pipelined processors are more complex than subscalar ones (though not very significantly so). A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
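The effect on throughput can be sketched with a simple cycle-count model; the four stage names and the one-cycle stall penalty per data dependency are illustrative assumptions, not a description of any particular processor:

    # Simple cycle-count model of a 4-stage pipeline (fetch, decode, execute, writeback).
    # Each instruction may depend on the one directly before it; such a dependency
    # is assumed to cost one stall cycle while the pipeline waits for the result.
    STAGES = 4

    def pipeline_cycles(depends_on_previous):
        stalls = sum(depends_on_previous)
        # The first instruction fills the pipeline (STAGES cycles); each following
        # instruction normally retires one cycle later, plus any stall bubbles.
        return STAGES + (len(depends_on_previous) - 1) + stalls

    independent = [False] * 8                 # 8 instructions, no dependencies
    chained     = [False] + [True] * 7        # each instruction needs the previous result

    print(pipeline_cycles(independent))       # 11 cycles (~0.73 instructions per cycle)
    print(pipeline_cycles(chained))           # 18 cycles (~0.44 instructions per cycle)
    print(STAGES * 8)                         # 32 cycles on an unpipelined machine

Even the fully dependent case beats the unpipelined machine, but only the independent stream approaches the scalar ideal of one instruction per cycle.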

Further improvement upon the idea of instruction pipelining led to the development of a method that decreases the idle time of CPU components even further. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units. In a superscalar pipeline, multiple instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so they are dispatched to available execution units, resulting in the ability for several instructions to be executed simultaneously. In general, the more instructions a superscalar CPU is able to dispatch simultaneously to waiting execution units, the more instructions will be completed in a given cycle.

Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly and correctly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and gives rise to the need in superscalar architectures for significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, and out-of-order execution crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may or may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies.
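A toy model of the dispatcher's job, assuming an invented two-wide machine and a register-tuple instruction format, might look like the following; real dispatchers work on decoded binary instructions and track many more hazards than this:

    # Sketch of a dual-issue dispatcher: pair up instructions unless a later one
    # reads or writes a register that an earlier one in the packet writes.
    # Instruction format (invented): (destination_register, source_registers).
    def dispatch(instructions, width=2):
        cycles = []
        i = 0
        while i < len(instructions):
            packet = [instructions[i]]
            j = i + 1
            while j < len(instructions) and len(packet) < width:
                written = {dest for dest, _ in packet}
                dest, sources = instructions[j]
                if written & set(sources) or dest in written:
                    break                      # dependency: stop filling this packet
                packet.append(instructions[j])
                j += 1
            cycles.append(packet)
            i = j
        return cycles

    program = [
        ("r1", ("r2", "r3")),   # r1 <- r2 + r3
        ("r4", ("r5", "r6")),   # independent: can issue alongside the first
        ("r7", ("r1", "r4")),   # needs r1 and r4: must wait for the next cycle
        ("r8", ("r7", "r2")),   # needs r7: issues alone as well
    ]

    for n, packet in enumerate(dispatch(program)):
        print("cycle", n, [dest for dest, _ in packet])
    # cycle 0 ['r1', 'r4']
    # cycle 1 ['r7']
    # cycle 2 ['r8']

Keeping both execution units busy clearly depends on the instruction stream containing enough independent work, which is why compilers and out-of-order hardware both try to expose it.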

In the case where a portion of the CPU is superscalar and part is not, the part which is not suffers a performance penalty due to scheduling stalls. The original Intel Pentium (P5) had two superscalar ALUs which could accept one instruction per clock each, but its FPU could not accept one instruction per clock. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the Pentium architecture, P6, added superscalar capabilities to its floating point features, and therefore afforded a significant increase in floating point instruction performance.

Both simple pipelining and superscalar design increase a CPU's ILP by allowing a single processor to complete execution of instructions at rates surpassing one instruction per cycle (IPC). Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or ISA. The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the amount of work the CPU must perform to boost ILP and thereby reducing the design's complexity.

Another strategy of achieving performance is to execute multiple programs or threads in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as Multiple Instructions-Multiple Data or MIMD.

One technology used for this purpose was multiprocessing (MP). The initial flavor of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single silicon chip, the technology is known as a multi-core microprocessor.

It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). This approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated in order to support MT, as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP, and thus supervisor software such as operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as block multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU quickly switches to another thread which is ready to run, with the switch often done in one CPU clock cycle. Another type of MT is known as simultaneous multithreading, where instructions of multiple threads are executed in parallel within one CPU clock cycle.
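Block multithreading can be sketched with a small scheduling model; the thread behaviour, the stall points and the requeue-on-stall policy are all invented for illustration (in real hardware a stalled thread only becomes ready again once its memory access completes):

    # Toy model of block multithreading: when the running thread stalls on a
    # (simulated) memory access, the core switches to another ready thread.
    import collections

    def thread(name, work_items):
        # Yield "compute" for ordinary cycles and "stall" when waiting on memory.
        for kind in work_items:
            yield name, kind

    ready = collections.deque([
        thread("A", ["compute", "compute", "stall", "compute"]),
        thread("B", ["compute", "stall", "compute", "compute"]),
    ])

    trace = []
    while ready:
        current = ready.popleft()
        for name, kind in current:
            trace.append(f"{name}:{kind}")
            if kind == "stall":
                ready.append(current)   # park this thread and run another meanwhile
                break

    print(trace)
    # ['A:compute', 'A:compute', 'A:stall', 'B:compute', 'B:stall',
    #  'A:compute', 'B:compute', 'B:compute']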

For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques.

CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or program.

This reversal of emphasis is evidenced by the proliferation of dual and multiple core CMP (chip-level multiprocessing) designs and notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PS3's 8-core Cell microprocessor.

A less common but increasingly important paradigm of CPUs (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as SISD (single instruction, single data) and SIMD (single instruction, multiple data), respectively. The great utility in creating CPUs that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks are multimedia applications (images, video, and sound), as well as many types of scientific and engineering tasks. Whereas a scalar CPU must complete the entire process of fetching, decoding, and executing each instruction and value in a set of data, a vector CPU can perform a single operation on a comparatively large set of data with one instruction. Of course, this is only possible when the application tends to require many steps which apply one operation to a large set of data.
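The difference in instruction count can be sketched as follows; the four-element "lane group" and the simd_add model are illustrative assumptions rather than any real vector instruction set:

    # Conceptual contrast between scalar (SISD) and vector (SIMD) execution.
    # simd_add is only a model: one operation is applied to a whole lane group
    # at once, the way a vector unit applies one instruction to many values.
    LANES = 4   # hypothetical vector width: 4 values per instruction

    def scalar_add(a, b):
        # One fetched/decoded/executed instruction per element.
        return [x + y for x, y in zip(a, b)], len(a)

    def simd_add(a, b):
        result, instructions = [], 0
        for i in range(0, len(a), LANES):
            # One vector instruction processes a whole lane group.
            result.extend(x + y for x, y in zip(a[i:i+LANES], b[i:i+LANES]))
            instructions += 1
        return result, instructions

    a = list(range(16))
    b = list(range(16, 32))

    print(scalar_add(a, b)[1])   # 16 "instructions" for 16 additions
    print(simd_add(a, b)[1])     # 4 vector "instructions" for the same work

The saving in fetch and decode work is exactly why multimedia and scientific workloads, which repeat one operation over large arrays, benefit so much from vector units.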

Most early vector CPUs, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose CPUs has become significant. Shortly after floating point execution units started to become commonplace to include in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose CPUs. Some of these early SIMD specifications like Intel's MMX were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating point numbers. Progressively, these early designs were refined and remade into some of the common, modern SIMD specifications, which are usually associated with one ISA. Some notable modern examples are Intel's SSE and the PowerPC-related AltiVec (also known as VMX).




Video game


A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a video device. The word video in video game traditionally referred to a raster display device; however, with the popular use of the term "video game", it now implies any type of display device. The electronic systems used to play video games are known as platforms; examples of these are personal computers and video game consoles. These platforms range from large computers to small handheld devices. Specialized platforms such as arcade machines, while previously common, have gradually declined in use.

The input device used to manipulate video games is called a game controller, and varies across platforms. For example, a dedicated console controller might consist of only a button and a joystick. Another may feature a dozen buttons and one or more joysticks. Early personal computer games often needed a keyboard for gameplay, or more commonly, required the user to buy a separate joystick with at least one button. Many modern computer games allow, or even require, the player to use a keyboard and mouse simultaneously.

Video games typically also use other ways of providing interaction and information to the player. Audio is almost universal, using sound reproduction devices, such as speakers and headphones. But other feedback may come via haptic peripherals, such as vibration force feedback.

Early games used interactive electronic devices with various display formats. The earliest example is from 1947—a "Cathode Ray Tube Amusement Device" was filed for a patent on January 25, 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on December 14, 1948 as U.S. Patent 2455992.

Inspired by radar displays, it consisted of an analog device that allowed a user to control a vector-drawn dot on the screen to simulate a missile being fired at targets, which were drawings fixed to the screen.

Several other early games followed on research computers during the 1950s and early 1960s, each using a different means of display: NIMROD used a panel of lights to play the game of Nim, OXO used a graphical display to play tic-tac-toe, Tennis for Two used an oscilloscope to display a side view of a tennis court, and Spacewar! used the DEC PDP-1's vector display to pit two spaceships against each other.

In 1971, Computer Space, created by Nolan Bushnell and Ted Dabney, was the first commercially sold, coin-operated video game. It used a black-and-white television for its display, and the computer system was made of 74-series TTL chips. The game was featured in the 1973 science fiction film Soylent Green. Computer Space was followed in 1972 by the Magnavox Odyssey, the first home console. Modeled after a late-1960s prototype console developed by Ralph H. Baer called the "Brown Box", it also used a standard television. These were followed by two versions of Atari's Pong: an arcade version in 1972 and a home version in 1975. The commercial success of Pong led numerous other companies to develop Pong clones and their own systems, spawning the video game industry.

The term "platform" refers to the specific combination of electronic or computer hardware which, in conjunction with low-level software, allows a video game to operate. The term "system" is also commonly used.

In common use, a "PC game" refers to a game in which the player interacts with a personal computer connected to a high-resolution video monitor. A "console game" is played on a specialized electronic device that connects to a standard television set or composite video monitor. A "handheld" gaming device is a self-contained, portable electronic device that can be held in the user's hands. "Arcade game" generally refers to a game played on an even more specialized device, typically designed to play only one game and encased in a special cabinet. These distinctions are not always clear, and some games span more than one platform. Beyond this, some platforms have non-video-game variations, as in the case of electro-mechanical arcade machines. There are also devices with screens that can play games but are not dedicated video game machines, such as mobile phones, PDAs and graphing calculators.

A video game, like most other forms of media, may be categorized into genres based on factors such as method of gameplay, types of goals, and more. Because genres depend on content for their definition, they have changed and evolved as newer styles of video games have been created. As the production values of video games have increased over the years, both in visual appearance and in depth of storytelling, the industry has produced more lifelike and complex games that push the boundaries of the traditional genres. Some genres represent combinations of others, such as massively multiplayer online role-playing games. It is also common to see higher-level genre terms that cut across other genres, such as action or horror-themed video games.

Video games are primarily meant for entertainment. However, some video games are made (at least in part) for other reasons. These include advergames, educational games, propaganda games (e.g. militainment), and others. Many of these fall under the category of serious games.

Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly called, primarily include programmers and graphic designers. Over the years, however, the field has expanded to include almost every type of skill found in film or television production, including sound designers, musicians, and other technicians, all of whom are managed by producers.

In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone; it still occurs in the casual gaming and handheld markets, where single-screen games are more prevalent due to the technical limitations of the target platform (such as cellphones and PDAs).

With the growth in the size of development teams, the problem of cost has become more critical than ever. Development studios need to pay their staff a competitive wage in order to attract and retain the best talent, while publishers constantly look for ways to keep costs down in order to maintain profitability on their investment. Typically, a video game console development team ranges in size from 5 to 50 people, with some teams exceeding 100. The growth in team size, combined with greater pressure to get completed projects to market to begin recouping production costs, has led to a greater occurrence of missed deadlines and unfinished products; Duke Nukem Forever is the quintessential example of these problems.

Games running on a PC are often designed with end-user modifications in mind, and this consequently allows modern computer games to be modified by gamers without much difficulty. These mods can add an extra dimension of replayability and interest. The Internet provides an inexpensive medium to promote and distribute mods, and they have become an increasingly important factor in the commercial success of some games. Developers such as id Software, Valve Software, Crytek, Epic Games and Blizzard Entertainment ship their games with the very development tools used to make the game in the first place, along with documentation to assist mod developers, which allows for the kind of success seen by popular mods such as Counter-Strike, which began as a Half-Life mod.

Cheating in computer games may involve cheat codes implemented by the game developers, modification of game code by third parties, or players exploiting a software glitch. Modifications are facilitated by either cheat cartridge hardware or a software trainer. Cheats usually make the game easier by providing an unlimited amount of some resource, for example lives, weapons, health, or ammunition. Other cheats might provide an unusual or amusing feature, like altered game colors or graphical appearances.
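A hypothetical C sketch of a developer-implemented cheat code (the code string "IDAMMO" and the structure below are invented for illustration): once the code has been entered, firing no longer decrements the ammunition counter.

/* Invented cheat-code example: unlimited ammunition. */
#include <stdio.h>
#include <string.h>

struct player { int ammo; int unlimited_ammo; };

static void enter_code(struct player *p, const char *code)
{
    if (strcmp(code, "IDAMMO") == 0)   /* invented cheat code */
        p->unlimited_ammo = 1;
}

static void fire(struct player *p)
{
    if (p->ammo > 0 || p->unlimited_ammo) {
        if (!p->unlimited_ammo)
            p->ammo--;
        printf("bang (ammo left: %d)\n", p->ammo);
    }
}

int main(void)
{
    struct player p = { 2, 0 };
    fire(&p);                    /* normal shot: ammo decreases */
    enter_code(&p, "IDAMMO");
    fire(&p);
    fire(&p);                    /* still fires: cheat bypasses the ammo check */
    return 0;
}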

Software errors not detected by software testers during development can find their way into released versions of computer and video games. This may happen because the glitch only occurs under unusual circumstances in the game, was deemed too minor to correct, or because the game development was hurried to meet a publication deadline. Glitches can range from minor graphical errors to serious bugs that can delete saved data or cause the game to malfunction. In some cases publishers will release updates (referred to as patches) to repair glitches.

Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which we get to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ, and The Last Starfighter.

Ludologists break sharply and radically from this. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the curvaceous heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.

While many games rely on emergent principles, video games commonly present simulated story worlds in which emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, a storyline can be created simply by "what happens to the player." Emergent behavior is not limited to sophisticated games, however; generally, wherever event-driven AI instructions occur in a game, emergent behavior can arise. For instance, take a racing game in which cars are programmed to avoid crashing and they encounter an obstacle in the track: the cars might maneuver to avoid the obstacle, causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.
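The traffic-jam example can be sketched in a few lines of C (an illustrative toy, not any real racing game's code): each car follows a single local rule, "do not move into an occupied position", yet a queue forms behind the obstacle even though no code creates jams.

/* Emergent traffic jam on a one-dimensional track. */
#include <stdio.h>

#define TRACK_LEN 20
#define NUM_CARS  5

int main(void)
{
    int car_pos[NUM_CARS] = { 4, 3, 2, 1, 0 };  /* leading car listed first */
    int obstacle = 10;

    for (int step = 0; step < 12; step++) {
        for (int c = 0; c < NUM_CARS; c++) {
            int next = car_pos[c] + 1;
            int blocked = (next == obstacle);            /* rule: avoid the obstacle */
            for (int other = 0; other < NUM_CARS; other++)
                if (other != c && car_pos[other] == next)
                    blocked = 1;                         /* rule: avoid other cars */
            if (!blocked && next < TRACK_LEN)
                car_pos[c] = next;
        }
    }

    /* After a few steps the cars queue up behind position 10:
     * a traffic jam that was never explicitly programmed. */
    for (int c = 0; c < NUM_CARS; c++)
        printf("car %d at position %d\n", c, car_pos[c]);
    return 0;
}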

The November 2005 Nielsen Active Gamer Study, which surveyed 2,000 regular gamers, found that the U.S. games market is diversifying. The male player base has expanded significantly into the 25-40 age group. For casual online puzzle-style and simple mobile phone games, the gender divide is more or less equal between males and females. Females have been shown to be significantly attracted to certain online multi-user video games that offer a more communal experience, and a small number of young females have been shown to play aggressive games that are sometimes thought of as "traditionally male" games. According to the ESRB, almost 41% of PC gamers are women. With video game social networks such as Miss Video Game and Guild Cafe having a large percentage of female gamers, the "traditionally male" games are now considered cross-gendered.

When comparing today's industry climate with that of 20 years ago, women and many adults are more inclined to use the industry's products. While teen and young adult men remain a strong market, it is the other demographics that are posting significant growth. In 2008, the average American gamer had been playing for 12 years and was, on average, 35 years of age.

Video gaming has traditionally been a social experience. From its early beginnings, video games have commonly been playable by more than a single player. Multiplayer video games are those that can be played either competitively or cooperatively by using either multiple input devices, or by hotseating. Tennis for Two, arguably the first video game, was a two-player game, as was its successor Pong. The first commercially available game console, the Magnavox Odyssey, had two controller inputs.

Since then, most consoles have been shipped with two or four controller inputs. Some have had the ability to expand to four, eight or as many as twelve inputs with additional adapters, such as the Multitap. Multiplayer arcade games typically feature play for two to four players, sometimes tilting the monitor on its back for a top-down viewing experience allowing players to sit opposite one another.

Many early computer games for platforms that did not descend from the PC featured multiplayer support. Personal computer systems from Atari and Commodore regularly featured at least two game ports. PC-based computer games started with fewer multiplayer options because of technical limitations; PCs typically had one game port or none at all. Network games for these early personal computers were generally limited to text-based adventures or MUDs played remotely on a dedicated server. This was due both to the slow speed of modems (300-1200 bit/s) and to the prohibitive cost of putting a computer online in such a way that multiple visitors could make use of it. However, with the advent of widespread local area networking technologies and Internet-based online capabilities, the number of players in modern games can be 32 or higher, sometimes with integrated text and/or voice chat. MMOs can support extremely high numbers of simultaneous players; Eve Online set a record with just under 36,000 players on a single server in 2006.

Action video game players have been shown to have better visuomotor skills than nonplayers, such as greater resistance to distraction, greater sensitivity to information in peripheral vision, and a better ability to count briefly presented objects. Researchers found that such enhanced abilities could be acquired by training with an action game, which involves challenges to switch attention between different locations, but not with a game requiring concentration on single objects. A few studies have suggested that online and offline video gaming can be used as a therapeutic tool in the treatment of different mental health concerns.

In his book Everything Bad Is Good for You, Steven Johnson argues that video games in fact demand far more from a player than traditional games like Monopoly. To experience the game, the player must first determine the objectives, as well as how to complete them. They must then learn the game controls and how the human-machine interface works, including menus and HUDs. Beyond such skills, which after some time become quite fundamental and are taken for granted by many gamers, video games are based upon the player navigating (and eventually mastering) a highly complex system with many variables. This requires strong analytical ability, as well as flexibility and adaptability. Johnson argues that the process of learning the boundaries, goals, and controls of a given game is often highly demanding and calls on many different areas of cognitive function. Indeed, most games require a great deal of patience and focus from the player, and, contrary to the popular perception that games provide instant gratification, games actually delay gratification far longer than other forms of entertainment such as film or even many books. Some research suggests video games may even increase players' attention capacities.

Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been observed that gamers adopt an attitude of such high concentration while playing that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking.

The U.S. Army has deployed machines such as the PackBot which make use of a game-style hand controller to make it more familiar for young people.

According to research discussed at the 2008 Convention of the American Psychological Association, certain types of video games can improve the gamers’ dexterity as well as their ability to problem-solve. A study of 33 laparoscopic surgeons found that those who played video games were 27 percent faster at advanced surgical procedures and made 37 percent fewer errors compared to those who did not play video games. A second study of 303 laparoscopic surgeons (82 percent men; 18 percent women) also showed that surgeons who played video games requiring spatial skills and hand dexterity and then performed a drill testing these skills were significantly faster at their first attempt and across all 10 trials than the surgeons who did not play the video games first.

Whilst many studies have detected superior mental aptitudes amongst habitual gamers, research by Walter Boot at the University of Illinois found that non-gamers showed no improvement in memory or multitasking abilities after 20 hours of playing three different games. The researchers suggested that "individuals with superior abilities are more likely to choose video gaming as an activity in the first place".

Like related forms of media, computer and video games have been the subject of frequent controversy and censorship, due to the depiction of graphic violence, sexual themes, advergaming (a form of advertising in games), consumption of drugs, alcohol or tobacco, propaganda, or profanity in some games. Critics of video games include parents' groups, politicians, organized religious groups, and other special interest groups, even though such content can be found in all forms of entertainment and media. Various games have been accused of causing addiction and even violent behavior. "Video game censorship" is defined as the use of state or group power to control the playing, distribution, purchase, or sale of video or computer games. Video game controversy comes in many forms, and censorship itself is a controversial subject; proponents and opponents are often very passionate about their views.

Various national content rating organizations, such as the Entertainment Software Rating Board (ESRB) in North America, rate software for certain age groups and with certain content warnings. Some of these organizations are voluntary industry self-regulation bodies (such as the ESRB), while others are part of national government censorship regimes. Parents, moreover, are not always aware that these ratings exist.

The three largest producers of and markets for computer and video games (in order) are North America (US and Canada), Japan and the United Kingdom. Other significant markets include Australia, Spain, Germany, South Korea, Mexico, France and Italy. Both India and China are considered emerging markets in the video game industry and sales are expected to rise significantly in the coming years.

Sales of different types of games vary widely between these markets due to local preferences. Japanese consumers tend to purchase console games rather than computer games, with a strong preference for games catering to local tastes. In South Korea, computer games are preferred, especially MMORPGs and real-time strategy games. There are over 20,000 Internet cafés in South Korea where computer games can be played for an hourly charge.

PC games that are digitally distributed, either directly or through networks such as Steam, are not tracked by the NPD Group, and Steam does not publish sales numbers for games downloaded through its service. Unauthorized distribution is also rampant on the PC.

These figures are sales in dollars, not units; unit shipments for each category were higher than the dollar figures indicate because more software and hardware was discounted than in 2003. With the release of the next-generation consoles in 2006, however, these numbers increased dramatically. The game and film industries are also becoming increasingly intertwined, with companies like Sony having significant stakes in both. Many summer blockbuster films spawn a companion game, often launched at the same time to share marketing costs.

In Australia, the United Kingdom, and other PAL regions, gamers generally pay 40% to 50% more for the same product than gamers in the US.

As English is the main language in Australia and the UK, there is little impetus for translation, although regional differences naturally exist. The differences between PAL and NTSC are largely irrelevant these days, as most video displays run at 60 Hz or higher. However, regional lockout remains a legal issue in Australia, where most DVD players are now sold region-free to comply with local law.

Video game consoles, however, are still sold fully region-locked in Australia. One effort to raise awareness of the issue, directed specifically at Nintendo of Australia, took the form of a formal report outlining the problems, published by Aaron Rex Davies; the report has gone on to gain considerable attention in the public media.




Source: Wikipedia