Internet

Posted by r2d2 03/02/2009 @ 05:02

Tags : internet, technology

News headlines
Experts Say Chinese Filter Would Make PCs Vulnerable - New York Times
By ANDREW JACOBS BEIJING — Filtering software that the government has mandated for all new computers in China is so technically flawed that outsiders can easily infiltrate a user's machine to monitor Internet activity, steal personal data or plant...
Week in Microsoft: No bundled IE8 for Europe, Apple's Win 7 ... - Ars Technica
By Emil Protalinski | Last updated June 13, 2009 3:30 PM CT Windows 7 to be shipped in Europe without Internet Explorer. Microsoft has responded to the EU's antitrust investigation into its bundling of its browser with Windows by deciding to ship...
Televisa, Univision rest in Internet rights trial - Reuters
By Gina Keating LOS ANGELES (Reuters) - Top Mexican broadcaster Televisa and its US licensee Univision on Friday rested their cases in a lawsuit to determine if Televisa can transmit its TV shows to US markets on the Internet. US District Judge Philip...
CAREER COACH: Internet an entryway to finding a job - The Star-Ledger - NJ.com
If, on the other hand, you don't have an online presence that makes you "findable" on the internet, you will be at a disadvantage in finding employment in today's highly connected business world. The internet has not only changed the way we work it has...
Minn woman who lost music-share suit gets replay - The Associated Press
The Recording Industry Association of America said in December it had stopped filing lawsuits like these and would work instead with Internet service providers to cut access to those it deems illegal file-sharers. But the recording industry plans to...
'Lone wolf' terrorists harder to stop - The Associated Press
It could be the guy next door, living in the basement of his mother's place, on the Internet just building himself up with hate, building himself up to a boiling point and finally using what he's learned," said John Perren, head of the counterterrorism...
Safari 4 Looks Like It Might Have Legs - eWeek
Apple claims that its new Safari 4 Web browser was downloaded 11 million times in its first three days of release, a sign that it could claim more market share against Internet Explorer, Chrome and others involved in the browser wars....
Goodbye, Comcast (or 'How I Learned to Love the Internet') - PC World
I'm talking everything from simply streaming TV to my Xbox and Internet TV to the recent addition of Netflix to the app. I took it for granted, too; my Costco-flavor HP desktop handled the job just fine. As the years went by, I added more digital...
Internet Traffic Growth Exploding, Study Reveals - Switched
by Leila Brillson — Jun 13th 2009 at 3:47PM The Internet is a seemingly endless resource for our watching, listening, and chatting needs. Bandwidth, however, is not. Cisco Systems, the mobile networking company, released a report earlier this week...
Nonprofit Journalism Gets Boost from AP - New York Times
The AP called the arrangement a six-month experiment that could later be broadened to include other investigative nonprofits, and to serve its nonmember clients, which include broadcast and Internet outlets. “It's something we've talked about for a...

History of the Internet

[Figure: Number of Internet hosts]

Prior to the widespread internetworking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe computer model. Several research programs began to explore and articulate principles of networking between separate physical networks. This led to the development of the packet switching model of digital networking. These research efforts included those of the laboratories of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and UCLA.

The research led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and the X.25 protocols. Additionally, public access and hobbyist networking systems grew in popularity, including unix-to-unix copy (UUCP) and FidoNet. They were, however, still disjointed separate networks, served only by limited gateways between them. This led to the application of packet switching to develop a protocol for inter-networking, in which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. This spread of inter-networking began to form into the idea of a global inter-network that would be called 'The Internet', which grew quickly as existing networks were converted for compatibility. It spread rapidly across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.

Following commercialisation and the introduction of privately run Internet Service Providers in the 1980s, and the Internet's expansion into popular use in the 1990s, it has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by e-mail, text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations also led to the inflation and collapse of the Dot-com bubble, a major market collapse. Despite this, the Internet continues to grow.

In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited in that they only allowed communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Santa Monica, California, on automated theorem proving and artificial intelligence.

A fundamental pioneer in the call for a global network, J.C.R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.

In October 1962, Licklider was appointed head of the information processing office at the United States Department of Defense's Advanced Research Projects Agency, now known as DARPA. There he formed an informal group within DARPA to further computer research. As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). The apparent waste of resources this caused made obvious the need for the inter-networking that Licklider had identified.

At the heart of the inter-networking problem lay the issue of connecting separate physical networks to form one logical network, with much wasted capacity inside the assorted separate networks. During the 1960s, Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock (MIT) developed and implemented packet switching. Early networks used for the command and control of nuclear forces were message switched, not packet-switched, although current strategic military networks are, indeed, packet-switched and connectionless. Baran's research had approached packet switching from studies of decentralisation to avoid combat damage compromising the entire network.

Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969. By December 5, 1969, a 4-node network was formed by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.

ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and University College London.

Following on from ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976. This standard was based on the concept of virtual circuits.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.

Unlike ARPANET, X.25 was also commonly available for business use. Telenet offered its Telemail electronic mail service, which was oriented to enterprise use rather than the general e-mail of the ARPANET.

The first dial-in public networks used asynchronous TTY terminal protocols to reach a concentrator operated by the public network. Some public networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. There were also the America Online (AOL) and Prodigy dial-in networks and many bulletin board system (BBS) networks such as FidoNet. FidoNet in particular was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages on a serial line with the nearby University of North Carolina at Chapel Hill. Following public release of the software, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.
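
This reformulation, moving reliability out of the network and into the hosts, is the idea TCP still embodies. As a rough, non-historical sketch of the principle only (the loss rate, retry limit and message below are invented for illustration), the following Python fragment simulates a host retransmitting over a lossy channel until it sees an acknowledgement:

```python
import random

LOSS_RATE = 0.4  # invented probability that the simulated channel drops a packet

def lossy_channel(packet):
    """Simulate an unreliable network: deliver the packet or silently drop it."""
    return packet if random.random() > LOSS_RATE else None

def send_reliably(packet, max_tries=20):
    """Stop-and-wait: the *host* keeps retransmitting until an ACK arrives."""
    for attempt in range(1, max_tries + 1):
        if lossy_channel(packet) is not None:       # the data may be lost...
            if lossy_channel(b"ACK") is not None:   # ...and so may the ACK
                return attempt
        # no ACK before the (implicit) timeout: fall through and retransmit
    raise RuntimeError("gave up: simulated network too lossy")

print("delivered after", send_reliably(b"hello"), "attempt(s)")
```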

The specification of the resulting protocol, RFC 675 - Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974, contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977, a three-network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network, all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-to-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based around the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

More explicitly, NASA developed a TCP/IP based Wide Area Network, NASA Science Network (NSN), in the mid 1980s connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a total integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1984 NSF developed CSNET exclusively based on TCP/IP. CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. This grew into the NSFNet backbone, established in 1986, and intended to connect and provide access to a number of supercomputing centers established by the NSF.

The term "Internet" was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974). It was around the time when ARPANET was interlinked with NSFNet, that the term Internet came into more general use, with "an internet" meaning any network using TCP/IP. "The Internet" came to mean a global and large network using TCP/IP. Previously "internet" and "internetwork" had been used interchangeably, and "internet protocol" had been used to refer to other networking systems such as Xerox Network Services.

As interest in widespread networking grew and new applications for it arrived, the Internet's technologies spread throughout the rest of the world. TCP/IP's network-agnostic approach meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.

Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple e-mail peering, such as allowing access to FTP sites via UUCP or e-mail.

Finally, the Internet was decentralized. BGP was created to replace the EGP routing protocol, enabling fully decentralized routing and allowing the removal of the NSFNet Internet backbone network; the Internet thereby became a truly decentralized system. Since 1994, version four of the protocol has been in use on the Internet. All previous versions are now obsolete. The major enhancement in version 4 was support for Classless Inter-Domain Routing (CIDR) and the use of route aggregation to decrease the size of routing tables. Since January 2006, version 4 has been codified in RFC 4271, which went through well over 20 drafts based on the earlier RFC 1771 specification of version 4. RFC 4271 corrected a number of errors, clarified ambiguities, and brought the document much closer to industry practice.
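
Route aggregation is easy to demonstrate with Python's standard ipaddress module; the prefixes below are documentation addresses chosen for illustration, not real routes. Two adjacent /25 announcements collapse into a single /24 table entry, which is exactly how CIDR shrinks routing tables:

```python
import ipaddress

# Two adjacent /25 prefixes a router might have learned as separate routes.
routes = [
    ipaddress.ip_network("192.0.2.0/25"),
    ipaddress.ip_network("192.0.2.128/25"),
]

# CIDR-style aggregation: adjacent prefixes merge into one shorter prefix.
print(list(ipaddress.collapse_addresses(routes)))
# -> [IPv4Network('192.0.2.0/24')]
```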

The first ARPANET connection outside the US was established to NORSAR in Norway in 1973, just ahead of the connection to Great Britain. These links were all converted to TCP/IP in 1982, at the same time as the rest of the ARPANET.

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989.

In 1988 Daniel Karrenberg, from CWI in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networks connecting to ARPA and linking Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.

The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNet in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

The first mobile phone to have Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The concept of a mobile phone based Internet did not take off until prices came down from that model and network providers started to develop systems and services to enable the Internet on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-Mode, in 1999, and this is considered the birth of the mobile phone based Internet. In 2001 the mobile phone based e-mail system by BlackBerry and its iconic phones were launched in America.

To make better use of the small screen, tiny keypad and one-handed operation typical of mobile phones, a simpler programming environment was created for the mobile phone Internet, called WAP, for Wireless Application Protocol. Most mobile phone Internet services operate on WAP.

The growth of the mobile phone based Internet was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines and Pakistan all reporting that the majority of their domestic Internet users accessed on a mobile phone rather than on a PC.

European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet use was more gradual, though it had reached national penetration levels of 20%-30% in most Western countries. The crossover happened in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user on the Internet.

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications. In 1996 a USAID funded project, the Leland initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.

A wide range of programs exist to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1995, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

Interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.

During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first dial-up ISP on the West Coast, Best Internet, now Verio, opened in 1986. The first dial-up ISP in the East was world.std.com, opened in 1989.

This caused controversy amongst university users, who were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.

By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. In 1994, the NSFNet, by then renamed ANSNET (Advanced Networks and Services) and allowing non-profit corporations access, lost its standing as the backbone of the Internet. Both government institutions and competing commercial providers created their own backbones and interconnections. Regional network access points (NAPs) became the primary interconnections between the many networks, and the final commercial restrictions ended.

The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for the system to function.

The liberal Request for Comments (RFC) publication procedure engendered confusion about the Internet standardization process, and led to more formalization of officially accepted standards. The IETF started in January 1985 as a quarterly meeting of U.S. government funded researchers. Representatives from non-government vendors were invited starting with the fourth IETF meeting in October of that year.

Acceptance of an RFC by the RFC Editor for publication does not automatically make the RFC into a standard. It may be recognized as such by the IETF only after experimentation, use, and acceptance have proved it to be worthy of that designation. Official standards are numbered with a prefix "STD" and a number, similar to the RFC naming style. However, even after becoming a standard, most are still commonly referred to by their RFC number.

In 1992, the Internet Society, a professional membership society, was formed and the IETF was transferred to operation under it as an independent international standards body.

The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
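
The contrast between the two schemes can be sketched in a few lines of Python: a HOSTS.TXT-style lookup consults a static, locally held table (the entry below is a made-up documentation address), while a DNS lookup delegates to the distributed name system through the operating system's resolver:

```python
import socket

# HOSTS.TXT model: every host carries a full, centrally distributed table.
hosts_txt = {"example-host": "192.0.2.10"}  # illustrative entry only

def lookup_hosts_txt(name):
    return hosts_txt[name]  # stale as soon as the distributed file is outdated

def lookup_dns(name):
    return socket.gethostbyname(name)  # asks the live, distributed DNS

print(lookup_hosts_txt("example-host"))
print(lookup_dns("example.com"))  # needs network access
```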

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.

In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the US Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.

E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.

The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.
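
Mail software still splits addresses on Tomlinson's separator when routing a message. A minimal Python sketch (the address itself is made up):

```python
def split_address(address):
    """Split user@host at the last '@', as mail-routing software does."""
    user, sep, host = address.rpartition("@")
    if not sep or not user or not host:
        raise ValueError(f"not a valid address: {address!r}")
    return user, host

print(split_address("tomlinson@example.org"))  # ('tomlinson', 'example.org')
```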

A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNet, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNet similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).

As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.

One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard. Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

In 1989, whilst working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread. For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW.

A potential turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991, also known as the Gore Bill. Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign.)

24 Hours in Cyberspace, the "largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project.

Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.

As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular: Yahoo! (founded 1995) and AltaVista (founded 1995) were the respective industry leaders.
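
The step from title-only search to full-text search is essentially the step to an inverted index over every word of every page. A toy Python version, over a made-up two-page corpus, looks like this:

```python
from collections import defaultdict

# Made-up corpus standing in for crawled web pages.
pages = {
    "page1": "history of the internet and the web",
    "page2": "web search engines index the full text",
}

# Inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(sorted(index["web"]))   # ['page1', 'page2']
print(sorted(index["text"]))  # ['page2'] -- invisible to title-only search
```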

By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.

Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.
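
The core of PageRank, as published by Page and Brin, is a power iteration over the link graph; a toy version over an invented three-page graph fits in a few lines (the damping factor 0.85 is the value suggested in the original paper):

```python
# Invented toy link graph: page -> pages it links to.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration until the ranks settle
    new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```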

Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models, and ran to their nearest venture capitalist. Of course some of the new entrepreneurs were truly talented at business administration, sales, and growth; but the majority were just people with ideas, and didn't manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.

The dot-com bubble burst on March 10, 2000, when the technology heavy NASDAQ Composite index peaked at 5048.62 (intra-day peak 5132.52), more than double its value just a year before. By 2001, the bubble's deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit.

In its "Worldwide Online Population Forecast, 2006 to 2011," JupiterResearch anticipates that a 38 percent increase in the number of people with online access will mean that, by 2011, 22 percent of the Earth's population will surf the Internet regularly.

JupiterResearch says the worldwide online population will increase at a compound annual growth rate of 6.6 percent during the next five years, far outpacing the 1.1 percent compound annual growth rate for the planet's population as a whole. The report says 1.1 billion people currently enjoy regular access to the Web.

North America will remain on top in terms of the number of people with online access. According to JupiterResearch, online penetration rates on the continent will increase from the current 70 percent of the overall North American population to 76 percent by 2011. However, Internet adoption has "matured," and its adoption pace has slowed, in more developed countries including the United States, Canada, Japan and much of Western Europe, notes the report.

As the online population of the United States and Canada grows by only about 3 percent, explosive adoption rates in China and India will take place, says JupiterResearch. The report says China should reach an online penetration rate of 17 percent by 2011 and India should hit 7 percent during the same time frame. This growth is directly related to infrastructure development and increased consumer purchasing power, notes JupiterResearch.

By 2011, Asians will make up about 42 percent of the world's population with regular Internet access, 5 percent more than today, says the study.

Brazil "with its soaring economy," is predicted by JupiterResearch to experience a 9 percent compound annual growth rate, the fastest in Latin America, but China and India are likely to do the most to boost the world's online penetration in the near future.

For the study, JupiterResearch defined "online users" as people who regularly access the Internet by "dedicated Internet access" devices. Those devices do not include cell phones.

Some concerns have been raised over the historiography of the Internet's development. Specifically, it is hard to find documentation of much of the Internet's development, for several reasons, including the lack of centralized records for much of the early work that led to the Internet.




Internet television

Internet television (Internet TV or iTV) is television service distributed via the Internet.

Internet television allows viewers to choose the show they want to watch from a library of shows. The primary models for Internet television are streaming Internet TV or selectable video on an Internet location, typically a website. The video can also be broadcast with a peer-to-peer network (P2PTV), which doesn't rely on a single website's streaming.

It differs from IPTV in that IPTV offerings, while also based on the IP protocol stack, are typically offered on discrete service provider networks, highly managed to provide guaranteed quality of service and good bandwidth, and usually requiring a special IPTV set-top box. However, some definitions of IPTV, such as those of the ITU and the DVB, use the term IPTV as a superset of both 'managed' IPTV and Internet TV.

Note: In some countries, the term Internet TV should only be applied to a TV-like experience, that is, a big screen with a 'ten foot interface', rather than an Internet video service such as YouTube. This is because the term "television" has a regulatory significance in some territories, and claiming a service is "television" may bring regulatory obligations to deliver emergency alerts, closed captioning, subtitles, must-carry, and/or other locally required elements of traditional broadcast television services.

Internet TV can be a quick-to-market and relatively low-investment service. Internet TV rides on existing infrastructure, including broadband, ADSL, Wi-Fi, cable and satellite, which makes it a valuable tool for a wide variety of service providers and content owners looking for new revenue streams.

Many programmers are streaming their content live on the Internet today to increase viewership (which in turn increases ad revenue) and protect market share. This model is efficient due to the relatively inexpensive multicasting protocol. Viewers may simply request access to the live feed and join the live stream. This free model has been used in over-the-air broadcasting for years and still works because of the low cost of reaching viewers via multicast. Any viewer with a broadband connection and the correct free media player can watch live television from around the world.
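
At the protocol level, "joining the live stream" means joining a multicast group. A bare-bones receiver in Python might look like the sketch below; the group address and port are placeholders, not a real broadcast:

```python
import socket
import struct

GROUP = "239.255.0.1"  # placeholder administratively scoped multicast group
PORT = 5004            # placeholder port (commonly used for RTP)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the multicast group on the default interface.
membership = struct.pack("4s4s", socket.inet_aton(GROUP),
                         socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(2048)  # blocks until a stream packet arrives
print(f"received {len(data)} bytes from {sender}")
```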

Many Internet television "portals" are available which include links to live feeds as well as built-in viewers. Although the live television streams are free, most portals are supported by advertising revenue as well.

Those who create valued and interesting video products now have the opportunity to distribute them directly to a large audience, something impossible with the previous television distribution models (closed software, closed hardware, closed network). The free model has been used around the globe by local and independent television channels aiming for niche target audiences, or to build a collaborative environment for media production, a platform for citizens' media. Nor is it strictly a citizens' format: the broadcast model used in television for decades is beginning to face competition from advertising-supported Internet television.

The recent rapid growth of fast broadband access, accelerated computer power and larger storage capacity has turned Internet TV into a real opportunity for service providers who want to open new revenue streams and increase average revenue per user.

A major advantage of Internet TV is that it allows content delivery to a huge population with virtually no geographical limitations. But while Internet TV is a much easier and cheaper way of publishing content, operators who are pondering whether to launch an Internet TV service nevertheless have to carefully assess the factors affecting their business cases.

High-quality Internet TV services require subscribers to have continuous access to high bandwidth, so pricing, bandwidth, and network neutrality (at least in the US) are all interdependent factors affecting the business case for Internet TV. For example, while subscribers are generally required to pay more for higher Internet bandwidth, this doesn't automatically guarantee bandwidth quality good enough for receiving Internet TV services. So to receive Internet TV, a subscriber may be required to subscribe to an even higher premium service, which may present a barrier to scaling up subscribers quickly.
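
Whether a given line is "good enough" comes down to arithmetic on bit rates. The figures below are rough assumptions (approximating SD and HD streams), not measurements:

```python
# Assumed stream bit rates in Mbit/s (illustrative only).
STREAM_RATES = {"SD": 2.0, "HD": 8.0}
HEADROOM = 1.25  # assume 25% spare capacity for other traffic and jitter

def line_supports(line_mbps, quality):
    """Can a subscriber line sustain the stream with some headroom?"""
    return line_mbps >= STREAM_RATES[quality] * HEADROOM

for line in (3.0, 6.0, 12.0):
    print(f"{line:5.1f} Mbit/s:",
          {q: line_supports(line, q) for q in STREAM_RATES})
```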

There are many ways to deliver video over an IP network, and many buzzwords have been applied to these approaches, sometimes interchangeably.

IPTV commonly refers to those services operated and controlled by the same company that operates and controls the "last mile" to the consumers' premises. An IPTV service is usually delivered over a complex and investment-heavy walled-garden network, which is carefully engineered to ensure bandwidth-efficient delivery of vast amounts of multicast video traffic. The higher network quality also enables easy delivery of high-quality SD or HD TV content to subscribers' homes.

Internet TV, by definition, is created, managed and distributed via the open Internet. It rides on existing infrastructure and normally refers to those services sourced over the Internet by service providers that cannot control the final delivery. Again, transport streams in IP packets are used with one or more services per transport stream.

Other TV-like services are available on the Internet but these send the video and the audio in separate streams over the IP network and do not use transport streams.

Whilst the differences may seem irrelevant to the consumer, the underlying technology employed is quite different and directly affects the range and quality of service that can be achieved. IPTV users are limited to a relatively small range of programs but at high quality, whereas an Internet TV user may have access to many thousands of channels from literally all over the world but without any guarantee of being able to watch them. Streaming services such as YouTube generally offer user-generated content (UGC) as individual short clips rather than professionally produced programs or films grouped as a channel.




Internet

[Image: Chris Young was voted into the 2007 Major League Baseball All-Star Game on the Internet via the All-Star Final Vote.]

The Internet is a global network of interconnected computers, enabling users to share information along multiple channels. Typically, a computer that connects to the Internet can access information from a vast array of available servers and other computers by moving information from them to the computer's local memory. The same connection allows that computer to send information to servers on the network; that information is in turn accessed and potentially modified by a variety of other interconnected computers. A majority of widely accessible information on the Internet consists of inter-linked hypertext documents and other resources of the World Wide Web (WWW). Computer users typically manage sent and received information with web browsers; other software for interacting with computer networks includes specialized programs for electronic mail, online chat, file transfer and file sharing.

The movement of information in the Internet is achieved via a system of interconnected computer networks that share data by packet switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.

The term internet is written both with and without an initial capital, and is used both with and without the definite article.

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. ARPA created the Information Processing Techniques Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO; he saw universal networking as a potential unifying human revolution.

Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI (later SRI International) in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet.

Following on from the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.

X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.

The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET, were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation of TCP/IP on the UNIX operating system.

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan European organisation for particle research, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

Using various statistics, AMD estimated the population of Internet users to be 1.5 billion as of January 2009.

New findings in the field of communications during the 1960s, 1970s and 1980s were quickly adopted by universities across North America.

Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village and NSTN in Nova Scotia. Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments (Nelson, Jennings, Stallman).

Graduate students played a huge part in the creation of the ARPANET. In the 1960s, the Network Working Group, which did most of the design for the ARPANET's protocols, was composed mainly of graduate students.

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.

By December 31, 2008, 1.574 billion people were using the Internet, according to Internet World Stats.

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet's software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standards-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in Requests for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the space of the software application (the Application Layer), e.g., a web browser, and just below it is the Transport Layer, which connects applications on different hosts via the network (e.g., the client-server model). The underlying network consists of two further layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and is thus the layer that establishes internetworking and the Internet, and, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g., a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible with it in the details of either description or implementation.
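
To make the division of labour concrete, here is a minimal sketch in Python: the application supplies raw bytes, the Transport Layer (TCP) carries them to a process on another host, and the Internet and Link Layers are handled invisibly by the operating system. The host name and port are placeholders, not a real service.

    import socket

    # Application Layer: the data the program wants delivered.
    payload = b"hello, internet\r\n"

    # Transport Layer: open a TCP connection to a hypothetical echo
    # service; everything below this API (IP packets, Ethernet frames)
    # is produced by the operating system, not by this code.
    with socket.create_connection(("example.org", 7)) as sock:
        sock.sendall(payload)
        print(sock.recv(1024))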

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in its commercial deployment phase around the world.
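
A quick illustration of the gap between the two address spaces, as a sketch using Python's standard ipaddress module:

    import ipaddress

    v4 = ipaddress.ip_network("0.0.0.0/0")   # the entire IPv4 space
    v6 = ipaddress.ip_network("::/0")        # the entire IPv6 space

    print(v4.num_addresses)   # 4294967296, the ~4.3 billion noted above
    print(v6.num_addresses)   # 2**128, roughly 3.4 x 10**38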

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet that is not accessible with IPv4 software. This means software upgrades are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems already support both versions of the Internet Protocol; network infrastructures, however, are still lagging in this development.
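
In application code, supporting both versions usually means resolving a name and trying whatever address families come back, rather than hard-coding IPv4. A minimal sketch, with a placeholder host name:

    import socket

    # getaddrinfo may return IPv6 (AF_INET6) and/or IPv4 (AF_INET)
    # addresses; the same loop connects over whichever works first.
    for family, type_, proto, _, addr in socket.getaddrinfo(
            "example.org", 80, type=socket.SOCK_STREAM):
        try:
            with socket.socket(family, type_, proto) as s:
                s.connect(addr)
                print("connected via", family.name)
                break
        except OSError:
            continue   # try the next returned address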

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
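
As an illustrative sketch only (a toy model, not a measurement of the real Internet), the scale-free property can be reproduced with the Barabasi-Albert preferential-attachment model; this assumes the third-party networkx package is installed:

    import networkx as nx

    g = nx.barabasi_albert_graph(n=10000, m=2, seed=42)
    hist = nx.degree_histogram(g)   # hist[k] = number of nodes of degree k
    for k in (2, 4, 8, 16, 32, 64):
        print(k, hist[k] if k < len(hist) else 0)
    # The counts fall off roughly as a power law in k; in a random
    # (Erdos-Renyi) graph they would cluster around the mean degree.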

These large-scale structures are in turn built up from many relatively smaller networks. See also the list of academic computer network organizations.

In computer network diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet Protocol (IP) addresses, and protocol port and parameter numbers. A globally unified namespace (i.e., a system of names in which there is at most one holder for each possible name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities. The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system. Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, IP addresses, protocol ports and parameter numbers.
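
The unified namespace is what lets any host turn a name into routable addresses. A minimal sketch using the standard library resolver, with a placeholder domain:

    import socket

    # The local resolver walks the DNS hierarchy, from the root zone
    # down to the name's authoritative servers, on our behalf.
    for *_, sockaddr in socket.getaddrinfo("example.org", None):
        print(sockaddr[0])   # an IPv4 or IPv6 address registered for the name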

On November 16, 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

The prevalent language for communication on the Internet is English. This may be a result of the Internet's origins, as well as English's role as a lingua franca. It may also be related to the poor capability of early computers, largely originating in the United States, to handle characters other than those in the English variant of the Latin alphabet.

After English (29% of Web visitors) the most requested languages on the World Wide Web are Chinese (19%), Spanish (9%), Japanese (6%), French (5%) and German (4%).

By region, 40% of the world's Internet users are based in Asia, 26% in Europe, 17% in North America, 10% in Latin America and the Caribbean, 4% in Africa, 3% in the Middle East and 1% in Oceania/Australia.

The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in most widely used languages. However, some glitches such as mojibake (incorrect display of foreign language characters, also known as kryakozyabry) still remain.
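
Mojibake is easy to reproduce: encode text under one character encoding and decode it under another. A minimal sketch in Python:

    # "Привет" (Russian for "hello"), encoded as UTF-8 but decoded as
    # if it were Windows-1252 -- the classic mojibake recipe.
    text = "Привет"
    garbled = text.encode("utf-8").decode("cp1252")
    print(garbled)   # prints "kryakozyabry" gibberish: ÐŸÑ€Ð¸Ð²ÐµÑ‚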

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and Web applications.

The Internet can now be accessed virtually anywhere by numerous means. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet from anywhere there is a cellular network supporting that device's technology.

Within the limitations imposed by the small screen and other limited facilities of such a pocket-sized device, all the services of the Internet, including email and web browsing, may be available in this way. Service providers may restrict the services offered, and charges for data access may be significant compared to home usage.

The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Even today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines outside both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them. Today, pictures and other files can be sent as e-mail attachments, and a single message can be addressed to multiple recipients.
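
As a minimal sketch of composing and relaying such a message with Python's standard library (the addresses, server name and file are placeholders, and the message may still cross intermediate servers unencrypted):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"                  # hypothetical sender
    msg["To"] = "bob@example.org, carol@example.org"   # multiple recipients
    msg["Subject"] = "Holiday photo"
    msg.set_content("Picture attached.")

    with open("photo.jpg", "rb") as f:                 # assumes the file exists
        msg.add_attachment(f.read(), maintype="image",
                           subtype="jpeg", filename="photo.jpg")

    with smtplib.SMTP("smtp.example.org") as server:   # hypothetical relay
        server.send_message(msg)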

Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.

The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies of, these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.
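
An HTTP exchange is a short request/response conversation. A minimal sketch with Python's standard http.client, against a placeholder host:

    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/index.html")        # ask for a resource by path
    response = conn.getresponse()
    print(response.status, response.reason)   # e.g. 200 OK
    html = response.read()                    # the document itself
    conn.close()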

Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer, Firefox and Apple Safari, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.

Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Using the Web, it is also easier than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page, a blog or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.

Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organisations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.

In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
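
The pattern is simple at heart: contributors write rows into a database, and visitors receive the content rendered as HTML. A minimal sketch with Python's built-in sqlite3 (the table and page are invented for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY,"
               " title TEXT, body TEXT)")

    # An editor saves content through an editing page.
    db.execute("INSERT INTO pages VALUES (?, ?, ?)",
               ("welcome", "Welcome", "Hello from the database."))

    # A visitor's request is answered with the content in final HTML form.
    title, body = db.execute("SELECT title, body FROM pages"
                             " WHERE slug = ?", ("welcome",)).fetchone()
    print(f"<html><body><h1>{title}</h1><p>{body}</p></body></html>")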

The Internet allows computer users to connect to other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements.

This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice.

An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into their normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of their normal files and data, including e-mail and other applications, while away from the office.

This concept is also referred to by some network security people as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into its employees' homes.

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to easily form in the first place. An example of this is the free software movement, which has produced Linux, Mozilla Firefox and OpenOffice.org, among others.

Internet "chat", whether in the form of IRC chat rooms or channels, or via instant messaging systems, allow colleagues to stay in touch in a very convenient way when working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via e-mail. Extensions to these systems may allow files to be exchanged, "whiteboard" drawings to be shared or voice and video contact between team members.

Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to make their contributions.

Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing.

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks.

In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—hopefully fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests.
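
Checking a digest takes only a few lines. A minimal sketch with Python's hashlib (the filename and the published digest are placeholders):

    import hashlib

    with open("download.zip", "rb") as f:            # assumes the file exists
        digest = hashlib.md5(f.read()).hexdigest()

    published = "9e107d9d372bb6826bd81d3542a419d6"   # hypothetical published value
    print("intact" if digest == published else "corrupted or tampered with")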

These simple features of the Internet, operating on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.

Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of material is much wider, from pornography to highly specialized, technical webcasts. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material on a worldwide basis.

Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound.

YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Users can watch videos without signing up; those who do sign up can upload an unlimited number of videos and build a personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands, of videos daily.

VoIP stands for Voice over Internet Protocol, referring to the IP protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL.
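
At its core the idea is to chop the audio stream into small chunks and send each as a datagram, trading guaranteed delivery for low latency. A toy sketch over the loopback interface (real systems add codecs, sequencing and timestamps via protocols such as RTP):

    import socket

    ADDR = ("127.0.0.1", 5004)
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(ADDR)

    for i in range(3):
        frame = bytes([i]) * 160      # stand-in for ~20 ms of 8 kHz audio
        sender.sendto(frame, ADDR)    # one datagram per audio frame

    for _ in range(3):
        data, _ = receiver.recvfrom(2048)
        print(len(data), "byte frame received")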

VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.

Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls.

Remaining problems for VoIP include emergency telephone number dialling and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices.

VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak, among others. The PlayStation 3 and Xbox 360 also offer VoIP chat features.

Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G technology cell phones.

Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based; they are widely used for purposes such as booking tickets, making bank deposits and paying bills online.

Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks, and commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.

High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used. An Internet access provider and protocol matrix differentiates the methods used to get online.

The Internet has made possible entirely new forms of social interaction, activities and organizing, thanks to its basic features such as widespread usability and access.

Social networking websites such as Facebook and MySpace have created a new form of socialization and interaction. Users of these sites are able to add a wide variety of items to their personal pages, to indicate common interests, and to connect with others. It is also possible to find a large circle of existing acquaintances, especially if a site allows users to utilize their real names, and to allow communication among large existing groups of people.

Sites like meetup.com exist to allow wider announcement of groups which may exist mainly for face-to-face meetings, but which may have a variety of minor interactions over their group's pages at meetup.com or other similar sites.

In democratic societies, the Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States became famous for its ability to generate donations via the Internet. Many political groups use the Internet to achieve a whole new method of organizing, in order to carry out Internet activism.

Some governments, such as those of Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their countries can access on the Internet, especially political and religious content. This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.

In Norway, Denmark, Finland and Sweden, major Internet service providers have voluntarily (possibly to avoid such an arrangement being turned into law) agreed to restrict access to sites listed by police. While this list of forbidden URLs is only supposed to contain addresses of known child pornography sites, the content of the list is secret.

Many countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child pornography, illegal, but do not use filtering software.

There are many free and commercially available software programs with which a user can choose to block offensive websites on individual computers or networks, such as to limit a child's access to pornography or violence. See Content-control software.

The Internet has been a major source of leisure since before the World Wide Web, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much of the main traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas.

The pornography and gambling industries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although many governments have attempted to put restrictions on both industries' use of the Internet, this has generally failed to stop their widespread popularity.

One main area of leisure on the Internet is multiplayer gaming. This form of leisure creates communities, bringing together people of all ages and origins to enjoy the fast-paced world of multiplayer games. These range from MMORPGs and first-person shooters to role-playing games and online gambling. This has revolutionized the way many people interact and spend their free time on the Internet.

While online gaming has been around since the 1970s, modern modes of online gaming began with services such as GameSpy and MPlayer, to which players of games would typically subscribe. Non-subscribers were limited to certain types of gameplay or certain games.

Many use the Internet to access and download music, movies and other works for their enjoyment and relaxation. As discussed above, there are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. Some of these sources take more care over the original artists' rights and over copyright laws than others.

Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays and to find out more about their random ideas and casual interests.

People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites like MySpace, Facebook and many others like them also put and keep people in contact for their enjoyment.

The Internet has seen a growing number of Web desktops, where users can access their files, folders, and settings via the Internet.

Cyberslacking has become a serious drain on corporate resources; the average UK employee spends 57 minutes a day surfing the Web at work, according to a study by Peninsula Business Services.

Many computer scientists see the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and the physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization; for example, data transfer rates exhibit temporal self-similarity. Further adding to the complexity of the Internet is the ability of more than one computer to use the Internet through a single node, creating the possibility of a very deep and hierarchical sub-network that can, in theory, be extended indefinitely (disregarding the addressing limitations of the IPv4 protocol). The principles of this architecture date back to the 1960s, and it might not be the solution best suited to modern needs; the possibility of developing alternative structures is therefore being investigated.

According to a June 2007 article in Discover magazine, the combined weight of all the electrons moved within the Internet in a day is 0.2 millionths of an ounce. Others have estimated this at nearer 2 ounces (50 grams).

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also revolutionized shopping: for example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or group of people more effectively than any other advertising medium.

Examples of personalized marketing include online communities such as MySpace, Friendster, Orkut, Facebook and others which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young teens and adolescents ranging from 13 to 25 years old. In turn, when they advertise themselves they advertise interests and hobbies, which online marketing companies can use as information as to what those users will purchase online, and advertise their own companies' products to those users.

Source: Wikipedia