Internet Service Providers


Internet Service Providers Association of Pakistan

Internet Service Providers Association of Pakistan (ISPAK) is a non-profit organization comprising the Internet service providers of Pakistan. ISPAK was formed in 1997 to act as a platform from which service providers could deal with the regulator, the Pakistan Telecommunication Authority (PTA).

One of the most important achievements of ISPAK was to convince the Government of Pakistan and the incumbent Pakistan Telecommunication Company Limited (PTCL) to implement single-pulse metering for dialup Internet, back in 1998. For the majority of Internet users, this meant paying for only a single call even if they remained connected to their ISP dialup account for hours. Local calls are not free in Pakistan, and multi-pulse metering would have meant that dialup users were charged for a new call every three minutes.

ISPAK proactively monitors the regulations introduced by the regulator (PTA) and takes steps to ensure that Internet users are provided access to the Internet free of any censorship.


Internet Service Providers Association

The Internet Service Providers Association, or ISPA, is a British body representing providers of Internet Services.

ISPA was established in 1995 as the first trade association for ISPs, promoting competition, self-regulation and progress within the internet industry. Members are signatories to the ISPA Code of "good practice" binding ISPs to a common industry standard.

As a trade association, membership is voluntary but the companies who choose to become members of ISPA agree to abide by the ISPA United Kingdom Code. ISPA members' allegiance to the Code means that consumers can view the ISPA UK logo as a mark of commitment to good business practice.

ISPA's main activity is in making representations on behalf of the industry to Government bodies, such as the Home Office, the Department for Business, Enterprise and Regulatory Reform (former DTI) and Ofcom. Government and political representatives often approach ISPA for its knowledge and expertise.

ISPA represents members to Government and extra-parliamentary bodies in the UK, as well as dealing with media enquiries relating to ISPs' role online. Policy is directed by the ISPA Council, representing the interests of over 100 members in the UK. ISPA spends a good deal of its time dealing with Ofcom.

Policies are agreed by the ISPA Council, a body of up to ten people selected from and representing the various interests of the membership. The Council is served by a secretariat.

ISPA UK was instrumental in establishing EuroISPA, a European federation of Internet Services Providers' Associations. EuroISPA voices ISPs' concerns to politicians and officials at European Union level and influences EU Internet policies. ISPA also organises "The Ispas", an annual award ceremony to showcase the "best" of the UK internet industry.


List of Internet service providers in Pakistan

This is a list of the licensed Internet service providers in Pakistan.


Broadband Internet access

Broadband subscriptions in 2005

Broadband Internet access, often shortened to just broadband, is high data rate Internet access—typically contrasted with dial-up access over a modem.

Dial-up modems are generally only capable of a maximum bitrate of 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply at least double this bandwidth and generally without disrupting telephone use.

Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defines broadband as anything above 768 kbit/s. The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services.

Data rates are defined in terms of maximum download because several common consumer broadband technologies such as ADSL are "asymmetric"—supporting much slower maximum upload data rate than download.

Broadband is often called "high-speed" Internet because it usually has a high rate of data transmission. In general, any connection to the customer of 256 kbit/s (0.256 Mbit/s) or greater is considered broadband Internet. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity faster than primary rate ISDN, at 1.5 to 2 Mbit/s. The FCC definition of broadband is 768 kbit/s (0.768 Mbit/s). The Organization for Economic Co-operation and Development (OECD) has defined broadband as 256 kbit/s in at least one direction, and this bit rate is the most common baseline marketed as "broadband" around the world. There is no specific bitrate defined by the industry, however, and "broadband" can cover lower-bitrate transmission methods; some Internet service providers (ISPs) use this to their advantage in marketing lower-bitrate connections as broadband.
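To make these overlapping definitions concrete, here is a minimal sketch that checks a downlink rate against the threshold figures cited above (the dictionary labels are informal, and the ITU-T figure is the primary rate ISDN rate used as its baseline):

```python
# Minimum downlink rates (kbit/s) under the definitions cited above.
BROADBAND_THRESHOLDS_KBITS = {
    "OECD (2006)": 256,
    "FCC (2008)": 768,
    "ITU-T I.113 (above primary rate ISDN)": 1544,
}

def is_broadband(definition: str, downlink_kbits: int) -> bool:
    """True if a downlink rate meets the named minimum."""
    return downlink_kbits >= BROADBAND_THRESHOLDS_KBITS[definition]

# A 512 kbit/s line is "broadband" to the OECD but not to the FCC:
print(is_broadband("OECD (2006)", 512))  # True
print(is_broadband("FCC (2008)", 512))   # False
```

The same connection can thus be marketed as broadband in one jurisdiction and fall short of the definition in another.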

In practice, the advertised bandwidth is not always reliably available to the customer; ISPs often sign up more subscribers than their backbone connection can handle simultaneously, under the assumption that most users will not be using their full connection capacity very frequently. This aggregation strategy works more often than not, so users can typically burst to their full bandwidth most of the time; however, peer-to-peer (P2P) file sharing systems, which often require extended periods of high bandwidth, strain these assumptions and can cause major problems for ISPs that have excessively overbooked their capacity. For more on this topic, see traffic shaping. As takeup of these introductory products increases, telcos are starting to offer higher bit rate services. For existing connections, this usually involves no more than reconfiguring the equipment at each end of the connection.

As the bandwidth delivered to end users increases, the market expects that video on demand services streamed over the Internet will become more popular, though at the present time such services generally require specialized networks. The data rates on most broadband services still do not suffice to provide good quality video, as MPEG-2 video requires about 6 Mbit/s for good results. Adequate video for some purposes becomes possible at lower data rates, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC. The MPEG-4 format delivers high-quality video at 2 Mbit/s, at the low end of cable modem and ADSL performance.
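The bitrate figures above determine which video applications a given line can carry. A small sketch, treating the rates quoted in the text as rough guides rather than codec specifications:

```python
# Approximate video bitrates (kbit/s) mentioned above.
VIDEO_RATES_KBITS = {
    "MPEG-2 (good quality)": 6000,
    "MPEG-4 (high quality)": 2000,
    "video conferencing": 384,
    "H.264 videophone": 100,
}

def feasible_video(downlink_kbits: int) -> list:
    """Video uses whose bitrate fits within the given downlink."""
    return [name for name, rate in VIDEO_RATES_KBITS.items()
            if rate <= downlink_kbits]

# A 2 Mbit/s ADSL line handles MPEG-4 but falls short of MPEG-2:
print(feasible_video(2000))
```

In practice some headroom above the nominal video bitrate is needed for protocol overhead and rate variation, so these are optimistic bounds.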

Increased bandwidth has already made an impact on newsgroups: postings to groups such as alt.binaries.* have grown from JPEG files to entire CD and DVD images. According to NTL, traffic on their network increased from 150 gigabytes of inbound news feed data and 1 terabyte of outbound data each day in 2001 to 500 gigabytes inbound and over 4 terabytes outbound each day in 2002.

The standard broadband technologies in most areas are DSL and cable modems. Newer technologies in use include VDSL and pushing optical fiber connections closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber to the premises and fiber to the curb schemes, has played a crucial role in enabling Broadband Internet access by making transmission of information over larger distances much more cost-effective than copper wire technology. In a few areas not served by cable or ADSL, community organizations have begun to install Wi-Fi networks, and in some cities and towns local governments are installing municipal Wi-Fi networks. As of 2006, broadband mobile Internet access has become available at the consumer level in some countries, using the HSDPA and EV-DO technologies. The newest technology being deployed for mobile and stationary broadband access is WiMAX.

Roughly double the dial-up rate can be achieved with multilinking technology. What is required are two modems, two phone lines, two dial-up accounts, and ISP support for multilinking, or special software at the user end. This inverse multiplexing option was popular with some high-end users before ISDN, DSL and other technologies became available.

Diamond and other vendors created dual phone line modems with bonding capability. The data rate of dual-line modems exceeds 90 kbit/s, but the Internet and phone charges are twice the ordinary dial-up charge.

Load balancing takes two Internet connections and feeds them into a network as a single, double-data-rate, more resilient Internet connection. With two independent Internet providers, the load-balancing hardware automatically uses the line with the least load, which also means that should one line fail, the second automatically takes up the slack.
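The selection rule just described can be sketched as a toy function: prefer the least-loaded line that is up, and fail over automatically when a line goes down (the line names and load figures here are invented):

```python
# Each line is (name, fractional load 0..1, is_up); values are invented.
def pick_line(lines):
    """Choose the least-loaded line that is currently up."""
    up = [line for line in lines if line[2]]
    if not up:
        return None
    return min(up, key=lambda line: line[1])[0]

both_up = [("isp_a", 0.2, True), ("isp_b", 0.6, True)]
a_down = [("isp_a", 0.2, False), ("isp_b", 0.6, True)]
print(pick_line(both_up))  # isp_a carries new traffic (lighter load)
print(pick_line(a_down))   # isp_b takes up the slack on failure
```

Real load-balancing hardware works per flow rather than per packet, but the decision logic is essentially this.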

Integrated Services Digital Network (ISDN) is one of the oldest broadband digital access methods for consumers and businesses to connect to the Internet. It is a telephone data service standard. Its use in the United States peaked in the late 1990s prior to the availability of DSL and cable modem technologies. Broadband service is usually compared to ISDN-BRI because this was the standard broadband access technology that formed a baseline for the early broadband providers, who sought to compete against ISDN by offering faster and cheaper services to consumers.

A basic rate ISDN line (known as ISDN-BRI) is an ISDN line with 2 data "bearer" channels (DS0 - 64 kbit/s each). Using ISDN terminal adapters (erroneously called modems), it is possible to bond together 2 or more separate ISDN-BRI lines to reach bandwidths of 256 kbit/s or more. The ISDN channel bonding technology has been used for video conference applications and broadband data transmission.

Primary rate ISDN, known as ISDN-PRI, is an ISDN line with 23 DS0 channels and a total bandwidth of 1,544 kbit/s (US standard). An ISDN E1 (European standard) line has 30 DS0 channels and a total bandwidth of 2,048 kbit/s. Because ISDN is a telephone-based product, much of the terminology and many physical aspects of the line are shared with the ISDN-PRI used for voice services. An ISDN line can therefore be "provisioned" for voice or data with many different options, depending on the equipment being used at any particular installation and on the offerings of the telephone company's central office switch. Most ISDN-PRIs are used for telephone voice communication through large PBX systems rather than for data. One obvious exception is that ISPs usually have ISDN-PRIs for handling ISDN data and modem calls.

It is mainly of historical interest that many of the earlier ISDN data lines used 56 kbit/s rather than 64 kbit/s "B" channels of data. This caused ISDN-BRI to be offered at both 128 kbit/s and 112 kbit/s rates, depending on the central office's switching equipment.

T-1 lines are highly regulated services, traditionally intended for businesses, that are managed through Public Service Commissions (PSCs) in each state, must be fully defined in PSC tariff documents, and have management rules dating back to the early 1980s which still refer to teletypes as potential connection devices. As such, T-1 services have very strict and rigid service requirements, which drive up the provider's maintenance costs and may require a technician on standby 24 hours a day to repair the line if it malfunctions. (In comparison, ISDN and DSL are not regulated by the PSCs at all.) Due to the expensive and regulated nature of T-1 lines, they are normally installed under a written agreement, with a contract term of typically one to three years. However, there are usually few restrictions on an end-user's use of a T-1; uptime and bandwidth data rates may be guaranteed, quality of service may be supported, and blocks of static IP addresses are commonly included.

Since the T-1 was originally conceived for voice transmission, and voice T-1's are still widely used in businesses, the term can be confusing to the uninitiated subscriber. It is often best to refer to the type of T-1 being considered, using the appropriate "data" or "voice" prefix to differentiate between the two. A voice T-1 terminates at a phone company's central office (CO) for connection to the PSTN; a data T-1 terminates at a point of presence (POP) or data center. The T-1 line between a customer's premises and the POP or CO is called the local loop. The owner of the local loop need not be the owner of the network at the POP where the T-1 connects to the Internet, so a T-1 subscriber may have separate contracts with these two organizations.

The nomenclature for a T-1 varies widely: it is cited in some circles as a DS-1, a T1.5, a T1, or a DS1. Some of these usages try to distinguish different aspects of the line, considering the data standard a DS-1 and the physical structure of the trunk line a T-1 or T-1.5. T-1's are also called leased lines, though that terminology usually refers to data rates under 1.5 Mbit/s; at times a T-1 is included in the term "leased line" and at times excluded from it. Whatever it is called, it is inherently related to other broadband access methods, including T-3, SONET OC-3, and other T-carrier and Optical Carrier services. Additionally, multiple T-1's may be aggregated to produce an nxT-1, such as a 4xT-1, which has exactly 4 times the bandwidth of a T-1.

When a T-1 is installed, there are a number of choices to be made: the carrier, the location of the demarcation point, the type of channel service unit (CSU) or data service unit (DSU) used, the WAN IP router used, the bandwidths chosen, etc. Specialized WAN routers are used with T-1 lines to route Internet or VPN data onto the T-1 line from the subscriber's packet-based (TCP/IP) network using customer premises equipment (CPE). The CPE typically consists of a CSU/DSU that converts the DS-1 data stream of the T-1 to a TCP/IP packet data stream for use in the customer's Ethernet LAN. Notably, many T-1 providers optionally maintain and/or sell the CPE as part of the service contract, which can affect the demarcation point and the ownership of the router, CSU, or DSU.

Although a T-1 has a maximum of 1.544 Mbit/s, a fractional T-1 might be offered which only uses an integer multiple of 128 kbit/s for bandwidth. In this manner, a customer might only purchase 1/12th or 1/3 of a T-1, which would be 128 kbit/s and 512 kbit/s, respectively.
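The fractional T-1 arithmetic above can be checked directly. A sketch, assuming the usual payload of 24 channels at 64 kbit/s each (1,536 kbit/s of usable bandwidth under the 1.544 Mbit/s line rate):

```python
DS0_KBITS = 64                      # one T-carrier channel
T1_PAYLOAD_KBITS = 24 * DS0_KBITS   # 1536 kbit/s of usable channels

def fractional_t1(units_of_128: int) -> int:
    """Bandwidth of a fractional T-1 sold in 128 kbit/s increments."""
    return 128 * units_of_128

# 1/12 and 1/3 of the payload, matching the figures in the text:
print(T1_PAYLOAD_KBITS // 12)  # 128
print(T1_PAYLOAD_KBITS // 3)   # 512
print(fractional_t1(4))        # 512
```

The 128 kbit/s granularity is simply two bonded DS0 channels, which is why fractional offerings come in those steps.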

T-1 and fractional T-1 data lines are symmetric, meaning that their upload and download data rates are the same.

Where available, Ethernet broadband access suggests very fast Internet access. However, just because Ethernet is offered does not mean that the full 10, 100, or 1000 Mbit/s connection can be utilized for direct Internet access. In a college dormitory, for example, 100 Mbit/s Ethernet access might be fully available to on-campus networks, while Internet access bandwidth is closer to a 4xT-1 data rate (6 Mbit/s). When a broadband connection is shared with others in a building, the access bandwidth of the leased line into the building naturally governs each end-user's data rate.

However, in certain locations, true Ethernet broadband access might be available. This would most commonly be the case at a POP or a data center, and not at a typical residence or business. When Ethernet Internet access is offered, it could be fiber-optic or copper twisted pair, and the bandwidth will conform to standard Ethernet data rates of up to 10 Gbit/s. The primary advantage is that no special hardware is needed for Ethernet. Ethernet also has a very low latency.

One of the great challenges of broadband is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easy for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected.

Several rural broadband solutions exist, though each has its own pitfalls and limitations. Some choices are better than others, but much depends on how proactive the local phone company is about upgrading its rural technology.

Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas.

Satellite Internet employs a satellite in geostationary orbit to relay data from the satellite company to each customer. It is usually among the most expensive ways of gaining broadband Internet access, but in rural areas it may compete only with cellular broadband. Costs have been coming down in recent years, however, to the point that it is becoming more competitive with other broadband options. The German ISP Filiago offers the ASTRA2Connect satellite Internet system for €320 (equipment) plus €100 (registration) and a flat-rate monthly fee dependent on bandwidth - from €20 for 256 kbit/s download and 64 kbit/s upload, to €80 for 2,048 kbit/s download and 128 kbit/s upload.

Satellite Internet also has a high latency problem caused by the signal having to travel 35,000 km (22,000 miles) out into space to the satellite and back to Earth again. The signal delay can be as much as 500 milliseconds to 900 milliseconds, which makes this service unsuitable for applications requiring real-time user input such as certain multiplayer Internet games and first-person shooters played over the connection. Despite this, it is still possible for many games to be played, but the scope is limited to real-time strategy or turn-based games. The functionality of live interactive access to a distant computer can also be subject to the problems caused by high latency. These problems are more than tolerable for just basic email access and web browsing and in most cases are barely noticeable.

There is no simple way around this problem. The delay is primarily due to the speed of light, 300,000 km/second (186,000 miles per second). Even if all other signaling delays could be eliminated, it would still take the electromagnetic wave 233 milliseconds to travel from the ground up to the satellite and back down, a total of 70,000 km (44,000 miles) from the user to the satellite company's ground station.

Since the satellite is usually being used for two-way communications, the total distance increases to 140,000 km (88,000 miles), which takes a radio wave 466 ms to travel. Factoring in normal delays from other network sources gives a typical connection latency of 500-700 ms. This is far worse than the latency most dial-up modem users experience, typically only 150-200 ms in total.
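The latency figures above follow directly from the geometry. A sketch using the same round numbers (35,000 km to a geostationary satellite, light at 300,000 km/s):

```python
SPEED_OF_LIGHT_KM_S = 300_000  # approximate
GEO_ALTITUDE_KM = 35_000       # round figure used above

def propagation_ms(path_km: float) -> float:
    """Milliseconds for a radio signal to cover path_km."""
    return path_km / SPEED_OF_LIGHT_KM_S * 1000

one_way = propagation_ms(2 * GEO_ALTITUDE_KM)     # user -> sat -> ground
round_trip = propagation_ms(4 * GEO_ALTITUDE_KM)  # request and reply
print(int(one_way), int(round_trip))  # 233 466
```

Since this delay is pure speed-of-light propagation, no amount of better equipment on either end can remove it; only a lower orbit would.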

Most satellite Internet providers also have a Fair Access Policy (FAP). Perhaps one of the largest disadvantages of satellite Internet, these FAPs usually throttle a user's throughput to dial-up data rates after a certain "invisible wall" is hit (usually around 200 MB a day). The throttling usually lasts for 24 hours after the wall is hit, after which the user's throughput is restored to whatever tier they paid for. This makes bandwidth-intensive activities, such as P2P and newsgroup binary downloads, nearly impossible to complete in a reasonable amount of time.
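The throttling behaviour can be sketched as a simple rule. The 200 MB daily figure comes from the text above; the dial-up-like fallback rate is an assumption for illustration:

```python
DAILY_CAP_MB = 200    # the "invisible wall" cited above
THROTTLED_KBITS = 56  # assumed dial-up-like rate once capped

def effective_rate_kbits(used_mb_today: float, plan_kbits: int) -> int:
    """Rate a subscriber sees under this simplified Fair Access Policy."""
    if used_mb_today >= DAILY_CAP_MB:
        return THROTTLED_KBITS  # throttled for the next 24 hours
    return plan_kbits

print(effective_rate_kbits(150, 1500))  # 1500: under the cap
print(effective_rate_kbits(250, 1500))  # 56: throttled
```

Real FAPs vary in window length and fallback rate, but the step-function shape is the same.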

The European ASTRA2Connect system has a FAP based on a monthly limit of 2Gbyte of data downloaded, with download data rates reduced for the remainder of the month if the limit is exceeded.

Cellular phone towers are very widespread, and as cellular networks move to third-generation (3G) networks they can support fast data, using technologies such as EVDO, HSDPA and UMTS.

These can give broadband Internet access via a cell phone; via CardBus, ExpressCard, or USB cellular modems; or via cellular broadband routers, which allow more than one computer to be connected to the Internet over a single cellular connection.

Power-line Internet is a new service still in its infancy that may eventually permit broadband Internet data to travel down standard high-voltage power lines. However, the system has a number of complex issues, the primary one being that power lines are inherently a very noisy environment: every time a device turns on or off, it introduces a pop or click into the line, and energy-saving devices often introduce noisy harmonics. The system must be designed to deal with these natural signaling disruptions and work around them.

Broadband over power lines (BPL), also known as Power line communication, has developed faster in Europe than in the US due to a historical difference in power system design philosophies. Nearly all large power grids transmit power at high voltages in order to reduce transmission losses, then near the customer use step-down transformers to reduce the voltage. Since BPL signals cannot readily pass through transformers, repeaters must be attached to the transformers. In the US, it is common for a small transformer hung from a utility pole to service a single house. In Europe, it is more common for a somewhat larger transformer to service 10 or 100 houses. For delivering power to customers, this difference in design makes little difference, but it means delivering BPL over the power grid of a typical US city will require an order of magnitude more repeaters than would be required in a comparable European city.

The second major issue is signal strength and operating frequency. The system is expected to use frequencies in the 10 to 30 MHz range, which has been used for decades by licensed amateur radio operators, as well as international shortwave broadcasters and a variety of communications systems (military, aeronautical, etc.). Power lines are unshielded and will act as transmitters for the signals they carry, and have the potential to completely wipe out the usefulness of the 10 to 30 MHz range for shortwave communications purposes.

Wireless ISPs typically employ current low-cost 802.11 Wi-Fi radio systems to link up remote locations over great distances, but can use other higher-power radio communications systems as well.

Traditional 802.11b was licensed for omnidirectional service spanning only 100-150 meters (300-500 ft). By focusing the signal down to a narrow beam with a Yagi antenna it can instead operate reliably over a distance of many miles.

Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. There are, however, a number of companies that provide this service commercially, and a wireless Internet access provider map for the USA is publicly available for WISPs.

In the end, the disadvantages outweighed the advantages and the glut of fiberoptic capacity that ensued following the collapse of the Internet bubble drove the cost of transmission so low that an ancillary service such as this was unnecessary, and the company folded at the end of 2005. The partner television stations as well as over 500 additional television stations not part of the iBlast Network continue to transmit separate digital signals as mandated by the Telecommunications Act of 1996.

WorldSpace is a digital satellite radio network based in Washington, DC. It covers most of Asia and Europe plus all of Africa by satellite. Besides digital audio, users can receive a one-way broadband digital data transmission (150 kbit/s) from the satellite.

Traditionally, ISPs have used an "unlimited time" or flat-rate model, with pricing determined by the maximum bitrate chosen by the customer rather than an hourly charge. However, the use of high-bandwidth applications is increasing rapidly, with growing consumer demand for streaming content such as video on demand, as well as peer-to-peer file sharing.

For ISPs that are bandwidth limited, this model may become unsustainable as demand for bandwidth increases. Fixed costs represent 80-90% of the cost of providing broadband service, and although most ISPs keep their costs secret, the total cost (January 2008) is estimated at about $0.10 per gigabyte. Currently some ISPs estimate that about 5% of users consume about 50% of the total bandwidth.

Some ISPs have begun experimenting with usage-based pricing, notably a Time Warner test in Beaumont, Texas. Bell Canada has imposed bandwidth caps on customers, with pricing ranging from $1 to $7.50 per gigabyte ($1 to $2.50 per gigabyte on their current plans) for usage over certain limits.
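Under usage-based pricing of this kind, a monthly bill is the base price plus a per-gigabyte charge on usage beyond the cap. A sketch with illustrative numbers (the cap and rates below are invented, not Bell Canada's or Time Warner's actual plan terms):

```python
def monthly_bill(base: float, used_gb: float, cap_gb: float,
                 per_gb_over: float) -> float:
    """Base price plus overage charges beyond the cap (illustrative)."""
    overage_gb = max(0.0, used_gb - cap_gb)
    return base + overage_gb * per_gb_over

# 60 GB used on a $30/month plan with a 40 GB cap at $2.50/GB overage:
print(monthly_bill(30.0, 60, 40, 2.50))  # 80.0
```

The comparison with the estimated $0.10/GB provisioning cost cited above is what makes per-gigabyte overage rates of $1 to $7.50 controversial.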

An often overlooked analysis when choosing an Internet provider is comparing the different DSL and cable Internet services at the plan level. Doing so helps ensure that consumers do not overpay for bandwidth they will not utilize.

E-mail


The interface of an e-mail client, Thunderbird.

Electronic mail, often abbreviated as e-mail, email, or eMail, is any method of creating, transmitting, or storing primarily text-based human communications with digital communications systems. Historically, a variety of electronic mail system designs evolved that were often incompatible or not interoperable. With the proliferation of the Internet since the early 1980s, however, the standardization efforts of Internet architects succeeded in promulgating a single standard based on the Simple Mail Transfer Protocol (SMTP), first published as Internet Standard 10 (RFC 821) in 1982.

Modern e-mail systems are based on a store-and-forward model in which e-mail server systems accept, forward, or store messages on behalf of users, who connect to the e-mail infrastructure with a personal computer or other network-enabled device only for the duration of message transmission or retrieval to or from their designated server. Rarely is e-mail transmitted directly from one user's device to another's.

While, originally, e-mail consisted only of text messages composed in the ASCII character set, virtually any media format can be sent today, including attachments of audio and video clips.

The spellings e-mail and email are both common. Several prominent journalistic and technical style guides recommend e-mail, and the spelling email is also recognized in many dictionaries. In the original RFC neither spelling is used; the service is referred to as mail, and a single piece of electronic mail is called a message. The plural form "e-mails" (or emails) is also recognised.

Newer RFCs and IETF working groups require email for consistent capitalization, hyphenation, and spelling of terms. ARPAnet/DARPAnet users and early developers from Unix, CMS, AppleLink, eWorld, AOL, GEnie, and HotMail used eMail with the letter M capitalized. The authors of some of the original RFCs used eMail when giving their own addresses.

Donald Knuth considers the spelling "e-mail" to be archaic, and notes that it is more often spelled "email" in the UK. In some other European languages the word "email" is similar to the word "enamel".

E-mail predates the inception of the Internet, and was in fact a crucial tool in creating the Internet.

MIT first demonstrated the Compatible Time-Sharing System (CTSS) in 1961. It allowed multiple users to log into the IBM 7094 from remote dial-up terminals, and to store files online on disk. This new ability encouraged users to share information in new ways. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the exact history is murky, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.

E-mail was quickly extended to become network e-mail, allowing users to pass messages between different computers by 1966 or earlier (it is possible that the SAGE system had something similar some time before).

The ARPANET computer network made a large contribution to the development of e-mail. There is one report that indicates experimental inter-system e-mail transfers began shortly after its creation in 1969. Ray Tomlinson initiated the use of the "@" sign to separate the names of the user and their machine in 1971. The ARPANET significantly increased the popularity of e-mail, and it became the killer app of the ARPANET.

A typical sequence of events takes place when Alice composes a message using her mail user agent (MUA): she enters the e-mail address of her correspondent and hits the "send" button.

It used to be the case that many MTAs would accept messages for any recipient on the Internet and do their best to deliver them. Such MTAs are called open mail relays. This was very important in the early days of the Internet, when network connections were unreliable: if an MTA couldn't reach the destination, it could at least deliver the message to a relay closer to the destination, which would have a better chance of delivering it later. However, this mechanism proved exploitable by people sending unsolicited bulk e-mail, and as a consequence very few modern MTAs are open mail relays; many MTAs will not accept messages from open mail relays because such messages are very likely to be spam.

Note that the people, e-mail addresses and domain names in this explanation are fictional: see Alice and Bob.

The format of Internet e-mail messages is defined in RFC 5322 and a series of RFCs, RFC 2045 through RFC 2049, collectively called Multipurpose Internet Mail Extensions (MIME). Although as of July 13, 2005, RFC 2822 was technically a proposed IETF standard and the MIME RFCs were draft IETF standards, these documents are the de facto standards for the format of Internet e-mail. Prior to the introduction of RFC 2822 in 2001, the format described by RFC 822 had been the standard for Internet e-mail for nearly 20 years; it is still the official IETF standard. The IETF reserved the numbers 5321 and 5322 for the updated versions of RFC 2821 (SMTP) and RFC 2822, as it previously did with RFC 821 and RFC 822, honoring the extreme importance of these two RFCs. RFC 822 was published in 1982 and based on the earlier RFC 733.

The header is separated from the body by a blank line.

Each message has exactly one header, which is structured into fields. Each field has a name and a value. RFC 5322 specifies the precise syntax.

Informally, each line of text in the header that begins with a printable character begins a separate field. The field name starts in the first character of the line and ends before the separator character ":". The separator is then followed by the field value (the "body" of the field). The value is continued onto subsequent lines if those lines have a space or tab as their first character. Field names and values are restricted to 7-bit ASCII characters. Non-ASCII values may be represented using MIME encoded words.
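These field and folding rules can be seen with Python's standard-library email parser; the following is a sketch using an invented sample message:

```python
from email import message_from_string, policy

# A minimal RFC 5322 message: header fields, one folded field,
# a blank line, then the body.
raw = (
    "From: alice@example.org\n"
    "To: bob@example.com\n"
    "Subject: a long subject line that is\n"
    " folded onto a second line\n"
    "\n"
    "Hello Bob.\n"
)

msg = message_from_string(raw, policy=policy.default)

# The parser unfolds the continued line back into a single field value.
print(msg["Subject"])   # a long subject line that is folded onto a second line
# The body is everything after the first blank line.
print(msg.get_content(), end="")
```

Note that the continuation line's leading space survives unfolding, joining the two halves of the Subject value with a single space.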

Note that the "To" field is not necessarily related to the addresses to which the message is delivered. The actual delivery list is supplied in the SMTP protocol, not extracted from the header content: the "To" field is similar to the greeting at the top of a conventional letter, which is delivered according to the address on the outer envelope. Also note that the "From" field does not have to name the real sender of the message. It is very easy to fake the "From" field and make a message seem to come from any mail address. It is possible to digitally sign e-mail, which is much harder to fake. Some Internet service providers do not relay e-mail claiming to come from a domain not hosted by them, but very few (if any) check to make sure that the person or even e-mail address named in the "From" field is the one associated with the connection. Some Internet service providers apply e-mail authentication systems to e-mail being sent through their MTA to allow other MTAs to detect forged spam that might appear to come from them.

Many e-mail clients present "Bcc" (Blind carbon copy, recipients not visible in the "To" field) as a header field. Different protocols are used to deal with the "Bcc" field; at times the entire field is removed, whereas other times the field remains but the addresses therein are removed. Addresses added as "Bcc" are only added to the SMTP delivery list, and do not get included in the message data.
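One common implementation pattern, sketched here in Python with the fictional addresses above, is to build the SMTP envelope recipient list from all address fields and then delete the "Bcc" field before the message data is transmitted (Python's smtplib.SMTP.send_message handles Bcc the same way):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.com"
msg["Bcc"] = "carol@example.org"
msg.set_content("Meeting at noon.")

# Envelope recipients: everyone, including the blind-copied address.
envelope = [str(a) for a in msg["To"].addresses + msg["Bcc"].addresses]

# The Bcc field is dropped from the message data itself, so Bob
# never learns that Carol received a copy.
del msg["Bcc"]

print(envelope)       # ['bob@example.com', 'carol@example.org']
print("Bcc" in msg)   # False
```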

IANA maintains a list of standard header fields.

E-mail was originally designed for 7-bit ASCII. Much e-mail software is 8-bit clean but must assume it will be communicating with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted-printable for mostly 7-bit content with a few characters outside that range, and base64 for arbitrary binary data. The 8BITMIME extension was introduced to allow transmission of mail without the need for these encodings, but many mail transport agents still do not support it fully. In some countries several encoding schemes coexist; as a result, by default, a message in a non-Latin alphabet appears unreadable unless the sender and receiver happen to use the same encoding scheme. Therefore, for international character sets, Unicode is growing in popularity.
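The two MIME transfer encodings, and the "encoded word" form used for non-ASCII header values, can be sketched with Python's standard library (the sample string is invented):

```python
import base64
import quopri
from email.header import Header

text = "Grüße aus Köln"            # non-ASCII content
utf8 = text.encode("utf-8")

# Quoted-printable: readable where the text is mostly ASCII;
# each non-ASCII byte becomes an =XX escape.
qp = quopri.encodestring(utf8)
print(qp.decode("ascii"))          # Gr=C3=BC=C3=9Fe aus K=C3=B6ln

# Base64: denser, and safe for arbitrary binary data.
b64 = base64.b64encode(utf8)
print(b64.decode("ascii"))

# In a header field, the same text travels as a MIME encoded word.
print(Header(text, "utf-8").encode())   # =?utf-8?b?...?=

# Both body encodings round-trip losslessly.
assert quopri.decodestring(qp) == utf8
assert base64.b64decode(b64) == utf8
```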

Both plain text and HTML are used to convey e-mail. While text is certain to be read by all users without problems, there is a perception that HTML-based e-mail has a higher aesthetic value. Advantages of HTML include the ability to include inline links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlines and italics, and change font styles. HTML e-mail messages often include an automatically generated plain text copy as well, for compatibility. Disadvantages include the increased size of the message, privacy concerns about web bugs, and the fact that HTML e-mail can be a vector for phishing attacks and the spread of malicious software.
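A message carrying both forms is a MIME multipart/alternative; a minimal sketch using Python's standard library (the addresses and wording are invented):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.com"
msg["Subject"] = "Quarterly report"

# Plain-text part: the fallback every client can display.
msg.set_content("The numbers are up this quarter.\n")
# HTML alternative: clients that can render HTML prefer the
# last alternative listed.
msg.add_alternative(
    "<p>The numbers are <b>up</b> this quarter.</p>", subtype="html"
)

print(msg.get_content_type())      # multipart/alternative
for part in msg.iter_parts():
    print(part.get_content_type())
```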

Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents. Users can download their messages from servers with standard protocols such as the POP or IMAP protocols, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Lotus Notes or Microsoft Exchange Servers.

Mail can be stored either on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent e-mail clients use their own proprietary format and require conversion software to transfer e-mail between them.
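Both standard formats are supported by Python's mailbox module; a sketch using temporary directories and an invented message:

```python
import mailbox
import os
import tempfile
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["Subject"] = "hello"
msg.set_content("Hi.\n")

tmp = tempfile.mkdtemp()

# mbox: every message concatenated into one file, each introduced
# by a "From " separator line.
mb = mailbox.mbox(os.path.join(tmp, "inbox.mbox"))
mb.add(msg)
mb.flush()

# Maildir: one file per message, spread over tmp/new/cur
# subdirectories, so delivery needs no file locking.
md = mailbox.Maildir(os.path.join(tmp, "Maildir"))
md.add(msg)

print(len(mb), len(md))            # 1 1
```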

When a message cannot be delivered, the recipient MTA must send a bounce message back to the sender, indicating the problem.

Most, but not all, e-mail clients save individual messages as separate files, or allow users to do so. Different applications save e-mail files with different filename extensions.

People have changed the way they communicate in numerous ways over the last 50 years, and e-mail is certainly one of them. Traditionally, face-to-face social interaction in the local community was the basis of communication. Today, face-to-face meetings are no longer the primary way to communicate: one can use a landline telephone or any number of computer-mediated channels such as e-mail.

Research has shown that people actively use e-mail to maintain core social networks, particularly when their contacts live at a distance. However, contrary to previous research, the results suggest that increases in Internet usage are associated with decreases in other modes of communication, with proficiency in Internet and e-mail use serving as a mediating factor in this relationship.

Flaming occurs when one person sends an angry and/or antagonistic message. Flaming is assumed to be more common today because of the ease and impersonality of e-mail communications: confrontations in person or via telephone require direct interaction, where social norms encourage civility, whereas typing a message to another person is an indirect interaction, so civility may be forgotten. Flaming is generally looked down upon by internet communities as it is considered rude and non-productive.

Also known as "e-mail fatigue", e-mail bankruptcy is when a user ignores a large number of e-mail messages after falling behind in reading and answering them. Falling behind is often due to information overload and a general sense that there is so much information that it is not possible to read it all. As a solution, people occasionally send a boilerplate message explaining that their inbox is being cleared out. Stanford University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.

E-mail was widely accepted by the business community as the first broad electronic communication medium and was the first "e-revolution" in business communication. E-mail is very simple to understand and, like postal mail, solves two basic problems of communication: logistics and synchronization (see below). LAN-based e-mail is also an emerging form of business usage: it not only allows users to download mail when offline, it also lets a small business give multiple users their own e-mail IDs over a single e-mail connection.

Much of the business world relies upon communications between people who are not physically in the same building, area or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. E-mail provides a way to exchange information between two or more people with no set-up costs and that is generally far less expensive than physical meetings or phone calls.

With real-time communication by meetings or phone calls, participants have to work on the same schedule, and each participant must spend the same amount of time in the meeting or on the call as everyone else. E-mail allows asynchrony: each participant can decide when, and how much time, to spend dealing with any associated information.

Information in context (as in a newspaper) is much easier and faster to understand than unedited and sometimes unrelated fragments of information. Communicating in context can only be achieved when both parties have a full understanding of the context and issue in question.

Despite its disadvantages, e-mail has become the most widely used medium of communication within the business world.

A December 2007 New York Times blog post described e-mail as "a $650 Billion Drag on the Economy", and the New York Times reported in April 2008 that "E-MAIL has become the bane of some people’s professional lives" due to information overload, yet "none of the current wave of high-profile Internet start-ups focused on e-mail really eliminates the problem of e-mail overload because none helps us prepare replies".

Technology investors reflect similar concerns.

The usefulness of e-mail is being threatened by four phenomena: e-mail bombardment, spamming, phishing, and e-mail worms.

Spamming is unsolicited commercial (or bulk) e-mail. Because of the very low cost of sending e-mail, spammers can send hundreds of millions of e-mail messages each day over an inexpensive Internet connection. Hundreds of active spammers sending this volume of mail results in information overload for many computer users who receive voluminous unsolicited e-mail each day.

E-mail worms use e-mail as a way of replicating themselves into vulnerable computers. Although the first e-mail worm affected UNIX computers, the problem is most common today on the more popular Microsoft Windows operating system.

The combination of spam and worm programs results in users receiving a constant drizzle of junk e-mail, which reduces the usefulness of e-mail as a practical tool.

A number of anti-spam techniques mitigate the impact of spam. In the United States, Congress has also passed a law, the CAN-SPAM Act of 2003, attempting to regulate such e-mail. Australia also has very strict spam laws restricting the sending of spam from an Australian ISP, but its impact has been minimal since most spam comes from regimes that seem reluctant to regulate the sending of spam.

E-mail spoofing is a kind of forgery: messages appear to come from a known sender but are actually sent by someone else. Spoofing involves forging the e-mail headers, altering fields such as "From" so that the message misrepresents its origin.

E-mail bombing is sending a huge volume of e-mail to one address, overwhelming the victim's e-mail account or mail server. An easy way of achieving this is to subscribe the victim's e-mail address to a large number of mailing lists.

There are cryptography applications that can serve as a remedy to one or more of the above. For example, Virtual Private Networks or the Tor anonymity network can be used to encrypt traffic from the user's machine to a safer network, while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server.

Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this.

Finally, attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses.

The original SMTP mail service provides limited mechanisms for tracking a sent message, and none for verifying that it has been delivered or read. It requires that each mail server must either deliver it onward or return a failure notice ("bounce message"), but both software bugs and system failures can cause messages to be lost. To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production.

The US Government has been involved in e-mail in several different ways.

Starting in 1977, the US Postal Service (USPS) recognized that electronic mail and electronic transactions posed a significant threat to First Class mail volumes and revenue. Therefore, the USPS initiated an experimental e-mail service known as E-COM. Electronic messages would be transmitted to a post office, printed out, and delivered in hard copy form. In order to take advantage of the service, an individual had to transmit at least 200 messages. The delivery time of the messages was the same as First Class mail and cost 26 cents. The service was said to be subsidized and apparently USPS lost substantial money on the experiment. Both the Postal Regulatory Commission and the Federal Communications Commission opposed E-COM. The FCC concluded that E-COM constituted common carriage under its jurisdiction and the USPS would have to file a tariff. Three years after initiating the service, USPS canceled E-COM and attempted to sell it off.

Early in the history of the ARPANET, there were multiple e-mail clients with varying, and at times incompatible, formats. For example, in the Multics system the "@" sign meant "kill line", and anything after it would be ignored. The Department of Defense's DARPA desired uniformity and interoperability for e-mail and therefore funded efforts to drive towards unified, interoperable standards. This led to David Crocker, John Vittal, Kenneth Pogran, and Austin Henderson publishing RFC 733, "Standard for the Format of ARPA Network Text Message" (Nov. 21, 1977), which was apparently not effective. In 1979, a meeting was held at BBN to resolve incompatibility issues. Jon Postel recounted the meeting in RFC 808, "Summary of Computer Mail Services Meeting Held at BBN on 10 January 1979" (March 1, 1982), which includes an appendix listing the varying e-mail systems at the time. This, in turn, led to the release of David Crocker's RFC 822, "Standard for the Format of ARPA Internet Text Messages" (Aug. 13, 1982).

The National Science Foundation took over operations of the ARPANET and Internet from the Department of Defense and initiated NSFNET, a new backbone for the network. Part of the NSFNET acceptable use policy (AUP) was that no commercial traffic would be permitted. In 1988, Vint Cerf arranged for an interconnection of MCI Mail with NSFNET on an experimental basis. The following year CompuServe e-mail interconnected with NSFNET. Within a few years the commercial traffic restriction was removed from NSFNET's AUP, and NSFNET was privatized.

In the late 1990s, the Federal Trade Commission grew concerned with fraud conducted over e-mail and initiated a series of proceedings on spam, fraud, and phishing. In 2004, FTC jurisdiction over spam was codified into law in the form of the CAN-SPAM Act. Several other US federal agencies have also exercised jurisdiction, including the Department of Justice and the Secret Service.

Source: Wikipedia