The Internet - A Tutorial

Dr. Jon Crowcroft, UCL Computer Science Department, London


What is the Internet?

The Internet is the Information Superhighway; it has become a cliché to say so. However, before embarking on a drive around the World Wide Web, it is important to understand how the roads themselves work (and to understand who pays road tax!).

The Internet is undergoing a stormy adolescence as it moves from being a playground for academics into being a commercial service. With more than 50% of the network commercially provided, and more than 50% of the subscribers being businesses, the Internet is now a very different place from what it was in the 1980s. Growth has occurred most sharply in Europe, and in the commercial sector, in the last two years. Two things have made this growth possible. The first is host software (applications and support). These form what one might call the "Information Services". The second is network support: the access and transfer technology used to move the information around.

Hosts, Networks and Routers

The components that make up the Internet are threefold. There are hosts: the workstations, PCs, servers, and mainframes that we run our applications on. There are networks: the Local Area Nets (Ethernets), point-to-point leased lines, and dial-up (telephone, ISDN, X.25) links that carry traffic between one computer and another. And there are routers. These glue together all the different network technologies to provide a ubiquitous service to deliver packets (a packet is a small unit of data convenient for computers to bundle up data for sending and receiving). Routers are usually just special-purpose computers which are good at talking to network links. Some people use general-purpose machines as low-performance (low-cost) routers - e.g. PCs or UNIX boxes with multiple LAN cards, serial line cards or modems.

Every computer (host or router) in a well run part of the Internet has a Name. The name is usually given to a device by its owner. Internet names are actually hierarchical, and look rather like postal addresses. Jon's computer's name is waffle.cs.ucl.ac.uk. We allocated it the name waffle. The department we work in called itself CS. The university it is in called itself UCL. The academic community called themselves ac, and the Americans called us the UK. The name tells me what something is organisationally. The Internet calls this the Domain Name System.

Everything in any part of the Internet that wants to be reached must have an address. The address tells the computers in the Internet (hosts and routers) where something is topologically. Thus the address is also hierarchical. My computer's address is 128.16.8.88. We asked the IANA (Internet Assigned Numbers Authority) for a network number. The task of allocating numbers to sites in the Internet has now become so vast that it is delegated to a number of organisations around the world - ask your Internet provider where they get their numbers from if you are interested. We were given the number 128.16.x.y. We could fill in the x and y how we liked, to number the computers on our network. We divided our computers into groups on different LAN segments, and numbered the segments 1-255 (x), and then the hosts 1-255 (y) on each segment. When your organisation asks for a number for its net, it will be asked how many computers it has, and assigned a network number big enough to accommodate that number of computers. Nowadays, if you have a large network, you will be given a number of numbers!
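
As a small illustration (a sketch added here, not part of the original example), this is how the address 128.16.8.88 splits into its network, segment and host parts, using the simple layout just described:

    import ipaddress

    raw = int(ipaddress.IPv4Address("128.16.8.88"))

    net_hi = raw >> 24            # 128 \ the network number IANA handed out
    net_lo = (raw >> 16) & 0xFF   # 16  /
    segment = (raw >> 8) & 0xFF   # 8   - the LAN segment (x)
    host = raw & 0xFF             # 88  - the host on that segment (y)

    print(f"network {net_hi}.{net_lo}, segment {segment}, host {host}")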

Everything in the Internet must be reachable. The route to a host will traverse one or more networks. The easiest way to picture a route is by thinking of how a letter to a friend in a foreign country gets there.

You post the letter in a postbox. It is picked up by a postman (LAN), and taken to a sorting office (router). There, the sorter looks at the address, sees that the letter is for another country, and sends it to the sorting office for international mail. This then carries out a similar procedure. And so on, until the letter gets to its destination. If the letter was for the same 'network', it would simply be delivered locally. Notice that the routers (sorting offices) don't have to know all the details about everywhere, just the next hop to go to. Notice also that the routers (sorting offices) have to consult tables of where to go next (e.g. the international sorting office). Routers chatter to each other all the time, figuring out the best (or even just usable) routes to places.

The way to picture this is to imagine a road system with a person standing at every intersection who is working for the Road Observance Brigade. This person (Rob) reads the names of the roads meeting at the intersection, and writes them down on a card, with the number 0 after each name. Every few minutes, Rob holds up the card to any neighbour standing down the road at the next intersection. If they are doing the same, Rob writes down their list of names, but adds 1 to the numbers read off the other card. After a while, Rob is telling people about roads several intersections away! Of course, Rob might end up with two ways to get somewhere. Then, he crosses out the one with the larger number.
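
Rob is doing, by hand, what routing protocols call a distance-vector exchange. A minimal sketch of that exchange (illustrative only; the road names are made up):

    def merge(my_table, neighbour_table):
        """Fold a neighbour's advertised card into ours, one hop further away."""
        for road, hops in neighbour_table.items():
            if road not in my_table or hops + 1 < my_table[road]:
                my_table[road] = hops + 1   # cross out the larger number

    rob = {"High St": 0, "Mill Lane": 0}            # roads at Rob's corner
    neighbour = {"Mill Lane": 0, "Station Rd": 0}   # the next intersection's card

    merge(rob, neighbour)
    print(rob)   # {'High St': 0, 'Mill Lane': 0, 'Station Rd': 1}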

Performance

The Internet today moves packets around without due regard to any special priorities. Once a packet starts to be transmitted, it moves at the speed of the wire on the next hop (LAN, point-to-point link, dial-up or what have you). But if there are a lot of users, packets get held up inside routers (like letters in sorting offices at Christmas). Because the Internet is designed to be interactive, rather than having the slow turnaround of mail (even electronic mail), routers generally do not hang on to packets for very long. Instead, they just "drop them on the floor" when things get too busy!

This then means that hosts have to deal with a network that loses packets. Hosts generally have conversations that last a little longer than a single packet - at the least, a packet in each direction, but usually, several in each direction.

In fact, it is worse than that. The network can automatically decide to change the routes it is using because of a problem somewhere. Then it is possible for a new route to appear that is better. Suddenly, all packets will follow the new route. But if there were already some packets half way along the old route, they may get there after some of the later packets (a bit like people driving to a party, with some smart late driver taking a short cut and overtaking the earlier leavers).

So a host has to be prepared to put up with out of order packets, as well as lost packets.

Protocols

All this communication is done using standard "languages" to exchange blocks of data in packets, simply by putting 'envelopes' or wrappers called "headers" around the data.

The work of routing and addressing is done by the Internet Protocol, or IP. The work of host communication is done by the Transmission Control Protocol, or TCP. TCP/IP is often used as the name for the Internet protocols, including some of the higher level information services. TCP does all the work to solve the problems of packet loss, corruption and reordering that the IP layer may have introduced, through a number of End to End reliability and error recovery mechanisms. If you like, you can think of IP as a bucket brigade, and TCP as a drainpipe.

So if we want to send a block of data, we put a TCP header on it to protect it from wayward network events, and then we put an IP header on it to get it routed to the right place.
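
A toy sketch of that wrapping (the header formats here are invented purely for illustration; real TCP and IP headers carry many more fields):

    def tcp_wrap(data: bytes, src_port: int, dst_port: int) -> bytes:
        # a toy "TCP header": just source and destination port numbers
        header = src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big")
        return header + data

    def ip_wrap(segment: bytes, src: str, dst: str) -> bytes:
        # a toy "IP header": the two addresses, then the TCP segment
        header = (src + ">" + dst + "|").encode()
        return header + segment

    packet = ip_wrap(tcp_wrap(b"hello", 1025, 80), "128.16.8.88", "192.0.2.1")
    print(packet)   # IP header, then TCP header, then the data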

Hosts and Applications

The emergence of some simple APIs (Application Programming Interfaces) and GUIs (Graphical User Interfaces) has led to the rapid growth of new user-friendly applications in the Internet. Information services provided by archive and Web servers are accessible through WWW and Mosaic, Archie and Prospero, gopher, and WAIS and Z39.50.

Mosaic is the most popular client program for accessing the Web servers around the world. A typical Mosaic screen shows a window with a map of the UK, and two other pictures, both GIFs (Graphics Interchange Format) of photographs. The top right is a picture of UCL, where we work. The lower right is a satellite image stored on Edinburgh University Computing Service's Web server every hour, from a live satellite weather feed, which comes from one of the many weather satellites that send out unencrypted images periodically over the ether. Many sites simply point a satellite dish in the right direction, and soak up the data, for later dissemination on the terrestrial Internet.

The value of the service is now clearly becoming the value of the information, rather than of the communications channel. This means that some mechanism for charging and auditing access to information is a new requirement. This must be a secure mechanism, to assure people that charges (or audit trails) are correct. From all of these interfaces, having found a piece of information, we can retrieve it, print it, or mail it to other people.

Information Servers

We are familiar with ways to get information in the non-network world. We can go to a library, or buy a book in a bookshop. We can telephone companies or individuals by looking up their names in a phonebook. We can sit around and watch TV or listen to the radio.

In the networked world, there are a number of ways of carrying out the same kinds of activities, using programs that run on a PC or workstation to access information servers in the network that hopefully hold the knowledge we seek.

Information on the net used to be about as hard to retrieve as computers were to use. Nowadays it is usually pretty easy, if you can find the information. First you have to know what kind of server holds it (it's a bit like knowing whether a written item is in a reference book, a novel, a magazine, a newspaper or a shopping list).

Different kinds of information services have different models of use and different ways they hold information. Almost all fit into the ``Client/Server'' model that has become widespread in distributed computing.

Client/Server communication is quite easy to understand in terms of roles, and is very closely analogous to what happens in a shopping situation. An assistant in a shop awaits a customer. The assistant doesn't know in advance which customer might arrive (or even how many - the store manager is supposed to make sure that enough assistants are employed to just about cope with the maximum number of shoppers arriving at any one time). A server on the network is typically a dedicated computer that runs a program called the server. This awaits requests from the network, according to some specified protocol, and serves them, one or more at a time, without regard for who they come from.

There are a variety of refinements of this model, such as requiring authentication or registration with the server before other kinds of transactions can be undertaken, but almost all the basic systems on the Internet work like this for now.
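
A minimal sketch of this pattern in code (everything runs on one machine, and the operating system picks the port, just for illustration):

    import socket
    import threading

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen()

    def serve_one():
        conn, _ = srv.accept()             # wait for any client, whoever it is
        with conn:
            conn.sendall(conn.recv(1024))  # serve the request (echo it back)

    threading.Thread(target=serve_one).start()

    with socket.socket() as c:             # the "customer"
        c.connect(srv.getsockname())
        c.sendall(b"one request, please")
        print(c.recv(1024))                # b'one request, please'
    srv.close()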

We can categorise information servers along a number of different axes:

Synchronous versus Asynchronous

Synchronous servers respond as you type/click at your computer. Asynchronous ones save up their answers and return them some time later. Sometimes, your system will actually not even send the request for information to the server until you have finished composing the whole thing, or even later, to save time (and possibly money, since night-time network access may be cheaper and/or faster).

Browsable versus Searchable

Some servers allow you to move from one piece of information to another. Typically, the managers/keepers have structured the information with links, or else the information is hierarchical (like most large organisations' job structures or payrolls). Other servers allow you to search for particular items by giving keywords. Usually, this means that the managers of the information have created indexes, although sometimes it just means that the server is running on a very fast computer that can search all through the data. This latter approach is becoming increasingly impossible as the quantity of information kept on-line grows to massive proportions.

Distributed versus Replicated

Some information servers hold only the information entered at their site, and maybe have links for the user to follow to other servers at other sites. Other systems copy the information around at quiet times, so that all servers are replicas of each other. This means that it doesn't matter which server you access in the latter case, so you might as well go for the nearest or cheapest (likely to be both).

Transport Protocols

The Internet provides a way to get packets (convenient units of data for computers and routers) from any host computer to one or more other host computers. However, the network protocols make no guarantees about delivering a packet. In fact, a packet may get lost, may arrive after others sent later or may be distorted. A packet might even arrive that simply wasn't sent!

To counter this, host computers incorporate transport protocols, which use the Internet to carry the application information around, but also send a variety of other information to provide checking and correction or recovery from such errors.

There is a spectrum of complexity in transport protocols, depending on the application requirements. The three representative ones are:

  • The User Datagram Protocol, or UDP

UDP is a ``send and forget'' protocol. It provides just enough control information at the start of each packet to tell what application is running, and to check if the packet got distorted en route. UDP is typically used by applications that have no requirement for an answer, and don't really care if the other end received the message. A typical example of this might be a server that announces the time on the network, unsolicited. (There is a short sketch contrasting UDP and TCP after this list.)

  • The Reliable Data Protocol, or RDP

RDP is a generic name for a collection of protocols - the most relevant here is the one used by Prospero. RDP type protocols are similar to TCP, but with reduced complexity at the start and end of a conversation, and with good support for sequences of exchanges of chunks of data, often known as Remote Procedure Calls, or sometimes incorrectly called transactions.

  • The Transmission Control Protocol, or TCP

TCP is the protocol module that provides reliability and safety. TCP is designed to cope with the whole gamut of network failures, and adapts elegantly to the available resources in the network. It even tries to be fair to all users.
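
As promised, a minimal sketch of how the two feel in the standard socket API (addresses and port are placeholders; nothing needs to be listening for the UDP datagram to be sent):

    import socket

    # UDP: one datagram, no connection, no guarantee anyone is listening.
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.sendto(b"the time is 12:00", ("127.0.0.1", 9999))   # send and forget
    u.close()

    # TCP: a conversation must be set up first; the stack then handles loss,
    # reordering and flow control. Nothing is listening on this port here,
    # so the connection attempt itself fails - UDP above never noticed.
    t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        t.connect(("127.0.0.1", 9999))
        t.sendall(b"hello")
    except OSError as e:
        print("TCP noticed:", e)
    finally:
        t.close()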

File Transfer Protocol - FTP

The original information service was called FTP, which stands for File Transfer Protocol. This is probably the world's least friendly information service. It really operates at the level of machine-to-machine communication, and is used by more modern client programs simply as a way of getting something from a server. However, it is still used by people as a sort of lowest common denominator means of access to a file on a remote computer.

Internet FTP is interactive or synchronous, which means that you formulate your commands as you type at the terminal. FTP maintains a Control Connection between the client and server, and sends commands over this in ASCII, or ordinary text!

When data is going to move, the client and server open a Data Connection. The Data connection can keep going whilst the user issues further commands.
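
A minimal sketch using a standard FTP client library (the host name is a placeholder for any server that allows anonymous FTP):

    from ftplib import FTP

    ftp = FTP("ftp.example.org")    # placeholder host; opens the Control Connection
    ftp.login()                     # anonymous login
    ftp.retrlines("LIST")           # each transfer opens a separate Data Connection
    ftp.quit()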

Electronic Mail and Information-Servers

E-mail is either the saviour of modern society, and trees, or the devil on the icing on the cake of technocracy. Electronic mail, at its simplest, is a replacement for paper letters (``snail'' mail) or the facsimile/fax. Sending e-mail is easy, if you know the address of the person you want to get it to. You type in the message using whatever facility you are familiar with, and then submit it to the mail system (put it in the postmaster's postbag!). Then a series of automatic systems (message handlers) will sort and carry it to the destination, just like post offices and sorting offices do with paper mail.

The protocol used for electronic mail in the Internet is called the Simple Mail Transfer Protocol, or SMTP. The model is of Message Handling Systems and User Agents all talking to each other. Both use the same protocol. The user program invokes SMTP to send mail to a receiver.

The receiver may be a mail relay or actual recipient system.

SMTP mail addresses look like this: [email protected]. The general form of such an address is: User @ Domain.

The Domain is as defined in the Domain Name System, for a host implementing SMTP. The DNS name is translated to an IP address. The sending system merely opens a TCP connection to the site and then talks the SMTP protocol.
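
A minimal sketch of what "talks the SMTP protocol" means in practice, using a standard library client (the host and addresses are placeholders):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"         # User @ Domain
    msg["To"] = "bob@example.net"
    msg["Subject"] = "hello"
    msg.set_content("Delivered by talking SMTP over a TCP connection.")

    # The placeholder host would be the local mail relay or the recipient system.
    with smtplib.SMTP("mail.example.org") as s:
        s.send_message(msg)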

Mail Lists

Some mail system managers use info-servers to maintain mail lists. Mail lists are ways of sending a message at one go to groups of people possessed of a common interest or purpose. They resemble, but are completely different in implementation from, Bulletin Boards, of which more below. Mail lists are very useful when used discriminatingly. On the other hand, because it is as easy to send to a list as to an individual, users sometimes propagate junk mail to large groups of people. The most common piece of junk mail is to do with list management (e.g. ``please add me to this list'' or ``please remove me from this list'', which should be directed to list managers, usually as ``listname-request''), but other human errors include sending irrelevant or offensive information.

Bulletin Boards

On-line Bulletin Boards are analogous to the pinboards we are all used to from offices, schools and so on. The difference between a bulletin board and a mail list is fundamental:

Mail arrives in an individual's mailbox, and the individual's attention is drawn to it.

Bulletins arrive on a bulletin board, and users decide whether or when they want to read that bulletin board, if at all.

Less fundamental is the protocol. Bulletin boards are effectively a single mailbox. Thus the overhead of delivery in terms of computer storage is much lower for a bulletin board than a mail list.

Archie

Archive servers appeared in the mid 1980s. Initially, they were a logical extension of FTP servers. They provide indexed repositories of files for retrieval through a simple protocol called Prospero.

Archive servers had been in place manually for some time, simply as well maintained FTP servers. The first attempt at automating coordination was to use a simple protocol. This involved exchanging a recursive directory listing of all the files present on a given server with all other known servers, periodically. Thereafter, access to a given server for a file present on another could have two results: Either the client could be redirected to the right server, or the current server could fetch it, and then return it to the client.

These two approaches are called ``referral'' and ``chaining'' in some communities, or ``iteration'' and ``recursion'' in others. See below for further discussion of these ideas.

Whois/Finger/Netfind

Whois. Whois is one of the oldest and simplest information servers in the Internet. Whois allows you to look up someone's e-mail address and other information that a user may be happy to give away, simply by knowing their name.

Originally, it was a purely central server run on the ARPANET for all managers/contacts of networks attached to the ARPANET for the DCA (Defence Communications Agency). [RFC 954]

Basically, a whois server runs on TCP port 43, and awaits simple command lines in ASCII text, ending with CRLF (CRLF is "Carriage Return" (ASCII character 13) followed by "Line Feed" (ASCII character 10)). The server simply looks up the command line or ``name specification'' in a file (perhaps using fuzzy or soundex matching) and responds, possibly with multiple matches. Whois is used for keeping organisation contact information.
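
A minimal sketch of that exchange (whois.iana.org is one public whois server; any other would do):

    import socket

    with socket.create_connection(("whois.iana.org", 43)) as s:
        s.sendall(b"example.com\r\n")   # the ``name specification'', CRLF-terminated
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk

    print(reply.decode(errors="replace"))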

Finger. The Finger protocol derives from RFC 742. A Finger server runs on UDP or TCP port 79. It expects either a null string, in which case a list of all people using the system is returned, or, if a string is given, any available information concerning that person is returned (whether they are logged in or not).

Some people find the idea of Whois and Finger alarmingly insecure. One particular scare in the Internet concerning security was due to a simple, but extremely effective gaping bug in the most widely used implementation of the finger server, which may be why it scares some people. This bug is long since fixed. Basically, the finger daemon had storage for receiving a limited request/command, but could actually be handed a larger amount of information from the transport protocol. The extra information would overwrite the stack of the executing finger server program. An ingenious hacker could exploit this by sending a finger command carefully constructed with executable code that carried out his desired misdemeanour. The problem was exacerbated on many systems where the finger server ran as a special privileged process (root!), for no particular reason other than laziness of the designers of the default configuration. Thus the wily hacker gained access to arbitrary rights on the system.

DNS. The Domain Name System is really designed as a Network Information Service for internal use by tools, rather than directly by users. However, the names it holds appear in location information currently used by many services, and are also the basis for electronic mail routing.

The Domain Name System model is that all objects on the net have a name, and that the name should be the one given by the people responsible for the object. However, this name is only part of the full way to specify the object. The fully distinguished name is part of a hierarchy of names. They are written rather like a postal address, for example: swan.computer-lab.cambridge.ac.uk

The ``top level'' is a country code (e.g. ``.uk'') or US-specific (e.g. ``.com''). Any organisation owns its level and the names of the levels below. Any string is usable. Aliases are allowed - names more friendly than addresses.

Any owner of a name space must run a server. The owning site must then inform sites at the ``level'' above where their server is. At the same time, they tell their own server where the level above is.

Applications (FTP, Mail, TELNET, Mosaic, etc.) use a library function to call the resolver. They give the resolver function a name, and it sends a request to the local site's DNS server. The DNS server responds with either:

  • The answer, either from its own tables, or from another server it asked on the user's behalf. The latter is called chaining.
  • A site who can answer. This is called referral.

The Domain Name System holds general purpose Resource Records.
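
From a program, calling the resolver looks like this (a minimal sketch using the standard library; the host name is just an example):

    import socket

    # Hand the resolver a name; it asks the local DNS server and returns an address.
    print(socket.gethostbyname("www.ucl.ac.uk"))   # prints the IP address the name maps to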

Wide Area Information Server - WAIS

The Wide Area Information Server idea is based on a search model of information, rather than a browse one. Sites that run WAIS servers have created a collection of indexed data that can then be retrieved by searches on these indexes. The access protocol to WAIS servers is based on the standard developed for library searching by ANSI (American National Standards Institute) with the unlikely title, Z39.50 (a.k.a. Information Retrieval Service and Protocol Standard).

WAIS has four parts (like most information services except the richer WWW): the client, the server, the database, and the protocol.

Client programs (e.g. the X Windows client xwaisq) construct queries, and send them using the protocol to the appropriate server. The server responds, and includes a 'relevance' measure for the results of the search match to the query.

The actual operation of the protocol is quite complex, as it permits exchanges to be broken into separate parts. WAIS permits retrieval of bibliographic data, as well as contents (including images).

A search request consists of seed words, or keys if you like, typed by a user into the client, together with a list of documents (identified by a unique global ID). The response is quite complex and includes a list of records, including the following fields:

  • [Headline] - basically a title/description
  • [Rank] - relative relevance of this document
  • [Formats] - list of formats available (text/postscript etc.)
  • [Document ID]
  • [Length]

Gopher

Gopher is a service that listens for TCP connections on port 70. It responds to trivial string requests from clients with answers, each preceded by a single character identifying the type, a name, and a selector. The client then chooses what to do, and how to display any actual data returned.
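
A minimal sketch of that exchange (gopher.floodgap.com is one long-running public Gopher server):

    import socket

    with socket.create_connection(("gopher.floodgap.com", 70)) as s:
        s.sendall(b"\r\n")                   # empty selector asks for the root menu
        data = b""
        while chunk := s.recv(4096):
            data += chunk

    for line in data.decode(errors="replace").splitlines()[:5]:
        print(line)   # first character of each menu line is the item type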

World Wide Web

The World Wide Web makes all these previous services look like stone tablets and smoke signals. In fact, the Web is better than that! It can read stone tablets and send smoke signals too!

The World Wide Web service is made up of several components. Client programs (e.g. Mosaic, Lynx etc.) access servers (e.g. HTTP daemons) using the protocol HTTP. Servers hold data written in a language called HTML, the HyperText Markup Language. As indicated by its name, it is a language (in other words it consists of keywords and a grammar for using them) for marking up text that is hyper! Hyper comes from the Greek prefix meaning above or over, and generally means some additional functionality is present compared with simple text. In this case, that additional functionality is in two forms: graphics or other media, and links or references to other pieces of (hyper)text. These links are another component of the WWW, called Uniform Resource Locators. The pages in the World Wide Web are held in HTML format, and delivered from WWW servers to clients in this form, albeit wrapped in MIME (Multipurpose Internet Mail Extensions) and conveyed by HTTP, the HyperText Transfer Protocol.
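
A minimal sketch of the client's side of HTTP, using a standard library client (the server name is a placeholder):

    import http.client

    conn = http.client.HTTPConnection("www.example.org")   # placeholder server
    conn.request("GET", "/")                   # HTTP request for a page
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Content-Type"))     # e.g. 200 text/html
    html = resp.read()                         # the HTML, delivered over HTTP
    conn.close()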

A Note on Stateless Servers

Almost all the Information Servers above are described as stateless.

State is what networking people call memory. One of the important design principles in the Internet has always been to minimise the number of places that need to keep track of who is doing what.

In the case of stateless information servers this means that they do not keep track of which clients are accessing them. In other words, between one access and the next, the server and protocol are constructed in such a way that they do not care who, why, how, when or where the next access comes from.

This is essential to the reliability of the server, and to making such systems work in very large scale networks such as the Internet with potentially huge numbers of clients: if the server did depend on a client, then any client failure, or network failure would leave the server in the lurch, possibly not able to continue, or else serving other clients with reduced resources.

Security, Performance Guarantees and Billing

The growth in CPU, memory and storage performance/price has made possible all these new applications. The reduction in connectivity costs is leading to the use of these new applications. The corresponding increase in network functionality has yet to happen. The Internet is experiencing a number of problems due to the growth in its size, and in the breadth of the community using it. These include:

Scale

The number of systems is exceeding the available numbers for addressing systems in the Internet - a problem similar to the one everyone in the UK is accustomed to, from time to time, with telephone numbers. The way these numbers are allocated is also leading to problems with the amount of memory in the router boxes that hold the Internet together. Currently, these need to hold the full list of every site in the Internet. A more hierarchical approach (like the phone system or the postal system) will fix this.

Security

Security is really not a question that is relevant when talking about the Internet itself. What needs to be secured are hosts, and information. However, the network must provide relevant hooks for security to be implemented.

Billing

Currently, the charging model in most of the Internet is a leasing one. Bills are for the speed of access, not the amount of usage. However, many believe that, at least during busy periods, or else for priority service, billing will be necessary as a negative feedback mechanism. This will also require security, so that the right people can be billed legally. There are some who believe that every type of Internet access should be billed for on a usage basis. This is problematic, and in fact it has been shown that this does not maximise profit. Only the user herself knows how "urgent" a file transfer is. With very many types of data around, the net can only charge for the ones it really knows about, like long-distance voice or high quality video, say.

Guarantees

The Internet has not historically provided guarantees of service. Many providers have done so, but typically by over-provisioning the internal resources of their networks. In the long run, this may prove viable, but at least for the next few years, we will need mechanisms to control guarantees, especially of timeliness of delivery of information. For example many information providers such as the Share Trading and News businesses value their commodity by time.

Performance Parameters

There are three key parameters to worry about in the network, and these are important if you intend using a part of the Internet to deliver commercial or dependable WWW services:

  • Errors

Transmission technology is NEVER perfect. Even glass fiber has occasional errors. These are when what is received is not what was sent. (Imagine you send a letter by airmail, and the plane crashes at sea, and the postbags are recovered but water damaged - it happens!).

  • Delay or Latency

A network is not infinitely fast. In fact, now that we are building a global society, the speed of light limit that Einstein was so keen on is rearing its pretty head. Also, busy networks run slower. Picture the difference between a space shuttle and a canal boat. A canal boat runs at 4 miles an hour, while the shuttle runs at around 7 miles a second. So the time to get there and back is much less on the shuttle. However, there is a limit.

  • Throughput.

Different networks are built for different amounts of traffic. The canal system above can carry around 200 tons per boat, while the shuttle can only carry around 1 ton. So while you may have lower latency on some networks, you may also have lower throughput. Normally, though, latency and throughput are largely unconnected.

Typically, throughput is a feature of how much you pay, while latency is a feature of the distance you are communicating over, plus the busyness of the net.
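
A small worked example makes the point that the two are separate axes (the figures are round illustrative numbers, not measurements):

    # Light in optical fibre covers roughly 200,000 km per second, so
    # distance alone sets a floor on latency.
    distance_km = 10_000                       # say, London to California
    rtt_s = 2 * distance_km / 200_000
    print(f"round-trip latency floor: {rtt_s * 1000:.0f} ms")   # ~100 ms

    # Throughput is the other axis: on a 2 Mbit/s line, a 1 Mbyte file
    # takes about 4 seconds, regardless of that latency floor.
    print(f"transfer time: {8 * 1_000_000 / 2_000_000:.0f} s")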

Internet Service Model Futures

The Internet provides a best effort service. When you want to send data, you just send it. You do not have to know that there is a receiver ready, or that the path exists between you and a potential receiver, or that there are adequate resources along the path for your data. You do not have to give a credit card number or order number so you can be billed. You do not have to check the wire to make sure there are no eavesdroppers.

Some people are uncomfortable with this model. They point out that this makes it hard to carry traffic that needs certain kinds of performance guarantees, or to make communication secure, or to bill people. These three aspects of the Internet are intimately connected, and in this article, we examine how research at UCL and elsewhere is leading to a new Internet model which accommodates them.

Best Effort and Charging

The current model for charging for traffic in the Internet is that sites connect via some "point of presence" of a provider, and pay a flat fee per month according to the speed of the line they attach with, whether they use it to capacity or not.

For existing application traffic and capacity, this model is perfect. Most sites wish to exchange data whose value is not radically increased by being delivered immediately. For instance, when I send electronic mail, or transfer a file, the utility to me is in the exchange.

The network provider maximises their profit by admitting all traffic, and simply providing a fair share. As the speed decreases, my utility decreases, so I am prepared to pay less. But the increase in possible income from the additional users outweighs this. The underlying constant cost of adding an additional user to the Internet is so low, that this is always true.

However, there are other kinds of traffic that this "best effort" model does not suit at all, and we look at those next.

  • Real Time Traffic

The Internet has been used for more than 4 years now to carry audio and video traffic around the world. The problem with this traffic is that it requires guarantees: there is a minimum bandwidth below which audio becomes incompressible, and below which even compressed video is just not usable. For human interaction, there is also a maximum delay above which conversation becomes intolerable.

In the experimental parts of the Internet, we have re-programmed the routers which provide the interconnection to recognise these kinds of traffic and to give it regular service.

There are two aspects to this: First we must meet the minimum traffic guarantee, and this is done by looking at the queues of traffic in the net more frequently for traffic that needs more capacity. This then also means that the delay seen by this traffic is only affected by the amount of other traffic on the network, and the basic transmission time (speed of light, or thereabouts, although around the world, this is still a significant factor. However, it is one we are not at liberty to alter!). As we increase the other traffic, our video or audio experiences increasing delays and variation of delay. So long as this stays within tolerable limits, the receiver can adapt continuously (e.g. in silences in audio or between video frames) and all is well. Meanwhile, any spare capacity carries the old best effort traffic as before.

However, when the total amount of traffic is higher than capacity, things start to break.

At this point there are three views on how to proceed.

Engineer the network so that there is enough capacity. This is feasible only while most people's access speed is limited by the "subscriber loop", or tail circuits, that go to their homes/offices. When we all have fiber to the home/desktop, the potential for drowning the net is alarming. Note, though, that the phone network is currently over-engineered, so for audio capacity we could certainly switch everyone over to the Internet, switch all our phones over to Internet based terminals, and have a flat fee model.

Police the traffic, by asking people who have real time requirements to make a "call setup" like they do with the telephone networks. When the net is full, calls are refused, unless someone is prepared to pay a premium, and incur the wrath of other users by causing them to be cut off!

Simply bill people more as the net gets busier. This model is proposed by economists at Harvard, and is similar to models of charging for road traffic proposed by the Transport Studies group at UCL. We believe it is optimal. Since we have already re-programmed the routers to recognise real-time traffic, we have the ability to charge on the basis of logging this traffic. Note that we can charge differentially, as well. Until we could make the guarantees, we would have had a hard time placing a contract for this; now it is feasible. And we have maintained all the original advantages of the Internet (no call setup, easy rendezvous and so on).
