The network of networks

The Internet: how does it happen?

The amorphous nature of the Internet tends to distract many simple commercial folk trying to get a handle on something that is owned by no-one, and apparently costs nothing to use. This is wholly understandable, of course: when did you last get something for nothing?

Well, of course, Internet connectivity isn’t quite that simple. The Internet is a co-operative collection of networks that are indeed "owned" by various operators who have agreed to co-operate on the broader issues of interconnection, largely thanks to the simplicity of connecting disparate networks together using the now-universal TCP/IP network protocol. It works in much the same way that international postal agreements ensure that the mail passes from the "provider" in one country that took the stamp money to a destination in another country where no fee was paid directly to the delivering authority.

In the UK, the service provided by PIPEX is not "Internet connectivity" per se, but a facility management service for the wide area network (WAN) operated, controlled and managed by PIPEX, that connects into the world-wide Internet. So perhaps it might be an easier concept to grasp if we said that PIPEX was offering "PIPENET", with a connection to the world-wide Internet as an "added bonus".

The postal analogy is quite simple: you don’t post your letters into the great "world-wide" mail system; they are first handled (in the UK) by the Post Office, who filter out those destined for overseas, and route them accordingly. Most of the mail stays in the "closed circuit" of the British Post Office, and gets delivered directly.

A connection to the Internet via PIPEX is supplied at a flat rate: there is no additional charge for the volume of traffic passing along it. And where a permanent leased line is used to connect the subscriber to the PIPEX facility point, there is nothing more to pay by way of connection time.

How it works

So, if two PIPEX (or EUnet) customers send traffic to each other, their traffic does not actually go outside their providers’ respective backbones and onto the world-wide "Internet" at all. This has speed and security benefits. So what you get when you subscribe to the Internet via a service such as PIPEX in the UK is a link to a commercially managed network (generally referred to as the provider's "backbone"), offering assurances about the quality of the service and its availability (generally a minimum of 99.99% availability will be required for commercial comfort), plus a bunch of other features that form a contractual undertaking and responsibility on the part of the service provider to "deliver the mail".

The broader "Internet" cannot make any specific promises about delivery, because it does not exist as an entity as such; but the network operators that comprise the Internet are perfectly able to do so. This is something the recent government consultative report "Information Superhighways" (CCTA) completely failed to grasp.

I quote from section 2.3:

"The [Internet], a collaborative network run by academic and research communities in the US, and now world-wide. It provides relatively low cost access to a wide range of computing facilities. However, the service is not guaranteed, there is no billing system for use, the user interface is unfriendly and investment in the infrastructure is quixotic

There are a number of other potentially serious problems with the "Internet" stemming from the degree to which access is open...". Obviously, either the network operators that comprise the Internet have singularly failed to get the message through to the CCTA that they do indeed offer a "guaranteed" service, or, for some other reason, the CCTA is being deliberately obtuse.

But as most would agree, the "cock-up" theory is generally more applicable to the actions of the government in this country than the "conspiracy" theory. The CCTA document goes on to cite potential uses including "fast food ordering and delivery".

Delivery...?!?

Do they know something we don’t?

Retail Internet

The main public access wide area TCP/IP network operators such as PIPEX like to style themselves as "wholesalers" in the business of selling their managed networking facilities and connectivity. They in turn supply their managed network facility to retailers, who lease a connection to the PIPEX network and sell "dial-up" Internet accounts to individuals. It is then up to the retailer to provide support and service to its subscribers: in effect, the retailer acts as a buffer between the user and the PIPEX network.

Various retailers exist offering a variety of low-cost routes to the Internet, but none can guarantee better availability than the next link up the chain of service provision. Availability of the service depends on there being a free modem connection at the dial-in point of entry to the system. Modem-to-subscriber ratios will vary, but operate on the theory that not all the subscribers will want to dial in at the same time. In fact, if you assume that a regular daily user is going to want 30 minutes of connection time a day, then one line should be able to serve 16 users over an 8-hour working day, according to the law of averages.

Obviously, this is an unlikely proposition where there are 16 users and one line; but these things become more statistically feasible as the numbers pile on: so 100 modems serving 1600 customers is nearly a statistical dead cert to provide all users with 100% access when they want it. Where the Internet and similar packet transport networks are concerned, 100 modems all "live" at once at 14.4kbaud does not mean 1.44Mbps (14.4k*100) of bandwidth consumption. The nature of the TCP/IP network protocol will ration the bandwidth amongst the requests, and serve up the packets "according to availability".
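To make that contention arithmetic concrete, here is a minimal sketch (in Python) using the figures above; the variable names are purely illustrative:

    # Rough contention-ratio arithmetic, using the figures quoted in the text:
    # each subscriber wants 30 minutes on-line per day, spread across an
    # 8-hour working day, so one modem line can (on average) serve 16 users.
    minutes_per_user_per_day = 30
    working_day_minutes = 8 * 60

    users_per_line = working_day_minutes / minutes_per_user_per_day
    print(users_per_line)            # 16.0

    # Scaling up: 100 modems at the same ratio supports 1600 subscribers.
    modems = 100
    print(modems * users_per_line)   # 1600.0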

Assuming that the main network connection is running at 256kbps, if all 100 modems were trying to suck data in at 14.4kbps then (in very approximate terms) each will get 256kbps/100, or 2.56kbps. The lamp on your modem flashes intermittently, rather than staying on continuously. Because packet network connections are essentially "burst" phenomena, it is highly unlikely that all connections will be running at the full demand rate all the time. In practice, retail users might only average out at 15 minutes a day; but whatever actually happens, the service provider is in an excellent position to analyse network use and behaviour, and will quickly get a true picture and analysis of the usage demand and supply options.
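The back-of-the-envelope division above can be sketched the same way (again using the article's figures of a 256kbps backbone link and 100 active 14.4kbps modems):

    # If every modem demands data at once, the backbone link is shared out,
    # so each connection sees only a fraction of its modem's 14.4kbps.
    backbone_kbps = 256
    active_modems = 100
    modem_kbps = 14.4

    share_kbps = backbone_kbps / active_modems
    print(share_kbps)                      # 2.56 kbps per connection

    # Each user gets the smaller of their modem speed and their fair share.
    print(min(modem_kbps, share_kbps))     # 2.56 kbps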

The overall speed of connection is determined by the "narrowest" bandwidth in the line of connection between one user and his/her destination. In the UK, much of the PIPEX backbone runs at 2Mbps, which, since PIPEX is supporting a constantly growing number of users with 64kbps leased-line connections, is a necessary part of the service. Although it would be possible to connect 64kbps leased-line users together along a 256kbps backbone, four such users would swamp the available bandwidth, and thereafter things would slow down considerably as the packets became rationed. But on almost any circuit, occasional bursts of loading will cause packet transmission to slow and even stop ...temporarily.
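The "narrowest link" point boils down to taking a minimum over the links in the path; a small sketch, using the leased-line and backbone speeds mentioned above (the route itself is invented for illustration):

    # End-to-end throughput is bounded by the slowest link along the route.
    def path_throughput_kbps(link_speeds_kbps):
        return min(link_speeds_kbps)

    # A hypothetical route: 64k leased line -> 2Mbps backbone -> 64k leased line.
    print(path_throughput_kbps([64, 2048, 64]))   # 64 kbps

    # The article's worst case: four users on 64k leased lines exactly fill a
    # 256kbps backbone (4 * 64 = 256), so any extra demand means rationing.
    print(4 * 64)                                  # 256 kbps of demand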

On some world links, where a connection can take as many as 8 or 10 routings on its way via the Internet, there is a much higher probability of squeezing through an arm of the network with restricted bandwidth.
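To see why more routings raise the odds, a toy probability sketch helps (the per-hop congestion chance here is a made-up assumption, not a measured figure):

    # If each hop independently has some chance of being congested, the chance
    # of hitting at least one congested hop grows quickly with the hop count.
    def prob_some_hop_congested(hops, p_congested_per_hop):
        return 1 - (1 - p_congested_per_hop) ** hops

    # Assuming (purely for illustration) a 10% chance per hop:
    print(prob_some_hop_congested(2, 0.10))    # ~0.19 for a short domestic route
    print(prob_some_hop_congested(10, 0.10))   # ~0.65 for an 8-10 hop world link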

What does this matter? The essence of the TCP/IP network protocol is that it works, and stays working, in all manner of adverse conditions. Your packets will arrive at their destination "eventually", and for some applications, like store-and-forward email, this isn’t a big problem. But for the increasing numbers of users who are using a "live" IP connection for applications like TELNET, FTP and the hypermedia browsers, this is a major concern if the packets dribble in at maybe as little as 100 bytes/second (or less!).

Quality of service is therefore increasingly important, and while the "amorphous Internet" cannot be held accountable for much of what goes on, your specific service provider can at least manage his arm(s) of the network as effectively as possible. If your service provider’s own network connects the majority of users that you want to communicate with, then your connection to them will be direct (unless some failure on a link causes the magic of the Internet to take over and re-route your call out of the country to enter at another point).

Bottom line: no-one "sells the Internet", and the phrase "a supplier of Internet connectivity" needs to be taken advisedly. What the companies offering to connect you to the Internet are actually doing is connecting you via a managed network connection that they operate and maintain, and which has gateways to the world-wide Internet. The arrangements that exist between the various member companies that have mutual exchange agreements to form the Internet rely on the spirit of open systems that has driven the Unix community for the past twenty years.

It is possible for a large national network operator such as BT or Mercury to stroll in and offer facilities in the same way as any other service provider. And existing operators will be understandably peeved that their efforts to create a viable Internet in the face of the indifference of the main PTT services may be usurped: especially since BT already earns more from any Internet connection than the specialist suppliers, since it (inevitably) provides the Kilostream and dial-up network connections to the suppliers’ backbones.
