
A Fundamental Overview of Network Technologies – Part I

This article focuses on the network technologies that perform the core tasks of computer-based communication. As with any field of technology, many sub-technologies and related topics branch off from the main subject we will examine today, but the primary focus here is the technical information most directly relevant to computer-based communications. This gives us a foundation for understanding which network technologies enable what a network is actually used for.

A great variety of methods perform the most common network tasks. In the big-picture model, imagine an access layer that allows a wide range of devices to attach to the network. Switches, routers, and hubs provide the wired ports that connect these devices, and in many situations devices connect to this access layer wirelessly. Connectivity is established first, which then allows access to network services. Network services include things like websites, printers, storage devices, application servers, peers, and countless other possible hosts.

From the access layer there may be an additional distribution layer with more switches and routers to direct network requests and traffic. Redundant traffic paths may exist to route requests for service. Sometimes a local host request can be resolved without the traffic travelling far to find what it needs. Other times, if the requested service is not found locally, the request will be forwarded through a default gateway to another place where the service might be found.

Default gateways forward network traffic to remote locations through traffic-directing devices called routers. Routers maintain a table of other networks using an addressing system, and through that addressing system hosts located on remote networks can be contacted.

Small networks, such as SOHO (small office/home office) setups or regular home internet access, will often have a local routing-capable device that contacts a service provider to reach hosts beyond the local network. Access and distribution layers are more common in larger networks with many computers that require enhanced traffic control.

On very large networks in large organizations, such as enterprise networks, the large number of devices requesting access will sometimes justify an additional network layer called the core. With the very large volume of traffic requests on enterprise networks you will often find large, fast routing devices in the core layer that can service huge numbers of requests. These very fast routing devices can also be integrated into the distribution layer, eliminating the need for a discrete core layer. This method of integrating the distribution layer and the core layer is often referred to as a “collapsed core”.

This basic view of interconnecting networks into a working architecture is only the beginning. Various types of wiring and hardware are assembled to create the network in this way, and software running on the connectivity devices directs traffic from one place to the next. Without this software running on the network hardware, the communication the network exists to provide simply does not happen.

Standards organizations like the IEEE exist to create standards that software can comply with, delivering the compatibility that is essential to make things work together. Without standards agreed to by the software running on network devices, communication between devices would be extremely limited. Luckily, popular standards do exist, and network communication can be assured when the technologies are combined as traffic traverses local or remote networks.

Sometimes unique or vendor-specific networking standards are implemented and used for secure networks. Certain protocol standards are only understood by particular software versions and vendor hardware. Cisco Systems, for example, is a worldwide leader in network device design. Some of its proprietary protocols will not communicate with other vendors’ software and hardware; often these unique protocols exist as enhancements between Cisco-specific equipment. Cisco is not at all limited to high-performance operation within its own family of network devices, however. Cisco promotes a “one network” vision that allows impressive integration with other vendors’ hardware and software.

Cooperation on networking standards is important to any vendor’s success in piecing together a network design that works in harmony. It facilitates scaling and expanding networks as they grow and upgrade over time. Without vision and foresight into what matters to the greater networking community, a narrow-minded vendor will quickly find itself obsolete as technology advances.

Exciting new technologies are in development that will deliver enhancements not commonly dreamed about now. Peer-to-peer networks, for example, may facilitate local routing without the need to forward traffic to specific gateway hosts like external routers. These peer-to-peer networks might even create their own unique protocols used exclusively by the participating communities. Revisiting and reinventing visionary peer-to-peer network types will create new networking models and architectures independent of traditional centralized models.

What might we see in the future? It is common now to think of the internet as a centralized network where certain standards are followed, and for some this is the end of their ideas and vision. Back in the 1990s internetworks seemed so new that little else was imagined as possible. Perhaps in the technological cycle that seems to reinvent itself every several years we will encounter networking models and standards we can only speculate about today.

Looking at existing network technology fundamentals gives us background on why things are the way they are and a clue about where they are going. This article presents fundamental networking concepts; further detail can be gathered through personal research and learning at your own pace from other sources.

Fundamentals and Essentials

When working to understand computer-based networks we must first build an understanding of fundamental designs, components, and network types. In this way we open up our comprehension of how networks really function. A connection between two hosts or devices can be considered a small network; the connection between the devices gives both systems a way to share and communicate data.

The systems will not automatically communicate with each other just because they have a wired or wireless connection; there must also be some kind of conduit for transporting the data between them. Traditionally, two general networking models have been used: the peer-to-peer type and the client/server type. Both are basic ways of using two or more computers to communicate data. Software running on a host might set up a “peer” and look for other peers to join the network, or a host specifically chosen as a server might be joined with clients to form a network. Peers act like separate servers or nodes and maintain a record of the other peers they are connected to. Server-based networks use the server to control network communication; clients may run a specific program or application that allows them to connect to the server and be granted permission to share data. The centralized server/client model traditionally uses the server to keep track of accounts and handle authentication. Hybrid networks combine elements of both the peer-to-peer and server/client types.
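
As a minimal sketch of the client/server exchange, here is the idea expressed with Python's standard socket module. The loopback address 127.0.0.1, port 5000, and the messages are arbitrary choices for illustration, not part of any particular product.

  # A minimal sketch of the client/server model using Python's standard
  # socket module. The address 127.0.0.1 and port 5000 are arbitrary examples.
  import socket

  def run_server(host="127.0.0.1", port=5000):
      # The server listens for a client, receives its request, and answers it.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.bind((host, port))
          srv.listen(1)                            # wait for one client
          conn, addr = srv.accept()
          with conn:
              request = conn.recv(1024)            # the client's request
              conn.sendall(b"served: " + request)  # the server's response

  def run_client(host="127.0.0.1", port=5000):
      # The client connects to the server and asks for a shared resource.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
          cli.connect((host, port))
          cli.sendall(b"hello, may I have some data")
          print(cli.recv(1024).decode())

In a peer-to-peer arrangement each host would run both roles at once; in the client/server model only the server listens for connections.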

Whether someone is using a peer-to-peer network or a client/server network, the main goal is to share information. That information might be a database, files and directories, print resources, or applications. These are the general uses of a network model, but in practice almost any kind of electronic data resource can be designed into a network to accomplish whatever it is meant to do.

There are a large number of technologies and methods used to make data sharing work. It is also important to consider the connection types and related software that will be used to build the network and allow it to deliver the desired results. Not all networks have the same objectives, so various decisions must be made to bring a network to life. It is important to recognize what you wish to accomplish before setting up a network. Many networks are like a puzzle: each piece is placed in the proper order to build the completed picture.

Like a collage, you may think of a network as pictures attached to more pictures, building an ever larger collection of photographs. A network is often similar to this analogy in the way small networks or groups of devices are eventually connected to larger and larger groups of networks. For example, a small home network may communicate and share data locally, then attach to a larger network to gain access to a greater amount of information. Home networks will usually have some sort of gateway that allows them to reach out to other compatible networks to share and transfer information.

In the past, network technologies generally used mainframe computers as servers to provide network services. A terminal was used to set up a communication session with the mainframe to accomplish networking. These older terminals, used as clients, needed little if any computing power of their own because the computing took place on the mainframe.

More modern network devices have enough computing power to perform functions locally and communicate with other devices as peers, without needing a centralized mainframe or server. Small, medium, and large network models began to appear and created general standards for using computer-based networks. Later, devices like cell phones, faxes, and other hosts could successfully attach to these networking models and participate on the network.

Three types of networking models are widely accepted and popular: LAN, WAN, and MAN. Local area networks (LANs) are usually confined to a single local area of some kind; a LAN might exist, for example, in a home or an office building.

Wide area networks (WANs) are generally larger in area than a LAN. A WAN can be one private network or a collection of smaller LANs assembled into a group to form a larger network. WANs generally span a large geographical area and can be worldwide in scope.

Metropolitan area networks (MANs) are located in a metropolitan area such as a city or town. MANs are popular with organizations that have more than one location in a metro area. Any LAN, WAN, or MAN might be a peer-to-peer network, a server/client network, or a hybrid combining elements of both.

In a peer-to-peer network each peer has the advantage of being able to act as both server and client at the same time. In this model each peer takes on the server role and maintains control of its own shared resources. Because peers control their own resources, no single peer necessarily acts as a centralized resource. Peers sometimes share common elements, like a common database, and will synchronize themselves with other peers to keep information accurate and up to date. Peers can also hold information and data they do not allow to be shared, and that data stays local.

Every peer on a peer-to-peer network has its own administrator who controls its local server functions. Clients do not normally administer centralized server functions, though they might be given permission to do so if this is designed into the network plan. Peer data might be freely available to other peers or selectively allowed to only certain peers, and it might be readily transferable or password protected for additional access security.

In early peer-to-peer networks the software that facilitated sharing was basic and inconsistent, leading to loose security and unreliable data management. More modern peer-to-peer networks allow greater configuration options that protect local data in a secure fashion. A large factor in peer security is the administrator operating it, and the same is true of server-centric administration: an inexperienced administrator of any network model will run into mishaps if best-practice security and configuration methods are not followed.

Whether an organization views peer-to-peer networks with skepticism is its own choice. Some network engineers in the past regarded centralized computing as the only secure solution on a network. That opinion depends on your point of view and how you want to use the network. Keep in mind that in a centralized model a security weakness still exists in that all the data can be attacked centrally. In a peer-to-peer network there is not necessarily one centralized point of failure. For example, if a peer becomes unavailable it can be convenient to access the data from another peer that holds a copy. If a centralized server running a network becomes unavailable, the network could come to a halt.

When administrators prefer a centralized networking model it is usually because they want centralized control of network resources in order to maintain consistency. Private peer-to-peer networks can operate just fine if they are well planned and maintained. Server-centric administration lends itself best to organizations needing centralized control of a network, such as a company or corporation. In past years personal computers and devices were also not usually powerful enough to act as servers: old computers had too little memory and CPU power to run efficiently. In modern times memory on new devices is plentiful and reasonably priced, and even cell phones can act as smart devices and run current peer-to-peer applications successfully.

Operating systems play a large role in how well a network performs. Some devices and clients operate more efficiently with operating systems like Linux, while Windows operating systems tend to carry more overhead on the same hardware. Of course, if the hardware is sufficiently powerful there is not much concern about running software in a server-type role. If a full-scale network with a large number of robust features is needed, be sure to use a peer-to-peer network operating system that allows detailed control of data access; some peer-to-peer software performs only a specific function and does not provide large-scale network services.

For example, a peer-to-peer networking application may provide a specialized function like sharing a database but not offer services like directory shares or remote printing. More richly featured peer-to-peer networking software is referred to as a peer-to-peer network operating system. Get an idea of what services a network will require, and you will be able to make an educated choice about the solution that will accomplish what needs to be done.

Common benefits of basic peer-to-peer networks include the following.

  • Ease of installation and configuration.
  • No centralized server is necessary.
  • Simple hardware requirements and no need for specialized equipment.
  • No centralized administrator.
  • No centralized security issues involving attacks on central data.
  • Increased password security due to multiple passwords being used for different peer resources.
  • More reliable privacy resulting from non-centralized administrative control and auditing.

Server/client networks are referred to in many ways; they may be called mainframe or server-based networks. The centralized server commonly has the role of controlling the network and serving up the services clients request. Unlike in a peer-to-peer network, the server directs client requests for information and data instead of one peer requesting information directly from another. It is common for a client to be authenticated onto a server-based network first, which then grants it permission to access network resources. The server runs services for the network such as applications, file sharing, print services, and databases.

The server’s main job is to respond to client requests and deliver network resources. A server runs a NOS, or network operating system, of some kind that gives it the ability to act in the role of a network server. Accounts, along with passwords, are used to verify clients so they can join and participate on the network. Since a server-based network might be serving resources to a large number of users, the server is generally a machine with powerful hardware and lots of memory. Servers may have fast local hard disks to support network applications and data, or might use a storage array to keep data available to users. In past years server networks were often load balanced across farms of server hardware. In modern times it has become impractical to maintain large numbers of individual physical servers, and virtual server systems have gained popularity.

A couple of physical server machines might be set up with very powerful hardware and resources, and virtual machine software is then run on them to create virtual machines that run the NOS. This is an efficient and green way of implementing servers for various reasons, such as saving energy and providing fault tolerance. If a virtual machine goes down due to a hardware failure on server A, server B can recreate the virtual machine on its hardware, keeping the network operational with minimal downtime and administrator intervention.

Since the physical servers are creating virtual servers in multiplicity, the physical hardware must be high-end and resource rich, because it is hosting virtual hardware and controlling a virtual software system. The return on running virtualization software such as VMware can be great in the long run from the savings on electricity alone. When physical servers that are not running virtualization break down, they must be replaced with more expensive hardware instead of simply being recreated on the fly. Physical servers also depend on disk images tied to an exact configuration and driver specification, which makes getting them up and running far less convenient than restoring a virtual machine.

Solutions like the Cisco unified network model aid in integrating server-based networks with the NOS, server hardware, virtualization, and routing and switching. This “one network” design will become even more important as future server-based networks aim to be more reliable and easier to administer. The hardware involved in physically designing and building networks will always vary, but the fundamentals of networking remain fairly consistent.

Centralized control is the most common reason for implementing server-based networks, with servers controlling the access of clients. Only authorized administrator groups have permission to administer access to network resources. This server-centric security model helps organizations and companies secure their network data and resources in a straightforward way.

The biggest concern with server-based networks is cost. They are more expensive to create and manage, and they often require specialized information technology staff to administer. IT professionals require good salaries to retain them; without competitive pay, organizations can experience high turnover and network management problems as new employees cycle in and out. It is important to keep personnel familiar with day-to-day IT management, because retraining and loss of network familiarity undermine the consistency a reliable network needs.

Common benefits of server-based networks include the following.

  • More powerful network equipment running the network in a fast, reliable manner.
  • Centralized account and client administration saves management time.
  • Enhanced data security, centrally managed.
  • Fewer passwords needed to access network resources, simplifying the networking experience.
  • Fault tolerance provided through redundant systems.

Helpful terminology used in basic networking.

  • Administrator – An individual who manages a system, resource, or device.
  • Account – An authorized user or device able to authenticate and use network resources.
  • Application server – A specialized server participating on the network to deliver an application service.
  • Byte – A unit of data equal to eight bits, commonly used to measure data storage.
  • Centralized network – A network that is server based and centrally administered.
  • Client – A computer or device capable of initiating a login session on a server-based network. Clients can also be created for specialized peer-to-peer networks.
  • CPU – The central processing unit, the chip that performs the operations allowing software to function.
  • Dedicated server – A machine serving only as a server and not participating in other roles such as acting as a client.
  • Directory server – A server primarily delivering directory information.
  • Disk drive space – The amount of space available to store data such as software, primarily measured in gigabytes (GB) and terabytes (TB).
  • Network domain – A unit for grouping network resources.
  • Email – Electronic messaging provided by network services and application software.
  • Ethernet – An IEEE specification for network connectivity technology, popularly used in network interfaces.
  • File server – A server primarily delivering file and storage services.
  • Group – A specific collection of resources organized logically around a common element.
  • Internetwork – A collection of interconnected networks functioning as one larger network.
  • IEEE – The Institute of Electrical and Electronics Engineers, a standards organization that exists to help define standards for software and hardware.
  • LAN – Local area network, usually confined to a single local area of some kind.
  • MAN – Metropolitan area network, located in a metropolitan area such as a city or town.
  • Stand-alone device – An independent device not typically connected to any other device.
  • Terminal – A device that allows communication with a network; traditionally simple, often consisting of little more than a display and keyboard.
  • User – A participating individual on the network.
  • WAN – Wide area network, generally larger in area than a LAN. A WAN can be one private network or a collection of smaller LANs assembled into a larger network.
  • Workgroup – In Microsoft networking terminology, a peer-to-peer type of network model.
  • WiFi – Wireless communication used for data transfer over a network.

Cabling and Wiring

In the physical part of the network, connectivity is achieved with cabling and wiring that allow data traffic to flow across the network. Modern networks also use wireless for network communications; WiFi is the term used to describe this wireless capability. Different WiFi standards exist that allow different communication speeds, for example the 802.11g (“G”) type.

A wide range of cable and wiring types is available for network communications. Data transfer rate, cost, size, attenuation over length, and installation method are all factors to consider when installing network cabling. Each cabling type has its advantages and disadvantages, so be sure to look into the specifications of the specific cabling you are interested in, because there is a very large selection to choose from.

Here are a few types of traditional cables and wires used for networking.

  • Coaxial cable
  • Twisted pair cabling
  • Unshielded twisted pair cabling
  • Thinnet cabling
  • ARCnet cabling
  • Plenum cabling
  • Fiber optic cable

Coaxial cable, or coax, is the granddaddy of network cabling; it was the first type used to connect Ethernet computer networks. This type of cable uses a thick copper core, and the large core allows data transfer over long distances. A woven mesh of copper or another alloy wrapped around the core acts as the electrical ground, and that mesh layer also serves as shielding against electromagnetic interference.

Coaxial cable is the cable-television wiring you might have in your home if you have cable TV. In general network installations it is not as common as unshielded twisted pair (UTP) cabling, though it has become more visible in recent times as cable companies have entered the internet service business. Being a stiff, firm type of cabling, it is not as easily bent or as pliable as thinner wiring types.

Being an older type of network cabling, coaxial cable has typical limitations when transferring data. 10 Mbps is an average transfer speed for typical coaxial cable, which is much slower than cabling that allows transfer rates greater than 100 Mbps (megabits per second). Data transfer speed is measured in bits per second. Do not confuse bits per second with bytes per second: a byte is a larger unit than a bit (eight bits make a byte), so the same link rated in bytes per second shows a number eight times smaller than its rating in bits per second.
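
As a quick worked example of the bit/byte distinction, using the usual convention of eight bits per byte:

  # Converting a link speed quoted in megabits per second (Mbps) into
  # megabytes per second (MB/s): eight bits make one byte.
  link_mbps = 100                   # e.g. a 100 Mbps Fast Ethernet link
  link_megabytes_per_sec = link_mbps / 8
  print(link_megabytes_per_sec)     # 12.5 MB/s is the theoretical ceiling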

Thinnet, also referred to as RG-58 cable, is a type of coaxial cable more common in computer networks. Data transfer is still limited to around 10 Mbps, but special varieties of this cabling are available that extend the abilities of standard RG-58. Other RG-type cables have different resistances (in ohms) and capabilities depending on their use. The ends of these cables are terminated with barrel-type BNC connectors, commonly straight or T-shaped. BNC stands for Bayonet Neill–Concelman, though it is often informally expanded as “British Naval Connector.”

Thicknet cables are a thicker version of Thinnet cabling. Longer data transmission is possible with this type of cable, but it is rather expensive, and Thicknet is stiff and hard to work with because it is so thick. You might encounter these cables in old network installations, but they are pretty much outdated nowadays.

Other, older networks used basic phone-line style cabling such as twisted pair. UTP is unshielded twisted pair cabling: copper wires are twisted together in pairs to help protect against crosstalk interference. UTP comes in different categories, CAT1 through CAT5; the higher the CAT grade, the more twists per foot and the better the interference protection.

CAT5 is the most common because it supports data transfer speeds up to 100 Mbps. Lower grades of CAT cable are not common in computer networks because they perform poorly or do not work at all; CAT1, for example, is the original telephone wiring and supports only voice, not digital data transfer. These cable types use either an RJ-11 or an RJ-45 connector. They look similar, but the RJ-11 is smaller and used for the low grades of CAT cable, while the RJ-45 connector is used for CAT5. The wires run from the locations they serve in a building back to a patch panel, usually in some kind of wiring room. From there, patch cables plug into ports on a hub or switch for network access. Hubs are not used much nowadays and have been replaced by intelligent switches that direct network traffic more efficiently.

STP is shielded twisted pair, and it has the added benefit of being less susceptible to electromagnetic interference. UTP, unshielded twisted pair, is inexpensive and works much like regular phone wire. If you are terminating cable yourself, be sure not to mix up RJ-11 and RJ-45 connectors. STP may require additional electrical grounding and is generally less flexible and more difficult to work with than UTP. Pre-made cables are commonly used in server and switching rooms instead of cutting cable and crimping on connectors; they come in standard lengths and can be purchased color coded.

Some less common cables still show up occasionally. ARCnet cabling was used for token-passing networks in the past. ARCnet cables look much like coaxial cables, but support for them today is minimal; if you encounter ARCnet cable, chances are it will be of no use and will need to be replaced. Plenum cabling is fire-resistant cable: because it resists burning and gives off far less toxic smoke during a fire, it is required wherever building codes call for fire-resistant wiring.

Fiber optic cable has a core of glass or plastic. Typical fiber runs data at speeds up to around 2 Gbps (gigabits per second), and other fiber types have various specifications and transfer speeds. Dark fiber can be very expensive to implement and might need to be leased for long periods; 20-year leases can be typical. Average fiber optic cable is used primarily for high-speed connections, such as linking servers to switches and routing equipment. With the advent of gigabit Ethernet over copper it may sometimes be more practical to use that instead of fiber, although many network devices are fiber ready and many network engineers prefer fiber.

Fiber optic transmission travels in only one direction per strand, so cables are used in pairs: one to transmit and one to receive. Since light is used to transfer the data, electromagnetic interference is not a concern, because the signal is not affected by it; this also helps prevent eavesdropping and security attacks. Pre-made fiber cables are often used because attaching connectors can be difficult, and even seating pre-attached connectors can be a bit tricky, which is why some prefer copper gigabit Ethernet as simpler to work with. Fiber optic cable cannot withstand sharp bending, because the glass or plastic core inside can be damaged and interfere with the data signal.

Wireless technology uses radio frequencies as the medium to transfer data. Wireless networks can be quite easy to install and configure. A drawback of wireless communication is security: signals can be intercepted, and interference is possible. Wireless is convenient, especially for clients, but servers, switches, and routing equipment do not work well over it in high-speed networks because WiFi transfer speeds are slower. An interesting technology that was gaining popularity several years ago was infrared: an infrared LED transmitted a signal to a receiver and worked fairly well. The main concern was that it required a line of sight to the receiver; if the signal was blocked it simply would not work, or would slow down considerably.

Standards for Networks

When data is transferred between devices on a computer network it needs to be in a standardized form to be useful. Network devices rely on agreed-upon protocols, so that when data is sent it can be received and decoded successfully. IEEE, the Institute of Electrical and Electronics Engineers, is a standards organization that exists to help define standards for software and hardware so interoperability is achieved and everything works together properly.

ISO, the International Organization for Standardization, created the OSI model to help computers communicate with each other. The OSI model was first introduced in 1978 and revised in 1984. It is an international standard used for understanding how networks operate and communicate. There are similar models, like the Internet architecture model, that simplify networking into fewer layers, but the OSI model is more detailed and easier to follow when analyzing what is really happening on a network.

Here is the OSI model and the functions that basically occur on each layer.

  • 7.) Application layer – Applications use this layer to gain access to network services.
  • 6.) Presentation layer – This layer changes data by converting it into a format for transmission.
  • 5.) Session layer – The session layer enables sessions across the network between devices.
  • 4.) Transport layer – Data transmitted across the network is managed here.
  • 3.) Network layer – Addressing is handled by this layer through translation of logical network addresses to physical network addresses.
  • 2.) Data Link layer – This layer handles sending data frames from the network layer to the physical layer.
  • 1.) Physical layer – Converts bits into signals to send out data onto the physical cabling and into the network.

Depending on what you are interested in understanding about the network, you can examine what is happening at the different OSI layers. Layers 7-5 are primarily concerned with applications, while layers 4-1 are most relevant to networking itself.

Let’s briefly look at the OSI layers and examine the path of a network message entering a network interface card through a cable. First, at the physical layer 1, the signal is received and converted into bits of data to be passed up to the data link layer. When it arrives at the data link layer 2, the data in bit form is made into a frame for further processing.

From the data link layer 2, the network layer 3 processes protocol addressing and deals with packet switching. The data is then passed up to the transport layer 4 for error checking and further packet processing. The upper layers from here are used heavily by applications, and when the data has been passed all the way up to the application layer 7 it has been completely received.

We can think about networking hardware and tasks in terms of which OSI layer they exist on. This is very useful for troubleshooting and analyzing networks, and it is also a great way to increase your comprehension of computer networks. Network interface cards and cabling sit on the physical layer 1. Switching equipment and related functions like MAC addressing operate on the data link layer 2. Routing and IP functions are present on the network layer 3, and specialized functions like layer 4 switching occur on the transport layer. In this way we can isolate problems and concern ourselves only with the OSI layers we are interested in. IEEE 802 specifications are relevant to the physical components of the network on the OSI physical layer 1 and the data link layer 2; on layer 2, MAC (media access control) and LLC (logical link control) are expansions of the OSI model made by IEEE 802.
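
As a rough illustration only, here is one way to jot the layer assignments above down as a small lookup table; the entries simply restate the examples given in the text.

  # A small lookup table restating the layer assignments above; handy as a
  # quick troubleshooting checklist when deciding where to look first.
  osi_layer_of = {
      "network interface cards and cabling": 1,   # physical layer
      "switching and MAC addressing": 2,          # data link layer
      "routing and IP addressing": 3,             # network layer
      "TCP/UDP and layer 4 switching": 4,         # transport layer
  }

  for function, layer in osi_layer_of.items():
      print(f"OSI layer {layer}: {function}")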

The 802.x standards specify the capabilities defined for the various categories of network technology and adaptors.

  • 802.1 - Internetworking
  • 802.2 – LLC, logical link control
  • 802.3 – CSMA/CD, carrier sense multiple access with collision detection (Ethernet LANs)
  • 802.4 – LAN token bus
  • 802.5 – LAN token ring
  • 802.6 – MAN metropolitan area network
  • 802.7 – Broadband technical advisory group
  • 802.8 – Fiber optic technical advisory group
  • 802.9 – Integrated voice and data networks
  • 802.10 – Network security
  • 802.11 – Wireless networks
  • 802.12 – Demand priority access LAN. 100BaseVG

The OSI model is only a guideline to help us understand computer networks and design for reliable interoperability. Use it as a tool for enhanced comprehension of networking matters. When considering network design, the OSI model is very helpful for focusing on what matters most for whatever you are analyzing.

Network communication between devices depends on the protocol stack. The OSI layers we are concerned with here are the application layer 7, the transport layer 4, and the network layer 3. Application-to-application services on layer 7 include protocols like AFP (Apple Filing Protocol), FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and SNMP (Simple Network Management Protocol).

Transport protocols are concerned with reliable data delivery between devices, and these operations occur at the transport layer 4. ATP (AppleTalk Transaction Protocol), TCP (Transmission Control Protocol), Novell SPX, and Microsoft NetBIOS/NetBEUI all operate as transport protocols.

Addressing, routing information, and error checking are just a few of the functions network protocols provide on the network layer 3. Here we find protocols like the popular IP (Internet Protocol), which provides routing and addressing information. Other network layer 3 protocols include Novell IPX and Microsoft NetBEUI.

These protocols are not interchangeable even though they might sit on the same OSI layer; often they are function specific and not directly compatible. Consider also that networking devices like routers need more usable memory to run multiple protocols in the protocol stack, and additional CPU power is consumed by each one, so only run the protocols your network really needs.

Vendor-specific protocols from Microsoft and Apple are sometimes required for certain applications to function properly, and these protocols are not necessarily drop-in replacements for one another. Microsoft NetBEUI and NetBIOS are not routable, and AppleTalk supports Apple-specific network functions for Apple devices where necessary. Novell IPX/SPX is a lot like TCP/IP; IPX/SPX is routable, but if a router already runs TCP/IP it does not make much sense to run the Novell protocol unless it is required. Many routers might not even support IPX/SPX, so if you plan to use it on your network be sure your router supports it.

Wide Area and Local Area Network Topologies

Network topology refers to the physical layout of a network. Topology concepts can be applied to any type of network where the solution fits, so it makes good sense to consider various topologies for LANs, WANs, and MANs. Examine all the components a network might need to function optimally; analyzing a network before implementing it is always better than installing a poorly chosen one. Even if you are not installing a network today, learning about networks from this point of view gives you insight into why things need to be the way they are.

The most basic way to start connecting to a computer network right away is with a network adaptor. Sometimes you may need to purchase one, but most of the time a network adaptor is already built into your personal device. In years past, network adaptors needed manual configuration, such as an IRQ (interrupt request) number and an I/O address; on some servers you might still need to go through these additional steps. Most modern devices configure themselves automatically when installed or used on a network. The device connects through an Ethernet port or a wireless connection, and often that is all you will need. By far the easiest way for most people to get connected fast is WiFi. If you have a home router it will automatically assign you a TCP/IP address when you access it.

All you will commonly need for your router to give you internet access is the identification code (network key) that accompanied the device when you purchased it. A device with a wireless adaptor will identify local networks and give you a choice of access points to connect to. Find the name of the network you want, click it to connect, enter your identification code, and you will gain access. These are general steps for getting WiFi internet access; some extra steps might be needed depending on the systems you use.
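
If you are curious which address your router's DHCP service handed to your device, one common trick, sketched here with Python's standard socket module, is to open a UDP socket toward a public address and ask the operating system which local address it picked. Connecting a UDP socket does not actually send any traffic, and 8.8.8.8 is simply an arbitrary routable address used as the target.

  # Ask the operating system which local address it would use to reach the
  # internet; this is typically the address DHCP assigned to the device.
  # Connecting a UDP socket sends no packets; 8.8.8.8 is only a well-known
  # routable address used as a target for the lookup.
  import socket

  probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  probe.connect(("8.8.8.8", 80))
  print(probe.getsockname()[0])    # e.g. 192.168.1.23
  probe.close()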

If you need to log in to a server-based LAN, go to the client login window and enter your username and password. On some networks you might also need to enter a home server name to connect. If you do not have an account for the network, ask the organization's administrator to create one for you. Some network operating systems require both a user account and a device account, so if you still have difficulty logging in, ask an admin whether a device account is required; if it is, each device used on the network will need one.

From this we can see that on server-centric networks every device and user needs the proper accounts. Home and small office networks usually make accessing network resources much easier. Once you have a fair idea of how a client attaches to a network, it is time to examine network topology more closely.

Network topology is a wide-ranging topic, spanning everything from small home networks to large enterprise networks. The topology of a network is most simply understood from a physical point of view. There are other ways of describing topologies, such as logical topologies, but those relate more to network design, so let us start with the physical topology.

Imagine the physical topology as all the physical components in a network assembled together to create the complete network. There are four basic core topology styles. The ring, bus, mesh, and star are all valid topologies.

We can set the ring topology aside right away because it is an outdated design; hardly anyone uses token ring networks anymore. In a ring topology all the computers were connected one after another in a ring, similar to a circle. If one computer went down, the network had problems because the data flow circling the ring was interrupted. Although alternative solutions to the ring topology's problems were developed, little interest followed.

Bus topology networks were closely associated with coaxial cabling. There was one long main coaxial cable, and each computer on the network tapped into it along its length; each end of the main cable was capped off by a terminator. Much like the ring network, if the main cable's data path was broken the network would not function correctly. The inherent problems with this design, and its reliance on coaxial cable, quickly ended its practical usefulness.

Mesh networks are very fault tolerant because each computer is connected directly to the others with its own cable or wire. The flaw in this topology is that as the network grows larger it also grows more complex to administer; with enough computers participating you get the “spider web” effect. Complex mesh networks can also run into routing loops, where data gets stuck circulating on redundant paths, unable to settle on the proper delivery route. This can be solved with a pruning protocol that determines the best data paths, but administering probable routing loops on top of countless point-to-point cable connections is not for the faint of heart.

The most practical topology by far is the star design. There is a central connection point that all computers on the network plug into; originally this was a hub, but now it is a modern switch. If one computer goes down, the others are not disturbed and continue operating. This topology is also easy to understand because the LAN group always connects to the same place, the switch. Larger networks can be assembled by connecting multiple star-network LAN switches together: one way is to treat the LANs as an access layer and link all their switches to an additional set of distribution switches that manage them. One small drawback is that this topology needs a lot of cabling to set up so many individual connections. Another small concern is that if a switch goes bad it disrupts the LAN connections of every device attached to it.

A common reason switches go down is a failed power supply. This risk can be reduced by using a switch with dual power supplies, such as a Cisco 4000 or 6000 series model: if one power supply fails, the other is ready to take over as a backup. Other vendors have similar hardware redundancy technologies that help head off problems before they affect the network.

Another helpful switch feature assists in preventing data loops. If a switch has more than one redundant path available, only the best path is used so data does not loop endlessly across multiple paths. This technology is called STP, the spanning tree protocol. STP chooses a default data path when multiple switches are used and disables the redundant paths until they are needed, for example when an active path fails; a simple sketch of the underlying idea follows below. Hybrid topologies use the innovations of the star topology for their own unique implementations. There are both star-ring and star-bus topologies; both operate with the same kind of centralized switch as the star topology but integrate their own bus or ring technologies. A star-ring setup needs a special switch that operates as a logical ring, while a regular switch works fine with various star-bus configurations.
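
This is not Cisco's STP implementation, only a minimal sketch of the underlying idea: given a small switch topology with a redundant link, keep one loop-free set of links active and treat the leftover link as a blocked backup. The switch names and link list are made up for the example.

  # The idea behind spanning tree in miniature: keep a loop-free set of links
  # active and mark redundant links as blocked backups. The switch names and
  # links are invented for this example; real STP elects a root bridge and
  # exchanges BPDUs, which is far beyond this sketch.
  links = [("SW1", "SW2"), ("SW2", "SW3"), ("SW1", "SW3")]   # SW1-SW3 is redundant

  def pick_loop_free_links(links, root="SW1"):
      active, blocked, reached = [], [], {root}
      for a, b in links:               # links listed outward from the root
          if a in reached and b in reached:
              blocked.append((a, b))   # this link would close a loop
          else:
              active.append((a, b))
              reached.update((a, b))
      return active, blocked

  active, blocked = pick_loop_free_links(links)
  print("forwarding:", active)    # [('SW1', 'SW2'), ('SW2', 'SW3')]
  print("blocked:   ", blocked)   # [('SW1', 'SW3')] held in reserve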

A hybrid mesh topology is possible for partial fault tolerance. In a hybrid mesh, only some critical parts of the network are connected at multiple points, creating multiple paths a computer or device can use to communicate. Hybrid mesh networks are less complex than traditional mesh topologies and less expensive to maintain. If one data path is unavailable, this topology offers alternate ways to send the data. The balance between fault tolerance and a practical number of redundant point-to-point connections makes this a nice design to consider.

If a network needs to forward traffic to a default gateway, it means the destination host is not on the local network. A default gateway is usually a router or other device used to reach the internet or other remote networks. When a topology has a large number of switches and needs default gateway access, routers handle the job. Routers can make more complex decisions about directing large volumes of traffic, and often router and switch functions are combined into a single, relatively new device called a layer 3 switch.

A layer 3 switch has the ability to act as a switch directing network MAC address traffic at OSI layer 2, but can also act like a router directing routable IP type traffic at OSI layer 3. Most common switches only operate at OSI layer 2, the data link layer. Since layer 3 switches are both a switch and a router at the same time they are also much more expensive than a regular switch.

Some additional network devices you may find interesting are repeaters and amplifiers. When cables and wires exceed a certain length they attenuate the signal, meaning the carrier signal moving digital data through the cable or wire loses strength. A repeater receives a degrading signal and boosts its strength before sending it onward. Repeaters are used on baseband (digital) transmission systems, while an amplifier serves the same purpose on networks that use analog signaling.

Bridges are seldom used today. They are a lot like repeaters but make some decisions before forwarding data. When the destination address of a packet is on the same network segment as the bridge, the bridge does not forward the packet; if the destination is on a different segment, the bridge knows this and forwards it onward.

Bridges also do a good job of connecting network segments that use different media types, such as adapting Ethernet twisted pair to a different medium like coaxial. Repeaters are faster than bridges because they do not examine MAC address information. Bridges can be much more expensive than repeaters, but they have the advantage of being able to bridge between segment media.

When network topologies reach out to resources on other networks, they act as WANs, wide area networks. It is not always possible for an organization to own all the physical cabling between wide area network sites, so service providers are used to bridge communications between sites. A virtual circuit is like a private connection set up by a provider to tunnel between networks; frame relay is an early example. The circuit can be an on-demand or a permanent virtual circuit for wide area communications.

When frame relay was popular, most organizations preferred permanent virtual circuits unless they used the circuit only occasionally, in which case a lower-cost on-demand virtual circuit was good enough. MPLS, which stands for multiprotocol label switching, is an interesting router technology used in a similar way to frame relay. When MPLS first became available built into routing equipment, some organizations tried to run it across the internet on their own, but without an experienced routing administrator the effort is lost time and expense. Most well-designed networks do not implement MPLS themselves and instead use a solution from a service provider, which is an easier way of producing the same results.

Internet wide area access and MPLS can be combined with VPN, or virtual private network, technology. VPN technology secures data communications over a wide area network's virtual circuit or MPLS system. The most popular protocol used with a VPN is usually IPsec, or IP security. VPN tunneling helps protect data by providing a secure way to transfer it through unknown networks.

Since network topologies rely heavily on OSI layers 4 through 1, the IEEE 802.x standards are directly applicable and important for standardizing the technologies used to build networks physically and help them operate properly. Noteworthy here are 802.2, LLC logical link control, and 802.3, CSMA/CD carrier sense multiple access with collision detection, used for Ethernet LANs. Another 802.x standard, 802.5 LAN token ring, is still defined but no longer popularly used. Keep in mind that although the 802.x standards have been created, not all of them are in popular use today; perhaps if a forgotten standard becomes useful again in the future it can be revived or its original specification expanded upon.

Introduction to TCP/IP

TCP/IP is an open standard protocol suite. It has a large lead over other network and transport protocols, having been eagerly accepted worldwide since 1983. TCP/IP is based on the Internet architecture network model, which is similar to the OSI model except that it has 4 layers instead of the OSI's 7.

  • 4.) Application layer – Telnet, FTP, SNMP, other.
  • 3.) Transport layer – TCP, UDP, other.
  • 2.) Internet layer – IP, ICMP, ARP, RARP, other.
  • 1.) Network access layer – Ethernet, FDDI, MAC, other.

The Application layer maps to OSI layers 7-5. The Transport layer maps to OSI layer 4. The Internet layer maps to OSI layer 3. The Network access layer maps to OSI layers 2-1.

The TCP/IP protocol suite is a collection of popular open standard protocols that any vendor can implement. Two devices can negotiate a connection for data transfer with TCP, the Transmission Control Protocol. Addressing and fragmentation are the two primary functions provided by IP, the Internet Protocol.

At the application layer, many interesting protocols in the TCP/IP suite are available. Telnet provides terminal emulation sessions from client to server. FTP can be used to transfer files between a client and an FTP server. SNMP, the Simple Network Management Protocol, monitors and manages TCP/IP network devices. DHCP, the Dynamic Host Configuration Protocol, is a very successful protocol used to automatically assign TCP/IP address information to devices on the network.

At the transport layer, two key protocols are implemented by a very large number of vendors: TCP, the Transmission Control Protocol, and UDP, the User Datagram Protocol. UDP is very popular for voice over IP networks because it is a connectionless transport protocol that does not add the overhead of guaranteed delivery.
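
From application code the difference shows up in the socket type. Here is a small sketch using Python's standard socket module, where SOCK_STREAM selects TCP and SOCK_DGRAM selects UDP; the address 192.0.2.10 and port 5004 are placeholder examples.

  # TCP versus UDP as seen from application code: SOCK_STREAM gives a
  # connection-oriented, reliable TCP socket, while SOCK_DGRAM gives a
  # connectionless UDP socket with no delivery guarantee, which is why
  # voice traffic favours it. 192.0.2.10:5004 is a placeholder destination.
  import socket

  tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP
  udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # UDP

  udp_sock.sendto(b"voice sample", ("192.0.2.10", 5004))  # fire and forget
  tcp_sock.close()
  udp_sock.close()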

The internet layer has a number of handy protocols, and IP, the Internet Protocol, is the most widely used and famous. ARP, the Address Resolution Protocol, resolves IP addresses to MAC addresses, and RARP, the Reverse Address Resolution Protocol, resolves MAC addresses to IP addresses. ICMP, the Internet Control Message Protocol, reports errors, primarily from hosts and routers.

The network access layer deals with the more physical kinds of things like Ethernet and MAC addresses, and most closely resembles the OSI data link and physical layers. Switches can use many TCP/IP suite applications for servicing and troubleshooting; the whole TCP/IP suite is quite versatile and handy to use.

The transport layer carries port information in data packets. Ports are reused again and again, and standard port numbers like these are common when using TCP/IP: HTTP, the Hypertext Transfer Protocol, port 80; FTP, the File Transfer Protocol, ports 20 and 21; SMTP, the Simple Mail Transfer Protocol, port 25; DNS, the Domain Name System, port 53; Telnet, port 23; and POP3, port 110. Since common ports are routinely reused, it becomes easy to memorize them and predict which services appear on which ports.
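
These well-known assignments can usually be looked up straight from the operating system's services database; a small sketch with Python's standard socket module:

  # Looking up well-known port numbers from the system's services database.
  # "domain" is the registered service name for DNS.
  import socket

  for service in ("http", "ftp", "smtp", "domain", "telnet", "pop3"):
      print(service, socket.getservbyname(service, "tcp"))
  # typically prints: http 80, ftp 21, smtp 25, domain 53, telnet 23, pop3 110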

Multiplexing builds the data packets used for network communication, and demultiplexing disassembles received packets. To participate on a public TCP/IP network, a device needs a unique TCP/IP address, with address assignment coordinated by ICANN, the Internet Corporation for Assigned Names and Numbers. A dotted decimal, or dotted quad, notation is used to write the address as a series of four numbers separated by periods; numbers written this way represent a TCP/IP version 4 address. To increase the number of unique addresses available to everyone, another version of TCP/IP has been released, called TCP/IP version 6. Since IPv4 addresses have mostly been taken up already, other methods are now used to stretch address availability until TCP/IP version 6 gains more use worldwide.

Look closer at a TCP/IP address and you will see its two major parts: the network ID and the host ID. The network ID identifies the network segment an address belongs to, so all members of the same segment must share the same network ID. The host ID identifies the unique address of the host on that segment. An easy way to remember which side is which: the network ID comes first, on the left side of the address, and the right side contains the host ID.

There are various methods for calculating what the numbers in a TCP/IP address mean, but unless you are comfortable with binary math they will not mean much to you. In addition to the relative positions of the numbers, there is a kind of sliding boundary system called subnetting that can change the meaning of the address. Without a clear understanding of subnetting it is very difficult to work out the binary math by hand, so the easiest way to figure out your TCP/IP addresses and practice reading them is to get yourself a TCP/IP calculator.

A TCP/IP calculator is a calculator specialized for working out TCP/IP addresses. They are available for purchase at computer shops, and good ones can be found online. The calculator asks you to input the TCP/IP address you want more information about and the subnet mask that goes with it; it then reports details such as the internet address class, the network ID, and the host ID.

The four decimal numbers in a TCP/IP address represent octets. For class A networks the first octet determines the network ID. Class B networks use the first and second octets, and class C networks use the first, second, and third octets, to determine the network ID. Once you know the network ID, whatever remains on the right side of the address is the host ID portion.
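
Python's standard ipaddress module can serve as the kind of TCP/IP calculator described here; the address 192.168.10.37 with a default class C style mask is just an example.

  # Splitting an example address, 192.168.10.37 with mask 255.255.255.0, into
  # its network ID and host portions using the standard ipaddress module.
  import ipaddress

  iface = ipaddress.ip_interface("192.168.10.37/255.255.255.0")
  print(iface.network)                  # 192.168.10.0/24  (the network ID side)
  print(iface.ip)                       # 192.168.10.37    (the full host address)
  print(iface.network.num_addresses)    # 256 addresses in this block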

When reading the first octet, keep in mind that class A addresses fall in the range 1-126. The number 127 is not used because it is reserved for the loopback address. Class B addresses fall in the range 128-191 and class C addresses in the range 192-223. Class D addresses, in the range 224-239, are multicast addresses, so it is not likely you will be assigned one. Class E addresses, in the range 240-255, are reserved for future or experimental use, so it is not likely you will have one of those either.

The only kinds of addresses likely to be assigned to you by an administrator or provider are class B or class C addresses; class A addresses are usually reserved for very large organizations. Subnet masks mark off part of a TCP/IP address so you can tell how the split between network and hosts has been modified. Standard subnet masks are easy to read and helpful to remember.

  • The class A default subnet mask is 255.0.0.0
  • The class B default subnet mask is 255.255.0.0
  • The class C default subnet mask is 255.255.255.0

In the class A mask, the 255 marks the first octet as the network portion. When subnetting, it helps to think of each 255 as blocking out that part of the TCP/IP address, while the zeros represent the host ID portion. Things get trickier with more complicated subnet masks that modify the number of available hosts; you might see masks like 255.255.224.0.

Why does the third octet say 224? Because the mask has been modified to change the number of networks and hosts available. This is the point where understanding binary math pays off, but if it gets too complex that is okay; just use a subnetting calculator as mentioned before and skip the hand arithmetic. The main concepts are what matter most. You can also look the octet numbers up in TCP/IP charts and use a combination of the calculator and charts to work out what it all means, including how many networks and hosts are available on your network segment. It is a fun exercise and takes some practice.
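
The same standard ipaddress module can stand in for the subnetting calculator and charts; the 172.16.0.0 class B network below is just an example paired with the 255.255.224.0 mask from the text.

  # Using the ipaddress module as a subnetting calculator for the
  # 255.255.224.0 mask discussed above; 172.16.32.0 is an example subnet
  # taken from the private class B network 172.16.0.0.
  import ipaddress

  subnet = ipaddress.ip_network("172.16.32.0/255.255.224.0")
  print(subnet)                     # 172.16.32.0/19
  print(subnet.prefixlen)           # 19 bits of network ID
  print(subnet.num_addresses - 2)   # 8190 usable host addresses per subnet

  # How many /19 subnets the mask carves out of the default class B block:
  parent = ipaddress.ip_network("172.16.0.0/16")
  print(len(list(parent.subnets(new_prefix=19))))   # 8 subnets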

The most important pieces of information for configuring a host device with TCP/IP are the IP address, a subnet mask, and a default gateway. This information usually only needs to be entered manually when you are using static values given to you by a network administrator or building your own network; most of the time DHCP will automatically configure all the TCP/IP information a host device needs.

Some people claim to be confused by TCP/IP version 6, but in fact I have been looking forward to it for many years. Although it looks more complex than TCP/IP version 4, it is much easier in many ways: there are so many more addresses available that the painful subnet juggling of version 4 is largely unnecessary. Next time, in A Fundamental Overview of Network Technologies – Part II, we will take a closer look at TCP/IP version 6 and some more interesting technology subjects.


