Saturday, 28 May 2011

HUB & SWITCH OPERATION

Step 1
The network interface card (NIC) sends a frame.
Step 2
The NIC loops the sent frame onto its receive pair internally on the card.
Step 3
The hub receives the electrical signal, interpreting the signal as bits so
that it can clean up and repeat the signal.
Step 4
The hub's internal wiring repeats the signal out all other ports, but not
back to the port from which the signal was received.
Step 5
The hub repeats the signal to each receive pair on all other devices.

In particular, note that a hub always repeats the electrical signal out all ports, except the port
from which the electrical signal was received. Also, Figure 3-10 does not show a collision.
However, if PC1 and PC2 sent an electrical signal at the same time, at Step 4 the electrical
signals would overlap, the frames would collide, and both frames would be either
completely unintelligible or full of errors.
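
To make the repeating logic concrete, here is a minimal Python sketch of Steps 4 and 5 (the port numbers and the transmit function are hypothetical stand-ins for the hub's electrical circuitry):

def transmit(port, signal):
    # Stand-in for putting the electrical signal on the wire.
    print(f"port {port}: repeating {signal!r}")

def hub_repeat(all_ports, ingress_port, signal):
    """A hub repeats the signal out every port except the one it arrived on."""
    for port in all_ports:
        if port != ingress_port:
            transmit(port, signal)

# A frame from PC1 arrives on port 1 of a 4-port hub; the hub repeats it
# out ports 2, 3, and 4, but never back out port 1.
hub_repeat(all_ports=[1, 2, 3, 4], ingress_port=1, signal="frame from PC1")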


CSMA/CD logic helps prevent collisions and also defines how to act when a collision does
occur. The CSMA/CD algorithm works like this:

Step 1
A device with a frame to send listens until the Ethernet is not busy.
Step 2
When the Ethernet is not busy, the sender(s) begin(s) sending the frame.
Step 3
The sender(s) listen(s) to make sure that no collision occurred.
Step 4
If a collision occurs, the devices that had been sending a frame each send
a jamming signal to ensure that all stations recognize the collision.
Step 5
After the jamming is complete, each sender randomizes a timer and waits
that long before trying to resend the collided frame.
Step 6
When each random timer expires, the process starts over with Step 1.
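
A rough Python sketch of Steps 1 through 6 follows. The Medium class is a hypothetical stand-in for the carrier-sense and collision-detect hardware on a real NIC; the slot time and the truncated binary exponential backoff window are the standard 802.3 values.

import random
import time

SLOT_TIME = 51.2e-6  # classic Ethernet slot time: 512 bit times at 10 Mbps

class Medium:
    """Toy stand-in for the shared wire; real NICs sense this in hardware."""
    def busy(self):
        return False                  # pretend the wire is idle
    def send_and_detect(self, frame):
        return random.random() < 0.1  # pretend 10% of sends collide
    def send_jam(self):
        pass                          # jamming-signal placeholder

def csma_cd_send(medium, frame, max_attempts=16):
    """Simplified CSMA/CD: listen, send, detect collisions, back off, retry."""
    for attempt in range(1, max_attempts + 1):
        while medium.busy():                      # Step 1: wait for idle
            pass
        collided = medium.send_and_detect(frame)  # Steps 2-3: send and listen
        if not collided:
            return True
        medium.send_jam()                         # Step 4: everyone sees it
        k = min(attempt, 10)                      # Steps 5-6: random backoff
        time.sleep(random.randint(0, 2**k - 1) * SLOT_TIME)
    return False                                  # excessive collisions; give up

print(csma_cd_send(Medium(), b"frame"))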


CSMA/CD does not prevent collisions, but it does ensure that the Ethernet works well even
though collisions may and do occur. However, the CSMA/CD algorithm does create some
performance issues. First, CSMA/CD causes devices to wait until the Ethernet is silent
before sending data. This process helps avoid collisions, but it also means that only one
device can send at any one instant in time. As a result, all the devices connected to the same
hub share the bandwidth available through the hub. The logic of waiting to send until the
LAN is silent is called half duplex. This refers to the fact that a device either sends or
receives at any point in time, but never both at the same time.



The other main feature of CSMA/CD defines what to do when collisions do occur. When a
collision occurs, CSMA/CD logic causes the devices that sent the colliding data frames to
wait a random amount of time, and then try again. This again helps the LAN to function,
but again it impacts performance. During the collision, no useful data makes it across the
LAN. Also, the offending devices have to wait longer before trying to use the LAN.
Additionally, as the load on an Ethernet increases, the statistical chance for collisions
increases as well. In fact, during the years before LAN switches became more affordable
and solved some of these performance problems, the rule of thumb was that an Ethernet's
performance began to degrade when the load began to exceed 30 percent utilization, mainly
as a result of increasing collisions.
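
How long is that random wait? Under the standard 802.3 backoff rules, the window doubles after each successive collision (truncated after 10 doublings). This short sketch prints the window for the first few collisions, using the classic 10-Mbps slot time:

SLOT_TIME_US = 51.2  # one slot time = 512 bit times at 10 Mbps

for n in range(1, 6):
    max_slots = 2 ** min(n, 10) - 1
    print(f"collision {n}: wait 0..{max_slots} slot times "
          f"(up to {max_slots * SLOT_TIME_US:.1f} microseconds)")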

Increasing Available Bandwidth Using Switches
The term collision domain defines the set of devices whose frames could collide. All
devices on a 10BASE2, 10BASE5, or any network using a hub risk collisions between the
frames that they send, so all devices on one of these types of Ethernet networks are in the
same collision domain. For example, all four devices connected to the hub in Figure 3-10
are in the same collision domain. To avoid collisions, and to recover when they occur,
devices in the same collision domain use CSMA/CD.

LAN switches significantly reduce, or even eliminate, the number of collisions on a LAN.
Unlike hubs, switches do not create a single shared bus, forwarding received electrical
signals out all other ports. Instead, switches do the following:

* Switches interpret the bits in the received frame so that they can typically send the
  frame out the one required port, rather than all other ports (a sketch of this decision
  follows the list)
* If a switch needs to forward multiple frames out the same port, the switch buffers the
  frames in memory, sending one at a time, thereby avoiding collisions
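
Here is a minimal Python sketch of that forwarding decision (the MAC table, addresses, and port numbers are made up for illustration; a real switch keeps this table in hardware):

def switch_forward(mac_table, ingress_port, dst_mac, all_ports):
    """Send out the one known port, or flood when the destination is unknown."""
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]   # the one required port
    # Unknown destination: flood out all other ports, hub-style
    return [p for p in all_ports if p != ingress_port]

# PC2's address was learned on port 2, so a frame for PC2 uses port 2 only.
mac_table = {"0200.2222.2222": 2}
print(switch_forward(mac_table, ingress_port=1,
                     dst_mac="0200.2222.2222", all_ports=[1, 2, 3, 4]))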

NOTE
The switch's logic requires that the switch look at the Ethernet header, which is
considered a Layer 2 feature. As a result, switches are considered to operate as a
Layer 2 device, whereas hubs are Layer 1 devices.


For example, Figure 3-11 illustrates how a switch can forward two frames at the same time
while avoiding a collision. In Figure 3-11, both PC1 and PC3 send at the same time. In this
case, PC1 sends a data frame with a destination address of PC2, and PC3 sends a data frame
with a destination address of PC4. The switch looks at each destination Ethernet address
and forwards the frame from PC1 to PC2 at the same instant that it forwards the frame
from PC3 to PC4. Had a hub been used, a collision would have occurred; however, because
the switch did not send the frames out all other ports, the switch prevented a collision.

Buffering also helps prevent collisions. Imagine that PC1 and PC3 both send a frame to PC4
at the same time. The switch, knowing that forwarding both frames to PC4 at the same time
would cause a collision, buffers one frame (in other words, temporarily holds it in memory)
until the first frame has been completely sent to PC4.
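
Here is a Python sketch of that buffering idea, using a plain queue as a stand-in for the switch's output-port memory:

from collections import deque

# Hypothetical output buffer for PC4's port: two frames arrive at once,
# so one is held in memory and they go out one at a time, never colliding.
port_buffer = deque()
port_buffer.append("frame from PC1 to PC4")
port_buffer.append("frame from PC3 to PC4")

while port_buffer:
    frame = port_buffer.popleft()
    print(f"sending {frame!r} out PC4's port")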
These seemingly simple switch features provide significant performance improvements as
compared with using hubs. In particular:
* If only one device is cabled to each port of a switch, no collisions can occur.
* Devices connected to one switch port do not share their bandwidth with devices
  connected to another switch port. Each has its own separate bandwidth, meaning that
  a switch with 100-Mbps ports has 100 Mbps of bandwidth per port.
The second point refers to the concepts behind the terms shared Ethernet and switched
Ethernet. As mentioned earlier in this chapter, shared Ethernet means that the LAN
bandwidth is shared among the devices on the LAN, because the CSMA/CD algorithm forces
them to take turns. The term switched Ethernet refers to the fact that with switches,
bandwidth does not have to be shared, allowing for far greater performance. For example,
a hub with 24 100-Mbps Ethernet devices connected to it allows for a theoretical maximum
of 100 Mbps of bandwidth. However, a switch with 24 100-Mbps Ethernet devices connected
to it supports 100 Mbps for each port, or 2400 Mbps (2.4 Gbps) theoretical maximum
bandwidth.
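
Spelling out the arithmetic from that example:

ports, mbps_per_port = 24, 100

shared_hub_bandwidth = mbps_per_port        # one collision domain, shared by all
switched_bandwidth = ports * mbps_per_port  # separate bandwidth per port

print(f"hub:    {shared_hub_bandwidth} Mbps total")
print(f"switch: {switched_bandwidth} Mbps total "
      f"({switched_bandwidth / 1000} Gbps)")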


Doubling Performance by Using Full-Duplex Ethernet
Any Ethernet network using hubs requires CSMA/CD logic to work properly. However,
CSMA/CD imposes half-duplex logic on each device, meaning that only one device can
send at a time. Because switches can buffer frames in memory, switches can completely
eliminate collisions on switch ports that connect to a single device. As a result, LAN
switches with only one device cabled to each port of the switch allow the use of
full-duplex operation. Full duplex means that an Ethernet card can send and receive
concurrently.

To appreciate why collisions cannot occur, consider Figure 3-12, which shows the
full-duplex circuitry used with a single PC's connection to a LAN switch.


With only the switch and one device connected to each other, collisions cannot occur. When
you implement full duplex, you disable CSMA/CD logic on the devices on both ends of
the cable. By doing so, neither device even thinks about CSMA/CD, and they can go ahead
and send data whenever they want. As a result, the performance of the Ethernet on that cable
has been doubled by allowing simultaneous transmission in both directions.
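
In the same spirit as the earlier bandwidth arithmetic, a quick sketch of what "doubled" means for a single 100-Mbps port:

port_speed_mbps = 100

half_duplex = port_speed_mbps      # send OR receive at any instant
full_duplex = 2 * port_speed_mbps  # send AND receive concurrently

print(f"half duplex: {half_duplex} Mbps effective")
print(f"full duplex: {full_duplex} Mbps effective (100 each way)")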

WAN PROTOCOLS

X.25
Type: Packet Switched
Layer: Data Link and Physical
Characteristics: ITU-T standard (International Telecommunication Union – Telecommunication Standardization Sector). Addresses are expressed as decimal numbers (the X.121 format).

Frame Relay
Type: Packet Switched
Layer: Data Link and Physical
Characteristics: Connection-oriented and similar to X.25, with less overhead, but does not provide error correction. More cost-effective than dedicated point-to-point links. Uses mostly Permanent Virtual Circuits (PVC) but also Switched Virtual Circuits (SVC).

HDLC
Type: Dedicated Connection, Bit Oriented
Layer: Data Link
Characteristics: Peer-to-peer protocol. Standard HDLC was not intended to encapsulate multiple Network layer protocols across the same link, which prompted vendors to create their own proprietary HDLC versions. No authentication is provided by HDLC. Default encapsulation on Cisco serial links, which have a default bandwidth of 1.544 Mbps (T1).

SDLC
Type: Bit Oriented
Layer: Data Link
Characteristics: Full-duplex, non-peer-to-peer, bit-oriented serial protocol created by IBM.

ISDN
Type: Circuit Switched
Layer: Physical, Data Link, and Network; typically used with PPP
Characteristics: Set of digital services that transmit voice and data over existing phone lines. The Basic Rate Interface (BRI) consists of two B channels at 64 kbps and one D channel at 16 kbps. The PRI (Primary Rate Interface) T1 is 23 x 64-kbps B channels and one 64-kbps D channel, and the PRI E1 is 30 x 64-kbps B channels and one 64-kbps D channel.

ATM (Asynchronous Transfer Mode)
Type: Cell Switched
Characteristics: Uses fixed 53-byte cells that allow fast hardware-based switching. LANE (LAN Emulation) was created to hide ATM and make it look like 802.3 Ethernet.

PPP
Type: Dedicated Connection
Layer: Data Link
Characteristics: Can be used to create point-to-point links between different vendors' equipment. Allows authentication and multilink connections, and can run over asynchronous (dial-up) and synchronous (ISDN) links. Created to replace SLIP (Serial Line Internet Protocol), which could only run IP at the Network layer but was also a dedicated-connection protocol. The Stacker and Predictor compression methods are supported.

LAPB
Layer: Data Link
Characteristics: Connection-oriented. Carries a tremendous amount of overhead because it was designed for error-prone links. Defined by X.25 at the Data Link layer.
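
As a quick check of the ISDN figures in the table above, the channel counts multiply out as follows:

B, D_BRI, D_PRI = 64, 16, 64  # channel speeds in kbps

bri = 2 * B + D_BRI      # 2 B channels + one 16-kbps D channel
pri_t1 = 23 * B + D_PRI  # 23 B channels + one 64-kbps D channel
pri_e1 = 30 * B + D_PRI  # 30 B channels + one 64-kbps D channel

print(f"BRI:    {bri} kbps")     # 144 kbps
print(f"PRI T1: {pri_t1} kbps")  # 1536 kbps
print(f"PRI E1: {pri_e1} kbps")  # 1984 kbps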