NOTE: CentOS Enterprise Linux is built from the Red Hat Enterprise Linux source code. Other than logo and name changes CentOS Enterprise Linux is compatible with the equivalent Red Hat version. This document applies equally to both Red Hat and CentOS Enterprise Linux.
Chapter 3. Bandwidth and Processing Power
Of the two resources discussed in this chapter, one (bandwidth)
is often hard for the new system administrator to understand, while
the other (processing power) is usually a much easier concept to grasp.
Additionally, it may seem that these two resources are not that
closely related, so why group them together?
The reason for addressing both resources together is that both
are based on hardware that ties directly into a computer's ability
to move and process data. As such, the two are closely
interrelated.
At its most basic, bandwidth is the capacity for data transfer
— in other words, how much data can be moved from one point
to another in a given amount of time. Having point-to-point data
communication implies two things:
A set of electrical conductors used to make low-level communication possible
A protocol to facilitate the efficient and reliable communication of data
There are two types of system components that meet these
requirements: buses and datapaths.
The following sections explore each in more detail.
As stated above, buses enable point-to-point communication and
use some sort of protocol to ensure that all communication takes
place in a controlled manner. However, buses have other
distinguishing features:
Standardized electrical characteristics (such as the number of
conductors, voltage levels, signaling speeds, etc.)
Standardized mechanical characteristics (such as the type of
connector, card size, physical layout, etc.)
Standardized protocol
The word "standardized" is important because buses are the
primary way in which different system components are connected
together.
In many cases, buses allow the interconnection of hardware made
by multiple manufacturers; without standardization, this would not
be possible. However, even in situations where a bus is proprietary
to one manufacturer, standardization is important because it allows
that manufacturer to more easily implement different components by
using a common interface — the bus itself.
No matter where in a computer system you look, there are buses.
Here are a few of the more common ones:
Mass storage buses (ATA and SCSI)
Networks (Ethernet and Token Ring)
Memory buses (PC133 and Rambus®)
Expansion buses (PCI, ISA, USB)
Datapaths can be harder to identify but, like buses, they are
everywhere. Also like buses, datapaths enable point-to-point
communication. However, unlike buses, datapaths:
Use a simpler protocol (if any)
Have little (if any) mechanical standardization
The reason for these differences is that datapaths are normally
internal to some system component and are not used to facilitate
the ad-hoc interconnection of different components. As such,
datapaths are highly optimized for a particular situation, where
speed and low cost are preferred over slower and more expensive
general-purpose flexibility.
Here are some typical datapaths:
CPU to on-chip cache datapath
Graphics processor to video memory datapath
There are two ways in which bandwidth-related problems may occur
(for either buses or datapaths):
The bus or datapath may represent a shared resource. In this
situation, high levels of contention for the bus reduce the
effective bandwidth available for all devices on the bus.
A SCSI bus with several highly-active disk drives would be a
good example of this. The highly-active disk drives saturate the
SCSI bus, leaving little bandwidth available for any other device
on the same bus. The end result is that all I/O to any of the
devices on this bus is slow, even if each device on the bus is not
overly active.
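As a rough back-of-the-envelope illustration (the numbers below are hypothetical, not measurements from any real system), shared-bus contention can be modeled as the bus's usable bandwidth being divided among the devices actively contending for it:

```python
# Hypothetical illustration of shared-bus contention: when several
# active devices share one bus, each device effectively gets only a
# fraction of the bus's total bandwidth.

def effective_bandwidth(bus_mb_s: float, active_devices: int) -> float:
    """Naive model: the bus's bandwidth is split evenly among the
    devices currently contending for it."""
    if active_devices < 1:
        raise ValueError("need at least one active device")
    return bus_mb_s / active_devices

# Example: an Ultra2 Wide SCSI bus (80 MB/s theoretical peak) with
# one, two, or four highly-active disk drives attached.
bus_speed = 80.0  # MB/s, theoretical peak
for drives in (1, 2, 4):
    per_drive = effective_bandwidth(bus_speed, drives)
    print(f"{drives} active drive(s): {per_drive:.1f} MB/s each")
```

Real buses lose additional bandwidth to arbitration and protocol overhead, so this even-split model is optimistic; it nonetheless shows why every device on a saturated bus slows down.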
The bus or datapath may be a dedicated resource with a fixed
number of devices attached to it. In this case, the electrical
characteristics of the bus (and to some extent the nature of the
protocol being used) limit the available bandwidth. This is usually
more the case with datapaths than with buses. This is one reason
why graphics adapters tend to perform more slowly when operating at
higher resolutions and/or color depths — for every screen
refresh, there is more data that must be passed along the datapath
connecting video memory and the graphics processor.
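To put numbers on this (a simplified estimate that ignores compression, blanking intervals, and protocol overhead), the data moved per refresh is simply resolution times bytes per pixel, multiplied by the refresh rate for the sustained figure:

```python
# Simplified estimate (uncompressed pixels, no overhead) of the data
# that must cross the video-memory datapath for each screen refresh,
# and per second at a given refresh rate.

def refresh_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Bytes transferred for one full-screen refresh."""
    return width * height * (bits_per_pixel // 8)

def datapath_mb_per_second(width, height, bits_per_pixel, refresh_hz):
    """Megabytes per second required to sustain the refresh rate."""
    return refresh_bytes(width, height, bits_per_pixel) * refresh_hz / 1e6

# 1024x768 at 24-bit color, refreshed 75 times per second:
per_refresh = refresh_bytes(1024, 768, 24)
per_second = datapath_mb_per_second(1024, 768, 24, 75)
print(f"{per_refresh} bytes per refresh, {per_second:.0f} MB/s sustained")
```

Doubling the resolution or color depth multiplies these figures accordingly, which is why higher display settings press harder on the datapath.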
Fortunately, bandwidth-related problems can be addressed. In
fact, there are several approaches you can take:
Spread the load
Reduce the load
Increase the capacity
The following sections explore each approach in more detail.
The first approach is to more evenly distribute the bus
activity. In other words, if one bus is overloaded and another is
idle, perhaps the situation would be improved by moving some of the
load to the idle bus.
As a system administrator, this is the first approach you should
consider, as often there are additional buses already present in
your system. For example, most PCs include at least two ATA
channels (which is just another name for a
bus). If you have two ATA disk drives and two ATA channels, why
should both drives be on the same channel?
Even if your system configuration does not include additional
buses, spreading the load might still be a reasonable approach. The
hardware needed to do so would likely cost less than
replacing an existing bus with higher-capacity hardware.
At first glance, reducing the load and spreading the load appear
to be different sides of the same coin. After all, when one spreads
the load, it acts to reduce the load (at least on the overloaded
bus).
While this viewpoint is correct, it is not the same as reducing
the load globally. The key here is to
determine if there is some aspect of the system load that is
causing this particular bus to be overloaded. For example, is a
network heavily loaded due to activities that are unnecessary?
Perhaps a small temporary file is the recipient of heavy read/write
I/O. If that temporary file resides on a networked file server, a
great deal of network traffic could be eliminated by working with
the file locally.
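As a sketch of this idea in Python (the /tmp path is an assumption here; the point is any local, non-networked filesystem), heavy scratch I/O can be kept off the network simply by creating the temporary file on a local disk:

```python
# Sketch: keep heavy read/write scratch data on a local disk instead
# of a network-mounted directory. The /tmp path is illustrative and
# assumed to be a local filesystem.
import tempfile

local_scratch = "/tmp"

# If the default temporary directory lives on a networked file
# server, every write below would cross the network. Passing dir=
# keeps all of this I/O local.
with tempfile.NamedTemporaryFile(dir=local_scratch, mode="w+") as scratch:
    for i in range(1000):
        scratch.write(f"intermediate result {i}\n")  # local I/O only
    scratch.seek(0)
    first_line = scratch.readline()
    print(first_line.strip())
```

The same principle applies outside any one program: pointing an application's temporary or spool directories at local storage eliminates network traffic without touching the network hardware at all.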
The obvious solution to insufficient bandwidth is to increase it
somehow. However, this is usually an expensive proposition.
Consider, for example, a SCSI controller and its overloaded bus. To
increase its bandwidth, the SCSI controller (and likely all devices
attached to it) would need to be replaced with faster hardware. If
the SCSI controller is a separate card, this would be a relatively
straightforward process, but if the SCSI controller is part of the
system's motherboard, it becomes much more difficult to justify the
economics of such a change.
All system administrators should be aware of bandwidth, and of how
system configuration and usage impact available bandwidth.
Unfortunately, it is not always apparent what is a
bandwidth-related problem and what is not. Sometimes, the problem
is not the bus itself, but one of the components attached to the
bus.
For example, consider a SCSI adapter that is connected to a PCI
bus. If there are performance problems with SCSI disk I/O, it might
be the result of a poorly-performing SCSI adapter, even though the
SCSI and PCI buses themselves are nowhere near their bandwidth
limits.