One of the most frustrating trouble reports you can get is that the network is slow. Particularly when you're sitting at your desk and using the network just fine, it's hard to believe reports like this and even harder to do something about it: you can't fix what doesn't seem broken! Still, you've got a user with a bona fide complaint, so your job is to track down this seemingly invisible problem.
Tracking down and vanquishing network slowdowns can be tough, but rewarding. The key to finding network slowdowns is to divide the problem into smaller, manageable pieces; that is, once you identify everything that's involved in a user's connection, you can test each piece individually and pinpoint which piece is causing the slowdown. The reason one user can sit on the same network as other users and run fine while they run slow as molasses is that network sessions can be complex beasts. Simplifying things enables you to troubleshoot quickly and effectively.
Here are the two kinds of slowness reports:

- Something newly deployed (an application, a server, or a network link) runs slowly from day one.
- An existing setup that used to run at an acceptable speed has become slow for one or more users.
In this hour, we'll concentrate on the second type, because the first type is typically pretty easy. If you deploy something new on a known good network and it runs like a pig on roller skates, it's fairly obvious where the trouble lies. What's more, if you get good at troubleshooting the second type of slowness report, you'll be able to specifically troubleshoot the first type as well, rather than just pulling the plug on things.
When folks report network slowness, they're typically reporting application slowness. Because the application is the fruit of the network tree, let's take a systematic look at what the limiting factors are in any network:
The first and most obvious factor in any network is the speed of the shared network that an application lives on. This is referred to as the available bandwidth and is sort of like the speed limit on a highway.
Raw bandwidth doesn't necessarily mean anything. Just as you don't always drive at the speed limit of the highway, network applications don't always take advantage of the bandwidth available. Your speed on the highway depends on your driving skills, your driving habits (who cares if you drive right at the limit if you stop every five minutes for a bathroom break?), and so on. The speed that your network application actually drives at is referred to as its throughput. You can arrive at an application session's throughput by measuring how many bytes are in the packets that compose the network session and dividing this number by the number of seconds the session takes to complete. Here's the formula:
Application throughput per second = total application data / elapsed number of seconds
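The formula drops straight into a few lines of code. Here's a minimal sketch in Python; the byte count and elapsed time are invented example values, not measurements from a real capture:

```python
def throughput_bps(total_bytes, elapsed_seconds):
    """Average application throughput in bits per second:
    total application data divided by elapsed session time."""
    return (total_bytes * 8) / elapsed_seconds

# Invented example: a session moved 1,200,000 bytes of application
# data and took 4 seconds from first packet to last.
bps = throughput_bps(1_200_000, 4.0)
print(f"{bps / 1_000_000:.1f} Mbps")  # prints "2.4 Mbps"
```

A session that only achieves 2.4Mbps on a 100Mbps Fast Ethernet segment is a good illustration of the gap between available bandwidth and actual throughput.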
Remember that your total speed on a journey takes into account all the roads you must drive on. For example, if you spend half your journey on a highway and the rest on a dirt road with potholes every few yards, your total driving time might be two or three times what it would be if you traveled only on the highway. Accordingly, when figuring out what's affecting application throughput, you'll want to consider each hop that the application packet needs to traverse.
Let's go over commonly available network links and their speeds:

- Fast Ethernet: 100Mbps
- T1 leased line: 1.544Mbps
- 56Kbps line
- 9.6Kbps line
As you can imagine, your application packet might make half its journey at 100 miles per hour on a Fast Ethernet connection but then would slow down to approximately 1 mile per hour to traverse a T1 leased line. If your packet travels across a 9.6Kbps line, this would be the rough equivalent of traveling at a hundredth of a mile per hour. A 56Kbps line does somewhat better, at approximately a twentieth of a mile per hour. Nutty!
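You can check the analogy's arithmetic by scaling every link against Fast Ethernet, so that 100Mbps maps to 100 miles per hour. A quick sketch (the miles-per-hour mapping is just the highway analogy, not a real unit conversion):

```python
# Scale each link's bit rate so that Fast Ethernet (100Mbps) maps
# to 100 miles per hour, reproducing the highway analogy.
FAST_ETHERNET_BPS = 100_000_000

links_bps = {
    "Fast Ethernet": 100_000_000,
    "T1 leased line": 1_544_000,
    "56Kbps line": 56_000,
    "9.6Kbps line": 9_600,
}

for name, bps in links_bps.items():
    mph = bps / FAST_ETHERNET_BPS * 100
    print(f"{name}: ~{mph:g} mph")
```

The T1 comes out at about 1.5 mph, the 56Kbps line at roughly a twentieth of a mile per hour, and the 9.6Kbps line at about a hundredth, matching the figures above.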
Now consider that each road used by the application packet also has a tollbooth at each end of it. I'm speaking, of course, of the router or switch that connects the two network segments. This delay is referred to as the latency of a device. Latency is usually pretty negligible, but it adds up on every device that a packet must pass through. Some wire-speed routers and switches don't add appreciably to the delay (sort of like the new electronic toll passes), but others will. You'll have to test to find out.
I was in an awesome class a couple of years ago in which the instructor taught us a quick-and-easy way to measure the latency of any network tollbooth. Set up two network analyzers, one on each side of a device, to capture traffic for station A and station B. Then have station A ping station B. Record the absolute time the analyzer reports the packet leaving the first station (A1) and then record the time of it arriving at the second station (B1). Next, record the absolute time that the reply packet leaves the second station (B2), and then the time it arrives back at the first station (A2).
You then perform the following calculation:
Latency = ((A2 - A1) - (B2 - B1)) / 2

Instant latency analysis. Yahoo!
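As a sketch, the instructor's formula translates directly to code. The four timestamps would come from the two analyzers; the values below are invented for illustration:

```python
def device_latency(a1, b1, b2, a2):
    """Average one-way latency (seconds) through the device under test.

    a1: ping leaves station A (A-side analyzer)
    b1: ping arrives at station B (B-side analyzer)
    b2: reply leaves station B (B-side analyzer)
    a2: reply arrives back at station A (A-side analyzer)
    """
    return ((a2 - a1) - (b2 - b1)) / 2

# Invented timestamps: the device adds 100 microseconds each way,
# and station B takes 1 millisecond to turn the ping around.
lat = device_latency(0.0, 0.0001, 0.0011, 0.0012)
print(f"{lat * 1e6:.0f} microseconds")  # prints "100 microseconds"
```

Subtracting (B2 - B1) strips out station B's own turnaround time, so only the two trips through the device remain; dividing by 2 averages them into a one-way figure.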
The idea behind latency analysis is to measure the latency before loading and then to see whether it moves after throwing lots of traffic at the device. Therefore, if I measure 50 to 100 microseconds (.00005 to .0001 seconds) of latency in the evening, but during the day measure 300 microseconds or even 800 microseconds, my device is definitely experiencing an overload, and it should be considered a speed liability.
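That before-and-after comparison can be captured in a tiny helper. The 3x growth factor below is my own illustrative threshold, not something from this hour; the example numbers mirror the baseline and loaded figures above:

```python
def looks_overloaded(baseline_s, loaded_s, factor=3.0):
    """Flag a device whose latency under load has grown well beyond
    its idle baseline. The 3x factor is an illustrative rule of
    thumb; tune it against your own network's history."""
    return loaded_s > baseline_s * factor

# 100 microseconds in the evening versus 800 microseconds at midday.
print(looks_overloaded(0.0001, 0.0008))   # prints "True"
print(looks_overloaded(0.0001, 0.00012))  # prints "False"
```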
Here's the bottom line: The route that's used by an application is a major consideration when figuring out how fast the application can run.