by Mark Sportack
The ever-increasing amount of processing power at the desktop has spawned wave after wave of technological innovation, each seeking to exploit that power. One of the more compelling but elusive challenges of desktop computing has been multimedia processing and communication. Ultimately, all forms of information, regardless of their current media, will be digitized and transmitted over a single network infrastructure. Unfortunately, that day is a long way off. In the meantime, numerous challenges face anyone seeking to integrate other media types onto a network designed specifically for data transport.
Multimedia applications impose very different performance requirements upon the network than do the traditional application types that networks were designed to support. Besides being potentially bandwidth-intensive, multimedia applications are also usually time-sensitive. Traditional networks and protocols are designed to deliver packets of guaranteed integrity, rather than to guarantee a timely delivery of packets. Significant progress toward timely delivery is now being made in communications technologies and protocols.
This chapter explores the various technologies that are driving multimedia communication, and identifies network performance issues and technologies that are being developed to address those issues.
In the past, separate infrastructures were required to support different types of communication. Most workstations are still equipped with both a telephone and a personal computer (PC) to support voice and data communication. The telephone is the user's front end to the Private Branch Exchange (PBX) or other telephony switch, while the PC is an intelligent device that uses LANs and WANs to communicate with other devices.
Technological innovations have slowly, but steadily, eroded the long-standing distinctions between previously distinct communication technologies. Advances in component technologies have enabled the development of increasingly sophisticated and specialized applications. Multimedia communication, for example, owes its existence to advances in microprocessors and magnetic storage media. Microprocessors are continuing to increase in power and speed while steadily becoming more affordable. Similar advances have been made in the various types of magnetic storage devices.
These advancements encouraged creative attempts to exploit their potential. One such attempt was the digitization of data that traditionally existed in other forms. Today, numerous data types are converging into a common format: digital encoding. Integrating multiple application types that use digitally encoded data into a single host platform, that is, a computer, has become popularly known as "multimedia computing." Figure 27.1 illustrates one form of multimedia computing.
FIGURE 27.1. Integrating different application types into a common platform, that is, the PC, constitutes multimedia computing.
Some of the more common multimedia applications include
Although some of these may seem so closely related as to be virtually identical, they differ in one critical aspect: the network performance that they demand.
Multimedia communication can be equally difficult to define. For the purposes of this book, multimedia communication means the integration of multiple data types into a common bit stream. This is illustrated in Figure 27.2.
A subtle but important point is that multimedia computing and multimedia communication are not completely synonymous. For example, a LAN can support client machines that are used exclusively for traditional forms of data communication and also provide networking for machines that are dedicated to providing video or audio service for external clients. Consequently, the LAN can be simultaneously transporting multiple media types with extremely different performance requirements, even though no true "multimedia" computers are directly connected to it. This is illustrated in Figure 27.3.
NOTE: A common misconception is that multimedia computing is live, interactive videoconferencing. This is, perhaps, the most compelling and resource intensive of the multimedia applications. However, it isn't the only one. Many other more subtle forms of multimedia applications will diffuse throughout the client population first. Consequently, you will need to consider ways to integrate support for multimedia communications long before the network infrastructure is ready to support the "killer" videoconferencing application.
FIGURE 27.2. Integrating different data types into a common network, such as the LAN, constitutes multimedia communication.
FIGURE 27.3. A network can be required to support multiple, mixed media types without having to support any "multimedia" computers.
Trade publications have fixated on multimedia computing to the almost complete exclusion of multimedia communication. This has resulted in an inappropriate focus on the integration of multiple media types at the application layer, without regard for preparing networks to accommodate multiple media and application types.
At the physical layer of the OSI Reference Model, all forms of communication, regardless of whether they are ASCII-encoded data or a live video conference, are nothing more than a stream of 1s and 0s. It is only at the application layer that the distinctions are obvious.
Given that the network moves packets or frames filled with 1s and 0s, transporting a stream of digitally encoded speech or video should be easy. Unfortunately, this is not the case. Network technologies have lagged behind the pace of advances in computing technologies. This is evident when one compares the network's ability to perform against the performance requirements of contemporary multimedia applications.
At the risk of oversimplifying, multimedia applications typically impose two basic types of performance requirements: latency and bandwidth.
Latency is defined as the minimum amount of time required for a packet to clear any given network device, such as a router, a hub, a switch, and so on. Time-sensitive applications like live voice or video conferencing require low-latency connections throughout the entire network. A low-latency connection, ideally, provides a low and consistent delay between the transmission and receipt of a packet. Low but inconsistent delays can result in jittery images, choppy sounds, or otherwise degraded performance of networked multimedia applications.
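The effect of inconsistent delay can be quantified. As a minimal sketch, the per-packet transit delays and the resulting jitter (the variation in delay between consecutive packets) can be computed from send and receive timestamps; the timestamps below are hypothetical:

```python
# Sketch: quantifying latency and jitter from packet timestamps (milliseconds).

def delays(sent, received):
    """Per-packet transit delay."""
    return [r - s for s, r in zip(sent, received)]

def jitter(transit_delays):
    """Mean absolute variation between consecutive delays."""
    if len(transit_delays) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(transit_delays, transit_delays[1:])]
    return sum(diffs) / len(diffs)

sent     = [0, 20, 40, 60, 80]    # packets sent every 20 ms
received = [12, 35, 51, 79, 90]   # arrival times on a busy LAN

d = delays(sent, received)        # [12, 15, 11, 19, 10]
print("delays:", d)
print("jitter:", jitter(d))       # 6.0 ms of jitter -> audibly choppy playback
```

A constant 19 ms delay would be harmless; it is the 6 ms of variation that produces the jittery images and choppy sounds described above.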
The cumulative latency of a network is affected in many ways. The first, and most obvious, is the LAN's access method. LANs that feature a contention-based access method (for example, all "flavors" of Ethernet) are likely to have a higher, and less consistent, latency due to the vicissitudes of competing for access to the shared media. LANs that use a deterministic access method (for example, Token Ring and FDDI) have a lower latency, but their delays remain inconsistent and unpredictable.
Deterministic networks offer the ability to improve performance by decreasing the number of nodes connected to each segment or ring. This statistically decreases the amount of time before any given node receives permission to transmit, thereby decreasing latency somewhat. Unfortunately, this cannot provide a highly consistent delay, because the frame sizes used by these protocols are highly variable.
A good way to improve the latency of both contention- and token-based networks is to implement them using switching hubs. Switches are very fast forwarding devices that function completely at Layers 1 and 2 of the OSI Reference Model. A LAN based on switching hubs enjoys a lower innate latency than one based on repeating hubs simply because the switches forward packets faster than conventional repeating hubs.
NOTE: The use of a full-duplex, switched version of any of the Ethernets will also help reduce latency by eliminating the effects of contention for media access. Full-duplex connections permit the simultaneous transmission and reception of data. Traditional Ethernet is only half-duplex: A device can either transmit or receive, but not both simultaneously. In a full-duplex switched Ethernet, there are only two ports on the segment: the switched hub port and the device it connects to. Each has its own dedicated wire path for transmission and reception. Consequently, there is no contention for access to the wire, and latency is effectively reduced.
Each port on a switch represents a separate segment or ring. That is, an 802.3 Ethernet switching hub provides a number of ports, each acting as a standalone collision domain, that are all members of the same broadcast domain. Devices connected to a dedicated switched port must still contend for access to the media, but the contention is limited to that device and the hub port it is connected to. Switching is rapidly displacing traditional broadcast Ethernet hubs, and is even being implemented in Token Ring and FDDI versions.
Another aspect of a network's cumulative latency is whether or not routing is used. Routers, an integral part of virtually all WANs, operate at Layer 3 of the OSI Reference Model and require software-driven table lookups to forward packets. This means that they cannot forward packets as quickly as a switch. Thus, they directly increase the cumulative latency of a network.
Latency is also directly affected by a network's frame structure. Many of today's more common and mature network protocols also use flexible frame and packet sizes. Flexible data fields excel at transporting traditional data types by minimizing the packet-to-payload ratio. Unfortunately, this has the exact opposite impact on time-sensitive applications. Forcing such applications to intermingle in the bit stream with indeterminate packet sizes adversely affects time-sensitive applications by introducing inconsistency to the network's cumulative latency.
Given that conventional networks are designed explicitly to support the transport of data, the time-sensitive packets of multimedia applications require non-standard handling. Networks must be able to
A mechanism that can provide networks with the ability to discriminate between packets, based on their performance requirements, is known as Quality of Service (QoS). QoS has two distinct facets: network and application. Valid application QoS parameters include image size and quality, frame rate (if the application is video), start-up delays, reliability, and so on. The network, however, has a very different set of QoS parameters. These include: bandwidth, loss rate, and delay. Users are not allowed to specify network QoS parameters. A QoS-capable protocol, such as RSVP, provides the translation between application and network parameters.
Obviously, the relationship between these sets of parameters is very complex. In theory, QoS will enable the different application types to receive the special handling that they require. Applications that require guaranteed integrity of packet contents can receive that, while others that need low delay and/or response times can tell the network about those requirements through QoS tags, too.
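For illustration, most operating systems already let an application hint at its handling requirements by marking outbound packets in the IP header's TOS byte with a Differentiated Services code point. This is only one small piece of the QoS puzzle, and whether any network device honors the marking depends entirely on network policy, but it shows the idea of a per-packet QoS tag. A sketch, assuming a POSIX-style socket API; the Expedited Forwarding code point shown is a conventional choice for low-delay traffic:

```python
import socket

def dscp_to_tos(dscp):
    """The DSCP occupies the upper six bits of the IP TOS byte."""
    return dscp << 2

EF = 46  # Expedited Forwarding: a DSCP commonly used for low-delay traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Datagrams sent on this socket will now carry TOS 0xB8 (DSCP 46).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
sock.close()
```

The marking costs nothing to apply; the hard part, as the following paragraphs explain, is getting every device along the path to agree on what the tag means.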
In order for QoS to work, both the network and the networked multimedia applications must conform to a common standard for QoS. Unfortunately, this uniformity has yet to develop. QoS must be supported throughout the full spectrum of the networked computing infrastructure. It can't work unless all the network hardware, client platform, and applications support a common set of QoS tags.
The good news is that almost all of the major players in these technology areas are committed to implementing RSVP, a networking protocol with native QoS support. The bad news is that, until a critical mass is achieved, QoS is useless.
Conventional networks are artifacts of the recent past. Their access methods, route-calculation and packet-forwarding techniques, and even their frame structures make them well suited to guaranteeing the integrity and sequence of delivered packets. Conversely, they are ill-prepared for guaranteeing the timeliness with which those packets will be delivered. Rectifying these deficiencies, with respect to the low-latency demands of time-sensitive applications, is not a trivial task. The use of QoS is just one of the ways that existing networks can be retrofitted to provide some measure of support for time-sensitive applications.
In addition to the timeliness of delivery, there is another obstacle that must be overcome to transport multimedia communication successfully: the sheer volume of data that multimedia applications generate. Applications that are not time-sensitive, like high-density graphics or non-streaming audio and/or video transmissions, are better suited to transport over conventional networks. Their only performance requirements are the guaranteed integrity of packet contents, and re-sequencing upon arrival. However, they can be extremely bandwidth intensive.
Perhaps the best way to increase the amount of usable bandwidth is to conserve the consumption of existing bandwidth. There are numerous techniques for conserving bandwidth, most of which are automatically built in to multimedia applications software.
Compression is an invaluable tool for conserving bandwidth. Compression algorithms vary, based upon application and data type. For example, video conferencing software can compress data streams by transmitting only the pixels that have changed since the previously transmitted frame (in other words, motion). Other forms of compression may be based upon character strings or repeated patterns.
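The frame-differencing idea can be sketched in a few lines. The frames here are flat lists of pixel values; a real video codec works on pixel blocks and motion vectors, but the principle, transmitting only what changed, is the same:

```python
def frame_delta(prev, curr):
    """Encode only the pixels that changed since the previous frame."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous one plus the delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [0, 0, 5, 5, 9, 9]
curr = [0, 0, 5, 7, 9, 2]        # only two pixels moved between frames

delta = frame_delta(prev, curr)  # [(3, 7), (5, 2)] -- far smaller than the frame
assert apply_delta(prev, delta) == curr
```

When most of the picture is static, as in a typical "talking head" conference, the delta is a small fraction of the full frame, which is precisely where the bandwidth savings come from.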
Non-interactive transmissions may also be effectively stored and forwarded, thereby conserving bandwidth during peak demand periods.
Regardless of which technique is used, conservation makes better use of the existing, available bandwidth. It may even forestall the need to reinvest in network hardware.
The other way to increase the bandwidth available for multimedia applications is to increase the speed of the network. Fast Ethernet, Gigabit Ethernet, ATM, FDDI, Fibre Channel, and so on, may all be used to substantially increase the available bandwidth on a LAN. Upgrading an existing network to these technologies may require a substantial investment in station wiring, network interface cards, hubs, and so on, and involve a considerable amount of planning and work to implement.
Often referred to as a "fork lift upgrade," very little, if any, of the previous network's components are retained. Such upgrades are expensive and non-trivial undertakings. They should be considered a last resort for increasing bandwidth.
More modest upgrades can be effective at increasing the supply of bandwidth. A typical, incremental, upgrade might be to implement a high-speed LAN backbone using ATM, FDDI, or Fast Ethernet, and then selectively introduce a switched variant of the existing LAN technology as a segmentation device.
FIGURE 27.4. Improving LAN performance with Switching.
Segmentation increases usable bandwidth without increasing a network's transmission speed by creating multiple collision domains or logical rings within a common broadcast domain. This type of upgrade allows the retention of the station wiring and network interface cards.
Certain applications, like video conferencing, simultaneously require low latency and high levels of throughput to operate successfully. Unless a network is intentionally and severely over-engineered, adding such applications to an existing network may tax the network's abilities, and reduce overall performance for all the applications that rely upon the network for transport.
Given the various contributors to a network's cumulative latency, the surest way to provide high bandwidth and low latency is to select network technologies that are specifically designed for this tandem purpose. Some specific examples include ATM and isochronous Ethernet.
Bolstering the available LAN bandwidth is not without risks. Even if a suitable LAN technology can be identified, and the embedded base of premises wiring is adequate, a successful LAN renovation may only create a greater mismatch between the LAN and the WAN. WANs typically are constructed with routers (antithetical to low-latency networking) and transmission facilities that are at least an order of magnitude "smaller" than the LANs they interconnect. Larger transmission facilities are available, for example, DS-3 and fiber-based MANs, but their cost may be prohibitive.
Multimedia applications require the lowest, and most consistent, cumulative latency possible. Upgrading or renovating LANs, without considering the WANs that interconnect them, may well be an expensive exercise in futility.
One way to avoid this trap is to use protocols that can reserve bandwidth. RSVP, for example, is an emerging network protocol that can reserve the amount of bandwidth that will be needed by establishing a temporary but dedicated virtual circuit between the source and destination machines.
The obvious danger inherent in bandwidth reservation schemes is that once bandwidth is reserved by an application, it is unavailable to other machines and their applications. A WAN composed of T-1s will quickly exhaust its bandwidth if users start reserving 128Kbps virtual circuits for their videoconferencing sessions.
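The arithmetic is sobering. A sketch, assuming a T-1's usable payload of 24 channels of 64Kbps each (1,536Kbps):

```python
T1_PAYLOAD_KBPS = 24 * 64  # 1,536 Kbps of usable payload on a T-1
CIRCUIT_KBPS    = 128      # one reserved videoconferencing virtual circuit

sessions = T1_PAYLOAD_KBPS // CIRCUIT_KBPS
print(sessions)            # 12 -- a dozen sessions saturate the entire link
```

Twelve reserved sessions leave nothing for the conventional data traffic the WAN was built to carry in the first place.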
Multimedia communication imposes a combination of performance requirements that are beyond the abilities of many of today's mature LAN and WAN technologies and protocols. The difficulty is in simultaneously satisfying the combination of performance parameters using a platform that is more legacy- than future-oriented.
New internetworking protocols are emerging that may enable existing platforms to more effectively transport mixed media types, but they are not panaceas. They are designed to either identify specific packets as having priority over others, or provide a more or less guaranteed level of service by reserving bandwidth between the endpoints engaged in multimedia communication. The fundamental deficiencies remain unresolved.
The key to unleashing the power of high-performance networking by supporting multimedia communication lies in understanding the different types of multimedia communication, as well as their specific performance requirements. This knowledge provides a context for evaluating the potential of LAN, WAN, and application technologies to support multimedia communication.
Multimedia communication includes some applications that generate traffic that is merely a quantitative challenge to transport. This type of traffic can include everything from low-resolution, cartoon-like images up to full-color, high-resolution photographs, as well as recorded streams of audio and/or video signals. High-density file transfers include three categories: graphics, audio, and video transfers.
They share common network performance requirements: guaranteed integrity of packetized data upon delivery, without regard for the sequence or timing of that delivery. In fact, failure to retransmit damaged or dropped packets will likely result in noticeable degradation of files. These files are no different than any conventional data file when being transferred. They require the transport protocol to provide error detection and correction, as well as resequencing of the received packets. Once this is accomplished, the transport protocol hands the packets to the appropriate application for storage.
Graphics files vary in size based upon the compression algorithm used (that is, the file's format), as well as their physical size and their pixel and color density. Graphics have become a readily accepted part of computing since the advent of graphical user interfaces. This integration has been so successful that most users do not consider graphics to be "multimedia." Further evidence for this assertion is that this form of transmission is easily accommodated with conventional network technologies. Graphics file transmissions do not require timeliness of delivery; damaged packets can be retransmitted and resequenced without adversely affecting the application.
One possible exception to this could be the World Wide Web (WWW). Web pages are a notorious example of the quantitative challenge of adding a new, bandwidth-intensive, multimedia application to an existing network.
This type of traffic is likely to contain in-line graphics embedded in text files, and users are likely to be waiting for the graphics to be delivered. Large, complex graphics and/or animation may require multiple retransmissions before being successfully reassembled by the user's browser. This may take more time than the user is willing to wait. Though not technically a time-sensitive application, this example demonstrates the difficulty of constructing categories to define a spectrum of uses. Fortunately, this type of multimedia communication is readily accommodated by carefully selecting graphics formats for their effectiveness in compressing the file.
Prerecorded audio files can be encoded in several different formats and contain speech, music, sounds, and so on. A transferred file can be stored either on disk or in memory. Once received in its entirety, the file can be played back.
Audio transfers, by virtue of having been prerecorded, can effectively utilize transmission error detection and correction mechanisms. Temporarily storing them to disk or memory upon receipt provides the user with a more error-free file and smoother sound quality than is possible if the file were played back immediately upon receipt.
This temporary storage provides the time needed for damaged or dropped packets to be identified, and re-sent. This implies that this method can create a greater volume of overall network traffic than a streaming audio transmission would.
Recorded video files can also be encoded in a variety of formats. They may or may not contain supplemental audio tracks. Those with accompanying audio will be marginally larger than a similarly sized and same duration video-only file, but they require synchronization of audio and video. Any errors in the received file will be more obvious due to the temporary disruption in the continuity between image and sound.
Prerecorded video files, like their audio-only counterparts, can take advantage of transmission error detection and correction mechanisms. Storing them to disk (memory is probably not a viable storage option, given the likely sizes of even "small" video files) upon receipt provides the user with a more accurate replication of the original file than would have been possible if the file were being viewed immediately.
Temporarily storing, or buffering, the file provides the time needed for damaged or dropped packets to be re-sent and resequenced.
Audio communication can take three distinct forms, each with a slightly different set of network performance requirements. Specific application categories include: computer-based telephony, audio conferencing, and audio transmission.
Computer-based telephony uses PCs and LANs/WANs to integrate voice telephony into a data network. The client PC buffers inbound transmissions and plays them using its sound card and speakers. This buffering can "smooth out" the sound quality of the transmission somewhat, despite its having traversed contention-based LANs using error-correcting protocols.
Audio communication is not bandwidth intensive. Audio can be delivered over dialup facilities as slow as 14.4Kbps using Point-to-Point Protocol (PPP). This form of communication, however, is extremely susceptible to corruption from packets delivered late or out of sequence. Any such packets are discarded because, by the time a successful retransmission can be made, the stream being played back will likely have progressed beyond the point at which that packet was needed. Thus, re-inserting it late creates a second disturbance that is readily detectable by the user.
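The discard decision described above can be sketched as a simple playout-deadline test. The timestamps and jitter-buffer depth here are hypothetical:

```python
PLAYOUT_DELAY_MS = 50  # hypothetical jitter-buffer depth

def usable(seq, arrival_ms, send_ms, next_expected_seq):
    """A packet is played only if it arrives in order and before its deadline."""
    in_order = seq == next_expected_seq
    on_time  = arrival_ms <= send_ms + PLAYOUT_DELAY_MS
    return in_order and on_time

# Packet 7, sent at t=200 ms and arriving at t=230 ms, is played.
assert usable(7, 230, 200, 7)
# The same packet arriving at t=260 ms has missed its playout slot: discarded.
assert not usable(7, 260, 200, 7)
# An out-of-sequence packet is discarded even though it arrived on time.
assert not usable(9, 230, 200, 7)
```

Requesting a retransmission of either discarded packet would be pointless; by the time it returned, the playback stream would have moved on past it.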
Computer-based telephony suffers from two significant limitations. Transmissions are, to date, half-duplex only. Half-duplex transmissions mean that only one party can "talk" at a time, much like "push-to-talk" walkie-talkies. Telephones are full-duplex mechanisms. Both parties on a telephone call can talk and listen simultaneously. They might not be able to communicate effectively this way, but the technology supports their ability to try.
The second limitation is that computer-based telephony is capable of providing sound quality on par with an AM radio. The combination of half-duplex transmission and relatively low sound quality renders this technology more of a curiosity or techno-toy than a business tool.
Interestingly, the bandwidth requirements for half-duplex audio can be as little as one percent of a video communication session. Thus, this technology is far easier to implement and may not require re-investment in the network infrastructure. Unfortunately, it may also be the least useful. The greatest impediment to this multimedia communication technology is the well-established base of higher-quality telephony equipment. Why use a networked PC to emulate a walkie-talkie when there's already a telephone on every desk?
Audio conferencing differs from computer-based telephony only in that it is used in sessions other than point-to-point. Conferences tend to be multipoint-to-multipoint in the case of a collaborative conference of peers, or point-to-multipoint for broadcasts of major events.
Given the half-duplex nature of this technology set, point-to-multipoint unidirectional broadcasts may be the best use of this technology. The network must have some mechanism for this form of multicasting. Multicasting is the transmission of a single stream of packetized data with an address that is recognized by more than one workstation. This is far more bandwidth- efficient than transmitting multiple simultaneous streams, each destined for a single, specific end-point. End-points that belong to a multicast group "listen" for both their unique Internet address, and the address of their group.
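A receiver joins a multicast group by telling its IP stack to accept datagrams addressed to the group in addition to its own address. A minimal sketch using IPv4 sockets; the group address and port below are arbitrary examples from the administratively scoped range, not values any standard prescribes:

```python
import socket
import struct

GROUP = "239.1.2.3"   # example group from the administratively scoped range
PORT  = 5004          # arbitrary example port

def is_multicast(addr):
    """IPv4 multicast addresses occupy 224.0.0.0 through 239.255.255.255."""
    return 224 <= int(addr.split(".")[0]) <= 239

assert is_multicast(GROUP)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the kernel will then accept datagrams addressed to GROUP.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # hosts without a multicast-capable route refuse the join

# ... sock.recvfrom(2048) would now yield the single stream sent to the group.
sock.close()
```

However many stations join, the sender still transmits only one stream, which is the entire bandwidth advantage of multicasting over repeated unicasts.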
Streaming audio transmissions are unidirectional transmissions of a stream of audio data from a host that either records audio in real time or uses prerecorded audio media. In either case, packets stream out onto the network as soon as they are generated. Recipients listen to them as they arrive, generally without buffering them. Dropped or damaged packets are usually left out of the playback session; some of the newer streaming audio products on the market, however, offer the opportunity to attempt a retransmission.
Streaming audio, like most audio-only multimedia applications, is relatively easy to support in a LAN/WAN environment. It is low bandwidth, and benefits from but does not require low network latency. Perhaps the most important attribute of streaming audio technology is that it does not pretend to be a bi-directional audio transmission technology. Therefore, its practical applications are much more readily determined. Streaming audio can be used to distribute, on demand, either a feed from a live speech or copies of recorded speeches, Question and Answer sessions, or even the latest disk from your favorite group.
Video communication is the acid test of any IT platform. It requires a fairly high-powered computer and can also be extremely bandwidth intensive. It also benefits greatly from low-latency network components. At a minimum, it should use network protocols that can reserve bandwidth at the time of call setup.
Video communication can occur at surprisingly low levels of throughput, given the right compromise of picture size, quality, and refresh rate. The ideal refresh rate is 30 frames per second. At this rate, known as "full motion," the picture appears smooth, and movement is fluid rather than jerky. Unfortunately, even using a small picture size, like 352 pixels by 288 pixels, the stream can approach 500Kbps even after compression! This represents a generous portion of a T-1's available bandwidth. It is also a sizeable portion of the usable bandwidth on most LANs.
Dropping the refresh rate to 15 frames per second dramatically reduces bandwidth consumption, but only marginally degrades perceivable performance. The use of sophisticated video engines, like those that offer compression "on-the-fly," can further reduce the bandwidth consumption, albeit at an increase in CPU cycle consumption.
Decreasing the number of colors recognized can also greatly reduce the size of the transmission stream. Similarly, reducing the size of the picture, as measured either in inches, pixels, or fractions of a screen (for example, full-screen, quarter-screen, eighth-screen, and so on) can also greatly reduce the transmission stream.
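The effects of these tunable parameters multiply together. A sketch of the raw arithmetic, assuming an uncompressed stream with 24-bit color:

```python
def video_kbps(width, height, bits_per_pixel, fps):
    """Raw bit rate of an uncompressed video stream, in Kbps."""
    return width * height * bits_per_pixel * fps // 1000

# 352 x 288 picture, 24-bit color, 30 frames/second:
full = video_kbps(352, 288, 24, 30)
print(full)   # 72990 Kbps -- roughly 73 Mbps before any compression

# Halving the frame rate halves the stream; so does halving the color
# depth or the picture area. The savings compound.
assert video_kbps(352, 288, 24, 15) == full // 2
```

The gap between this raw figure and the few hundred Kbps a conference actually consumes is closed by compression, and the arithmetic shows why every tunable parameter matters.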
Unfortunately, compromises in refresh rate, color density, pixel density, and so on, effectively translate into degraded video picture quality. An overly aggressive bandwidth conservation effort can easily result in a jerky, tiny, "talking head" of a video image. This defeats the very purpose of video conferencing by failing to capture the body language and other subtle non-verbal communication that can be gleaned from face-to-face interactions.
The trick is to understand the limitations of the network that will support the video transmissions, and to strive for an optimal combination of tunable parameters that doesn't have adverse effects on the network.
Video communication includes video conferencing and streaming video transmissions. Although very similar, they have distinct functionality sets and, consequently, different network performance requirements.
Real-time, bi-directional transmission between two or more points is known as video conferencing. The accompanying audio can be handled "in-band" or "out-of-band." In-band audio transmissions bundle the audio signals with the video signals in the same bit stream. This requires the video conferencing system to have its own speakers and microphone, or to interface with those already installed in the computer.
Out-of-band audio relieves software from having to capture and play back the audio signals. It also means the video conferencing system doesn't have to synchronize the audio and video. Rather, the video conferencing system ignores the audio signals and requires conferees to establish a second communication link over conventional telephony.
Because a video conference entails a bi-directional transmission, the network interconnecting the conferenced end-points must be capable of supporting two separate video streams. If both conferees opt for full-screen, full-motion video and use 17-inch monitors set for 1024 x 768 pixel resolution, most networks will be extremely hard pressed to deliver the desired level of service.
Proprietary video conferencing hardware and software bundles have been available for quite some time. Numerous transmission technologies are supported by these bundles, including ISDN at 128Kbps. Ostensibly, the controlling software is smart enough to understand the limits of the selected network, and throttles maximum quality settings accordingly.
Streaming video differs from video conferencing in that it is not bi-directional. Nor does it have to be live. The streams can be either live or prerecorded, but are transmitted, multicast, or broadcast uni-directionally.
As is the case with streaming audio, streaming video does not benefit from a network protocol's ability to detect and correct errors, or resequence received packets. Rather, the packets are played back immediately upon receipt. Compared to video conferencing systems, streaming video transmission systems are fairly crude.
Although it lacks the functionality and sophistication of a video conferencing system, streaming video may be even more useful just by virtue of being more usable. Video libraries of public events, speeches, meetings, and so on, can be maintained and played back on demand. Assuming the picture size and quality are carefully managed, the streams can be supported on as little as 128Kbps, although a vastly superior video would result from connections of 384Kbps or higher.
It is inevitable that the currently separate, and redundant, voice, data, and video communication infrastructures will eventually integrate into a single, broadband, multimedia communication infrastructure, with a common user interface and vehicle. This integration is only just beginning, and will take years to complete. It is already clear that the LAN is capable of incrementally growing into the role of multimedia communication network. In the interim, it is being used to support fledgling attempts at this degree of integration: today's multimedia applications.
Some of these applications require levels of performance that are difficult to achieve in a conventional LAN/WAN environment. For example, the live, interactive applications, like computer and video telephony, or video conferencing, require that packets be delivered on time, in sequence, and intact! Packets that are damaged are simply discarded without generating a request to the sender for a retransmission. Similarly, packets that are delivered intact, but late or out of sequence, are also discarded without requesting retransmission.
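The play-or-discard decision just described can be sketched as a simple function. Real-time receivers use comparable logic, but the function and field names here are hypothetical, not drawn from any particular protocol implementation.

```python
def handle_packet(seq, expected_seq, arrived_at, deadline, intact):
    """Play only intact, on-time, in-sequence packets; discard the
    rest silently -- a retransmission would arrive too late anyway."""
    if not intact:
        return "discard"    # damaged payload
    if arrived_at > deadline:
        return "discard"    # delivered too late to be useful
    if seq != expected_seq:
        return "discard"    # out of sequence
    return "play"

print(handle_packet(7, 7, arrived_at=100, deadline=120, intact=True))   # play
print(handle_packet(9, 7, arrived_at=100, deadline=120, intact=True))   # discard
print(handle_packet(7, 7, arrived_at=130, deadline=120, intact=True))   # discard
```

Note what is absent: there is no request for retransmission in any branch. That omission, more than anything else, is what distinguishes time-sensitive transport from traditional data transport.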
Today's LANs, WANs, and their protocols, are not well suited to transporting the time-sensitive data of many multimedia communication technologies. Contemporary networks and protocols are better suited to the transport of traditional data. They can guarantee the integrity of each packet's payload upon delivery, and are more efficient at transporting bulk data than they are at guaranteeing timeliness of delivery.
Even if the LANs have been engineered to support high-bandwidth and/or low-latency applications, WAN links can be the kiss of death. They rely on high-latency routers, which operate at the relatively slow Layer 3, in conjunction with relatively low-bandwidth transmission facilities (DS-1 or less). This can be a vexing problem, as it is widely accepted that the business value of multimedia applications, like video conferencing, is directly proportional to the geographic distance separating the end-points. In other words, today's networks are least capable of delivering time-sensitive applications in their highest-value scenarios.
The good news is that today's data networks have the potential to evolve incrementally into a true multimedia communication infrastructure. The introduction of switching in both LANs and WANs, the development of bandwidth reservation protocols, Quality of Service protocols, increases in the network's transmission rates, and even prioritization protocols all contribute to making the network infrastructure increasingly multimedia capable. As a network designer, you must maintain a focus on the customer's performance requirements, and develop a forward-looking plan for introducing the appropriate network support mechanisms for multimedia applications.
Similar evolution of application software and hardware will complete this evolution. Until true multiple-media communication technologies are available, remember the benefits of open standards. They apply universally throughout the information technologies. Proprietary, single-function "multimedia" products, regardless of how well they perform, are likely to impede interoperability of multimedia applications.
There are, however, numerous multimedia applications that can be supported over networks today.
One example of how today's open-standard components can be assembled to support a multimedia application is videoconferencing over ISDN. The recent trends toward telecommuting and virtual offices have created both near-ideal conditions and a legitimate business need for videoconferencing.
Telecommuters, particularly those in small offices or home offices, are often deprived of the luxury of a dedicated leased connection (such as T-1 or fractional T-1) to the company's networked computing resources. Rather, they must use lower cost dial-up facilities. ISDN has finally found a legitimate market in providing these small offices and home offices with more robust connectivity than they would otherwise enjoy. With the right hardware, software, and drivers, telecommuters can enjoy a 128Kbps connection to other ISDN-connected users. This is more than ample to support a videoconference or even collaborative development of documents.
An inexpensive video camera, a PC, and an ISDN connection permit these otherwise isolated workers to maintain visual contact with their managers, peers, and the like. The added benefit of this arrangement is that other company applications are not impacted. They continue to traverse the company's WAN, and are completely separate from the ISDN network. Until QoS and other network technologies mature, this approach is probably the best way to begin supporting multimedia communications without incurring any performance penalties for other networked applications.
© Copyright, Macmillan Computer Publishing. All rights reserved.