controlling VCR-style playback and random access of bit streams encoded onto digital storage media (e.g., compact disc). Commands include fast forward, advance, still frame and go-to. Part 6 was approved in 1996.

Part 7-NON-BACKWARDS COMPATIBLE AUDIO (13818-7). This part extends the two-channel audio of MPEG-1 (11172-3) by adding a new syntax to efficiently decorrelate discrete multi-channel surround sound audio. Part 7 of MPEG-2 was approved in 1997.

Part 9-REAL-TIME INTERFACE (RTI) (13818-9). This part defines a syntax for video-on-demand control signals between set-top boxes and head-end servers. Part 9 was approved as an International Standard in July 1996.

MPEG-2 Levels

MPEG-2 defines "quality classifications" that are known as levels. Levels limit coding parameters (sample rates, frame dimensions, coded bit rates, etc.). The MPEG-2 levels are as follows:

Level | Resolution | Frames per Second | Pixels Sampled | Maximum Bit Rate | Purpose
Low | 352 x 240 | 30 fps | 3.05 M | 4 Mbps | Consumer tape
Main | 720 x 480 | 30 fps | 10.40 M | 15 Mbps | CCIR-601 studio
High 1440 | 1440 x 1088 | 30 fps | 47.00 M | 60 Mbps | Commercial
High | 1920 x 1088 | 30 fps | 62.70 M | 80 Mbps | Production (SMPTE)

Figure Appendix B-3. MPEG-2 Levels

The two most popular MPEG-2 levels are the Low (or SIF) Level and the Main (or CCIR-601) Level.
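Because levels are defined purely as parameter ceilings, they lend themselves to a table-driven check. The following is a minimal Python sketch, assuming the limits of Figure Appendix B-3; the dictionary layout and function are illustrative inventions, not anything defined by MPEG-2.

```python
# Illustrative only: the MPEG-2 level limits from Figure Appendix B-3 as data,
# plus a hypothetical helper that tests whether proposed coding parameters
# fit within a level. Field names are assumptions, not standard syntax.

MPEG2_LEVELS = {
    # name: (max_width, max_height, max_fps, max_bit_rate_bps)
    "Low":       (352,  240,  30,  4_000_000),
    "Main":      (720,  480,  30, 15_000_000),
    "High 1440": (1440, 1088, 30, 60_000_000),
    "High":      (1920, 1088, 30, 80_000_000),
}

def fits_level(level: str, width: int, height: int, fps: int, bit_rate: int) -> bool:
    """Return True if the proposed parameters stay within the level's limits."""
    max_w, max_h, max_fps, max_rate = MPEG2_LEVELS[level]
    return width <= max_w and height <= max_h and fps <= max_fps and bit_rate <= max_rate

# A CCIR-601 picture at 15 Mbps fits Main Level, but not Low Level.
print(fits_level("Main", 720, 480, 30, 15_000_000))  # True
print(fits_level("Low", 720, 480, 30, 4_000_000))    # False
```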
MPEG-2 Profiles

MPEG-2 defines six subsets of its coding tools that are known as profiles. Profiles are a defined sub-set of the MPEG-2 specification's syntax (algorithms). Different profiles conform to different MPEG-2 levels and are aimed at different applications (high- or regular-definition television broadcasts over different types of networks). Profiles also provide backward compatibility with other specifications such as H.261 or MPEG-1.

Figure Appendix B-4. MPEG-2 Profiles (a grid relating the Simple, Main, SNR Scalable, Spatially Scalable, High, and 4:2:2 Profiles to the levels at which each may operate)

The MPEG-2 Video Main Profile conforms to the CCIR-601 studio standard for digital TV, and is implemented widely in MPEG-2 decoder chips and in direct broadcast satellite. It will also be used for the delivery of video programming over cable television. It is targeted at encoded bit rates of up to 15 Mbps and specifies a resolution of 720x480 at 30 fps, allowing for much higher quality than is typical with MPEG-1. It supports the coding parameters set forth in MPEG-2's High, High 1440, Main and Low levels. It uses the 4:2:0 chroma format. The Video Main Profile also supports I, P, and B frames. The U.S. ATSC Digital Television Standard (Document A/53-HDTV) specifies MPEG-2 Video Main Profile compression.

The Simple Profile is nothing more than the Main Profile without B frames. As we said earlier, B frames require a set-top or other MPEG decoder to have a certain amount of integrated circuit (IC) memory; that becomes too costly for some applications.

The SNR Scalable and Spatially Scalable Profiles are very complex and are useful primarily for academic research. Scalability allows a decoder to divide a continuous video signal into two or more coded bit streams that represent the video at different resolutions (spatial scalability) or picture quality (SNR scalability). Video is also scalable across time (temporal scalability). Equipment manufacturers do not generally pursue scalable profiles: they require twice as many integrated circuits as non-scalable profiles, which approximately doubles their cost.

The High Profile is aimed at high-resolution video. It supports chroma formats of 4:2:2 and 4:2:0, resolutions that range between 576 lines by 720 pixels and 1152 lines by 1920 pixels, and data transfer rates of between 20 and 100 Mbps. It also supports I, P, and B frames.

The 4:2:2 Profile (added to MPEG-2 Video in January 1996) possesses unique characteristics desirable in the professional broadcast studio and post-production environment. The 4:2:2 Profile uses a chroma format of 4:2:2 (or 4:2:0), uses separate luminance and chrominance quantization tables, allows an unconstrained number of bits in a macroblock, and operates at 608 lines/frame for 25 fps or 512 lines/frame for 30 fps.
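A companion sketch records the profile properties described above (B-frame support and permitted chroma formats). The table is assembled from this section's prose; where the text is silent (e.g., B frames in the scalable profiles), the entries are assumptions.

```python
# Illustrative structure, not MPEG-2 syntax: per-profile properties as
# described in the text above.

MPEG2_PROFILES = {
    # name: (supports_B_frames, permitted chroma formats)
    "Simple":             (False, {"4:2:0"}),          # Main Profile minus B frames
    "Main":               (True,  {"4:2:0"}),
    "SNR Scalable":       (True,  {"4:2:0"}),          # B-frame support assumed
    "Spatially Scalable": (True,  {"4:2:0"}),          # B-frame support assumed
    "High":               (True,  {"4:2:0", "4:2:2"}),
    "4:2:2":              (True,  {"4:2:0", "4:2:2"}),
}

def decoder_needs_b_frame_memory(profile: str) -> bool:
    """B frames force the decoder to buffer extra reference pictures, which
    costs IC memory -- the reason the Simple Profile omits them."""
    supports_b, _ = MPEG2_PROFILES[profile]
    return supports_b

print(decoder_needs_b_frame_memory("Simple"))  # False
print(decoder_needs_b_frame_memory("Main"))    # True
```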
The Multiview Profile, which was completed in October 1996, uses existing video coding tools to provide an efficient way to encode two slightly different pictures, such as those obtained from two slightly separated cameras shooting the same scene. This allows multiple views of scenes, such as stereoscopic sequences, to be coded in a manner similar to scalable bit streams.

How MPEG-2 DIFFERS FROM MPEG-1

MPEG-2 is not intended to replace MPEG-1. Rather, it includes extensions to MPEG-1 to cover a wider range of applications. MPEG-1 deals only with progressive scanning techniques, in which video is handled in complete frames. The MPEG-2 standard supports both progressive scanning and interlaced displays such as those used for televisions. In interlaced video, each frame consists of two fields (half frames) that are sent at twice the frame rate.

The MPEG-1 specification was targeted nominally at single-speed CD-ROMs (1.5 Mbits per second). MPEG-2 is targeted at variable bit rates, including rates many times higher than MPEG-1's.

Each of the two standards, MPEG-1 and MPEG-2, is divided into parts. MPEG-1 has five parts (listed in this document); MPEG-2 has eight, of which five are either identical to or extensions of MPEG-1 parts and three are additions.

The MPEG-1 stream is made up of two layers, the system layer and the compression layer. The MPEG-2 stream is made up of program and transport streams. MPEG-2 streams are subdivided into packets for transmission. The MPEG-2 program stream is designed for use in relatively error-free environments; as such, it is similar to the MPEG-1 system stream. The MPEG-2 transport stream is designed for error-prone environments (broadcast). MPEG-1 does not have an equivalent to the MPEG-2 transport stream.
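To make the transport stream concrete, the sketch below parses the four-byte header of a single transport stream packet: 188 bytes long, beginning with a 0x47 sync byte and carrying a 13-bit packet identifier (PID). The helper and its return structure are illustrative, but the field offsets follow the published packet format.

```python
# Simplified parsing of one MPEG-2 transport stream packet header.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Extract the basic header fields of one transport stream packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport stream packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)   # marks the start of a new payload unit
    continuity_counter = packet[3] & 0x0F         # lets the receiver detect lost packets
    return {"pid": pid, "payload_unit_start": payload_unit_start,
            "continuity_counter": continuity_counter}

# Example: a minimal header-only packet carrying PID 0x0100.
pkt = bytes([0x47, 0x41, 0x00, 0x10]) + bytes(184)
print(parse_ts_header(pkt))  # {'pid': 256, 'payload_unit_start': True, 'continuity_counter': 0}
```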
Both the MPEG-1 and MPEG-2 standards define a hierarchy of data structures in the video stream. At the top is the video sequence, which is made up of groups of pictures. The next level down in the hierarchy is the picture (the primary coding unit of a video sequence). Below the picture is the slice, which is used to handle errors in a bit stream. Both MPEG-1 and MPEG-2 define a macroblock (a 16-pixel by 16-line selection of luminance components and its corresponding 8-pixel by 8-line chrominance components) and a block (an 8-pixel by 8-line set of luminance or chrominance values).
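A worked example of the hierarchy, assuming a 720x480 picture and 4:2:0 chroma (in which each macroblock carries four 8x8 luminance blocks plus one 8x8 Cb and one 8x8 Cr block):

```python
# How many macroblocks and blocks a 720x480 (CCIR-601) picture contains,
# assuming 4:2:0 chroma (six 8x8 blocks per 16x16 macroblock).

width, height = 720, 480
mb_cols = width // 16                 # 45 macroblocks across
mb_rows = height // 16                # 30 macroblocks down
macroblocks = mb_cols * mb_rows       # 1,350 macroblocks per picture
blocks = macroblocks * 6              # 8,100 8x8 blocks per picture

print(mb_cols, mb_rows, macroblocks, blocks)  # 45 30 1350 8100
```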
MPEG-1 handles only two channels of audio. MPEG-2 handles up to five channels (surround sound). Because of this difference, MPEG-1 and MPEG-2 use different techniques to synchronize audio and video.

In summary, MPEG-2 extends MPEG-1 to handle audiovisual broadcasts (whereas MPEG-1 was aimed at playback applications). It can deliver audiovisual material at higher speeds, with greater resolution, with surround sound audio, and in interlaced scanned environments. MPEG-2 may eventually replace MPEG-1 but it was not intended to do so.

MPEG-4 (ISO/IEC 14496)

The official title of the ISO/IEC's MPEG-4 specification is "Very Low Bitrate Audio-Visual Coding." It was approved as a Working Draft in November 1994, finalized in October 1998, and became an International Standard in 1999. MPEG-4 is targeted at low bit rate applications with frame sizes of 176x144 or less, frame rates of 10 Hz or less, and encoded bit rates of 4.8-64 Kbps. The promise of MPEG-4 is to provide fully backward compatible extensions under the title of MPEG-4 Version 2.

MPEG-4 is designed to address digital television, interactive graphics applications that build on synthetic content, and interactive multimedia (e.g., distribution of, and access to, content). MPEG-4 provides standardized technological elements to enable the integration of the production, distribution, and content access paradigms of these three domains. MPEG-4 enables extensive reusability of content across disparate technologies such as digital television, animated graphics, and World Wide Web (WWW) pages. It also offers manageability of content owner rights.

Furthermore, MPEG-4 offers a generic QoS descriptor for dissimilar MPEG-4 media that can be interpreted and translated into the appropriate native signaling messages of each network. End-to-end signaling of the MPEG-4 media QoS descriptors enables transport optimization in heterogeneous networks. However, the exact translations from the QoS parameters set for each medium to the network QoS are left to network providers. MPEG-4 offers end users higher levels of interaction with content, and allows the author to set the limits of that interaction. It also extends multimedia to new networks, such as mobile networks and others that employ lower bit rates.

MPEG-4 seeks to supersede proprietary, non-interoperable formats by providing standardized ways to represent units of aural, visual or audiovisual content, called media objects. These media objects can be recorded with a camera or microphone, or generated with a computer. MPEG-4 describes the composition of these objects to create compound media objects that form audiovisual scenes. It multiplexes and synchronizes the data associated with media objects so that they can be transported over network channels with a QoS appropriate to the nature of the specific media objects, and it supports interaction with the audiovisual scene that is generated at the receiver's end.

Media objects may require streaming data that is conveyed in multiple elementary streams. Therefore, MPEG-4 defines an object descriptor that identifies all streams that are associated with one media object. This enables the handling of hierarchically encoded data, and the association of both object content and intellectual property rights information. Each stream is characterized by a set of descriptors for configuration information (e.g., required decoder resources, or the required precision of encoded timing information). The descriptors may also describe the QoS requested for transmission (e.g., maximum bit rate, bit error rate, or priority).
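The object-descriptor idea can be pictured with a few toy data classes; every name below is an invention for illustration (MPEG-4 defines a binary syntax for these descriptors, not a Python API):

```python
# One media object, several elementary streams, each with configuration
# and QoS descriptors -- e.g., a base layer plus an enhancement layer of
# hierarchically coded video.

from dataclasses import dataclass, field

@dataclass
class QoSDescriptor:
    max_bit_rate: int          # bits per second requested from the network
    max_bit_error_rate: float
    priority: int

@dataclass
class ElementaryStream:
    stream_id: int
    decoder_resources: str     # configuration info, e.g. which decoder is needed
    qos: QoSDescriptor

@dataclass
class ObjectDescriptor:
    """Identifies all elementary streams belonging to one media object."""
    object_id: int
    streams: list = field(default_factory=list)

video_object = ObjectDescriptor(object_id=1, streams=[
    ElementaryStream(1, "base-layer video decoder", QoSDescriptor(384_000, 1e-6, 1)),
    ElementaryStream(2, "enhancement-layer decoder", QoSDescriptor(768_000, 1e-5, 2)),
])
print(len(video_object.streams))  # 2
```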
MPEG-4 enables synchronization of elementary streams by time stamping individual access units within elementary streams. The synchronization layer manages the identification of access units and their time stamping. Independent of the media type, this layer allows identification of the type of an access unit (e.g., video or audio frames, scene description commands) in elementary streams, the recovery of the media object's or scene description's time base, and synchronization among them.

The MPEG-4 ISO/IEC 14496 volume consists of four basic functions:

Delivery Multimedia Integration Framework (DMIF)

DMIF is a session protocol for the management of multimedia streaming over generic delivery technologies. It is similar to FTP except that, where FTP returns data, DMIF returns pointers toward (streamed) data. DMIF is both a framework and a protocol. The functionality provided by DMIF is expressed by an interface called the DMIF-Application Interface (DAI), and translated into protocol messages that may differ based on the network on which they operate.

MPEG-4 specifies an interface to the TransMux (Transport Multiplexing) models, the layer that offers transport services matching the requested QoS. Any suitable existing transport protocol stack (e.g., RTP in UDP/IP, AAL5 in ATM, or Transport Stream in MPEG-2) over a suitable link layer may become a specific TransMux instance. The synchronized delivery of streaming data may require the use of different QoS schemes as it traverses multiple public and private networks.

MPEG-4 defines the delivery-layer FlexMux multiplexing tool to allow grouping of Elementary Streams (ES) with low multiplexing overhead. It may be used to group ESs with similar QoS requirements, to reduce the number of network connections, or to reduce end-to-end delay. The FlexMux layer may be empty if the underlying TransMux instance provides adequate functionality.
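The grouping that FlexMux performs can be sketched as a timestamp-ordered merge of several elementary streams. The framing below is invented for the example; the real FlexMux syntax is defined in MPEG-4 Systems.

```python
# Interleave access units from several elementary streams into one stream in
# timestamp order, tagging each unit so the receiver can demultiplex it.

import heapq

def flexmux(*streams):
    """Each stream is a list of (timestamp, payload) tuples, already sorted.
    Yields (stream_index, timestamp, payload) in global timestamp order."""
    merged = heapq.merge(*[
        ((ts, idx, payload) for ts, payload in stream)
        for idx, stream in enumerate(streams)
    ])
    for ts, idx, payload in merged:
        yield idx, ts, payload

audio = [(0, "a0"), (40, "a1"), (80, "a2")]
video = [(0, "v0"), (66, "v1")]
print(list(flexmux(audio, video)))
# [(0, 0, 'a0'), (1, 0, 'v0'), (0, 40, 'a1'), (1, 66, 'v1'), (0, 80, 'a2')]
```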
The DMIF framework enables control functions such as the ability to identify access units, to transport timestamps and clock reference information, and to recognize data loss. It also enables interleaving of data from different elementary streams into FlexMux streams. Moreover, it permits the conveyance of control information to indicate the required QoS for each elementary stream and FlexMux stream, the translation of such QoS requirements into actual network resources, the association of elementary streams to media objects, and the conveyance of the mapping of elementary streams to FlexMux and TransMux channels.

Systems

MPEG-4 supports scene description for composition (spatio-temporal synchronization with time response behavior) of multiple media objects. The scene description provides a rich set of nodes for two-dimensional and three-dimensional composition operators and graphics primitives. It also supports text with international language support, font and font style selection, and timing and synchronization. Moreover, MPEG-4 supports interactivity such as client- and server-based interaction, an event model for triggering events or routing user actions, and event management and routing between objects in the scene in response to user- or scene-triggered events.

The FlexMux tool provides interleaving of multiple streams into a single stream, including timing information, and the TransMux provides transport layer independence through mappings to relevant transport protocol stacks. MPEG-4 also provides the initialization and continuous management of the receiving terminal's timing identification, synchronization and recovery mechanisms, as well as other receiving terminal buffers. It also recognizes data sets such as the identification of Intellectual Property Rights that relate to media objects.

Audio

MPEG-4 Audio permits a wide variety of applications that range from simple speech to sophisticated multi-channel audio, and from natural sounds to synthesized sounds. In particular, it supports the highly efficient representation of audio objects consisting of:
Speech signals: Speech coding can be done at bitrates from 2 Kbps up to 24 Kbps using the speech coding tools. Lower bitrates, such as an average of 1.2 Kbps, are possible through variable-rate coding. Low delay is possible for communications applications. When the HVXC tools are used, speed and pitch can be modified under user control during playback. If the CELP tools are used, a change of playback speed can be achieved by using an additional tool for effects processing.

Synthesized speech: Scalable text-to-speech (TTS) coders operate at bitrates from 200 bps to 1.2 Kbps. They accept text, or text with prosodic parameters (e.g., pitch contour, phoneme duration), as input and generate intelligible synthetic speech.

General audio signals: Support for coding general audio ranging from very low bitrates to high quality is provided by transform coding techniques. With this functionality, a wide range of bitrates and bandwidths is covered. It starts at a bitrate of 6 Kbps and a bandwidth below 4 kHz, but also includes mono or multichannel broadcast-quality audio.

Synthesized audio: Synthetic audio support is provided by a Structured Audio Decoder implementation that allows the application of score-based control information to musical instruments described in a special language.

Bounded-complexity synthetic audio: This is provided by a Structured Audio Decoder implementation that allows processing of a standardized Wavetable format.

Examples of additional functionality include speed control (change of the time scale without altering the pitch while decoding) and pitch change (change of the pitch without altering the time scale while encoding or decoding). Audio Effects provide the ability to process decoded audio signals with complete timing accuracy to enable such functions as mixing, reverberation, and spatialization.
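The bitrate ranges quoted above suggest a simple selection rule. The helper below is purely hypothetical (MPEG-4 defines no such function), but its cutoffs mirror the figures in the text.

```python
# Pick an MPEG-4 audio tool family from a target bitrate, following the
# ranges quoted in the text: HVXC around 2-4 Kbps, CELP up to 24 Kbps,
# transform coding of general audio from 6 Kbps up.

def pick_audio_tool(bitrate_bps: int, is_speech: bool) -> str:
    if is_speech:
        if bitrate_bps <= 4_000:
            return "HVXC (parametric speech; supports pitch/speed change)"
        if bitrate_bps <= 24_000:
            return "CELP (8 or 16 kHz speech)"
    if bitrate_bps >= 6_000:
        return "general audio transform coding (AAC-based tools)"
    return "no natural-audio tool in this range; consider synthetic audio"

print(pick_audio_tool(2_000, is_speech=True))    # HVXC ...
print(pick_audio_tool(16_000, is_speech=True))   # CELP ...
print(pick_audio_tool(64_000, is_speech=False))  # general audio ...
```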
Visual

The MPEG-4 Visual standard allows the hybrid coding of natural (pixel-based) images and video together with synthetic (computer-generated) scenes. This will, for example, allow the virtual presence of video communication participants. To this end, the Visual standard comprises tools and algorithms supporting the coding of natural (pixel-based) still images and video sequences, as well as tools to support the compression of synthetic 2-D and 3-D graphic geometry parameters (i.e., compression of wire grid parameters and synthetic text). The MPEG-4 Visual standard supports bitrates typically between 5 Kbps and 10 Mbps, progressive and interlaced video, and resolutions ranging from SQCIF to DTV. Moreover, it supports compression efficiencies from acceptable to near-lossless, and allows for random access of video, such as fast forward and reverse.

Spatial scalability allows decoders to decode a subset of the total bitstream generated by the encoder to reconstruct and display textures, images and video objects at reduced spatial resolution. For textures and still images, a maximum of 11 levels of spatial scalability is supported. For video sequences, a maximum of three levels is supported.

Temporal scalability allows decoders to decode a subset of the total bitstream generated by the encoder to reconstruct and display video at reduced temporal resolution. A maximum of three levels is supported.

Quality scalability allows a bitstream to be parsed into a number of bitstream layers of different bitrates such that the combination of only a subset of the layers can be decoded into a meaningful signal. The bitstream parsing can occur either during transmission or in the decoder. The reconstructed quality, in general, is related to the number of layers used for decoding and reconstruction.
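All three forms of scalability share the same decoder-side idea: reconstruct from only as many layers as conditions allow. A minimal sketch, with invented layer sizes:

```python
# A decoder reconstructs from only a subset of the bitstream layers,
# trading quality for bitrate. Layer sizes here are invented.

def decodable_layers(layer_rates_bps, budget_bps):
    """Given ordered layer bitrates (base layer first) and a bandwidth budget,
    return how many layers can be decoded. Each enhancement layer requires
    all layers below it."""
    total, count = 0, 0
    for rate in layer_rates_bps:
        if total + rate > budget_bps:
            break
        total += rate
        count += 1
    return count

# A base layer plus two enhancement layers (three levels, as for video).
layers = [64_000, 128_000, 256_000]
print(decodable_layers(layers, 100_000))  # 1 -> base quality only
print(decodable_layers(layers, 500_000))  # 3 -> full quality
```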
MPEG-4 supports shape coding to assist the description and composition of conventional images and video as well as arbitrarily shaped video objects. Applications that benefit from binary shape maps with images are content-based image representations for image databases, interactive games, surveillance, and animation.

Error resilience assists access to image and video over a wide range of storage and transmission media; it enables the useful operation of image and video compression algorithms in error-prone environments at low bit rates (i.e., less than 64 Kbps). Tools are provided that address both the band-limited nature and the error resiliency aspects of access over wireless networks.

The Face Animation part of the standard allows sending parameters that calibrate and animate synthetic faces.

Figure Appendix B-5. MPEG-4 Tools (a diagram of the audio tool set: the MPEG-2 AAC Main, LC, and SSR tools, noise shaping, long-term prediction, tools for large-step scalability, TwinVQ, wavetable and algorithmic synthesis, and the Structured Audio tools, including the SA sample format)

MPEG-4 only standardizes the parameters of these models, not the models themselves.

MPEG-4 Parts

The MPEG-4 requirements have been addressed by the six parts of the MPEG-4 Version 1 standard:

Part 1: Systems - specifies scene description, multiplexing, synchronization, buffer management, and management and protection of intellectual property
Part 2: Visual - specifies the coded representation of natural and synthetic visual objects
Part 3: Audio - specifies the coded representation of natural and synthetic audio objects
Part 4: Conformance Testing - defines conformance conditions for bitstreams and devices; this part is used to test MPEG-4 implementations
Part 5: Reference Software - includes software corresponding to most parts of MPEG-4 (normative and non-normative tools); it can be used for implementing compliant products, as ISO waives the copyright of the code
Part 6: Delivery Multimedia Integration Framework (DMIF) - defines a session protocol for the management of multimedia streaming over generic delivery technologies

Parts 1-3 and 6 specify the core MPEG-4 technology; Parts 4 and 5 are supporting parts. Parts 1, 2 and 3 are delivery independent, leaving to Part 6 (DMIF) the task of dealing with the idiosyncrasies of the delivery layer. The MPEG-4 parts can be used independently or in conjunction with proprietary technologies.
The MPEG-4 AAC LTP object type is similar to the AAC Main object type but replaces the MPEG-2 AAC predictor with the long-term predictor, providing the same efficiency at a lower implementation cost.

The AAC Scalable object type allows a large number of scalable combinations, such as combinations with the TwinVQ and CELP coder tools as the core coders. It supports only mono or 2-channel stereo sound.

The TwinVQ (Transform domain Weighted Interleave Vector Quantization) object type is based on fixed-rate vector quantization instead of AAC's Huffman coding. It operates at lower bitrates than AAC and supports both mono and stereo sound.

MPEG-4 includes two different algorithms for coding speech, each operating at different bitrates, plus a Text-to-Speech Interface.

The CELP (Code Excited Linear Prediction) object type supports 8 kHz and 16 kHz sampling rates at 4-24 Kbps. CELP bitstreams can be coded in a scalable way through bit rate scalability and bandwidth scalability.

The HVXC (Harmonic Vector Excitation Coding) object type gives a parametric representation of 8 kHz, mono speech at fixed bitrates between 2 and 4 Kbps, and below 2 Kbps using a variable-bitrate mode; it supports pitch and speed changes.

The TTSI (Text-to-Speech Interface) object type gives an extremely low-bitrate phonemic representation of speech. The actual text-to-speech synthesis is not specified; only the interface is defined. Bit rates range from 0.2 to 1.2 Kbps. The synthesized speech can be synchronized with a facial animation object (see below).

Lastly, a number of different object types exist for synthetic sound. The Main Synthetic object type collects all MPEG-4 Structured Audio tools. Structured Audio is a way to describe methods of synthesis. It supports flexible, high-quality algorithmic synthesis through the Structured Audio Orchestra Language (SAOL) music-synthesis language, efficient wavetable synthesis with the Structured Audio Sample-Bank Format (SASBF), and enables the use of high-quality mixing and postproduction in the Systems AudioBIFS tool set.
Sound can be described at '0 Kbps' (i.e., sound continues without input until it is stopped by an explicit command) up to 3-4 Kbps for extremely expressive sounds in the MPEG-4 Structured Audio format.

The Wavetable Synthesis object type is a subset of the Main Synthetic object type that makes use of the SASBF format and the widely used MIDI (Musical Instrument Digital Interface) wavetable format tools to provide simple sampling synthesis.

The General MIDI object type gives interoperability with existing content (see above). Unlike the Main Synthetic or Wavetable Synthesis object types, it does not provide completely predictable (i.e., normative) sound quality and decoder behavior.

The Algorithmic Synthesis and AudioFX object type provides SAOL-based synthesis capabilities for very low-bitrate terminals (FX stands for effects).

The NULL object type provides the possibility to feed raw PCM data directly to the MPEG-4 audio compositor to allow local sound enhancement at the decoder. Support for this object type resides in the compositor, not in the decoder.

MPEG-4 Profiles

Although there are numerous object types in the audio area, there are only four distinct profiles (as defined below). Codec builders can claim conformance to profiles at a certain level, but cannot claim conformance to object types. Two levels are defined, determining whether one or a maximum of 20 objects can be present in the (audio) scene.

The Scalable profile was primarily defined to allow good-quality, reasonable-complexity, low-bitrate audio on the Internet (an environment in which bitrate varies between users and over time). Scalability enables optimal use of limited and of varying bandwidth without encoding and storing the material multiple times. The Scalable profile has four levels that restrict the number of objects in the scene, the total number of channels, and the sampling frequency. The highest level employs the concept of complexity units.
The Synthetic profile groups the synthetic object types. The target applications are defined by the need for good-quality sound at very low data rates. There are three levels that define the amount of memory for data, the sampling rates, the number of TTSI objects, and some further processing restrictions.

The Main profile includes all object types. It is useful in environments in which processing power is available to create rich, highest-quality audio scenes that may combine organic sources with synthetic ones. Two such applications are DVD and multimedia broadcast. This profile has four levels that are defined in terms of complexity units. There are two different types of complexity units: processor complexity units (PCU), which are specified in millions of operations per second, and RAM complexity units (RCU), which are specified in terms of the number of kWords. The standard also specifies the complexity units required for each object type. This provides authors with maximum freedom in choosing the right object types and allocating resources among them.

For example, a profile could contain the Main AAC and Wavetable Synthesis object types, and a level could specify a maximum of two of each. This would prevent the resources reserved for the AAC objects from being used for a third and fourth Wavetable Synthesis object, even though doing so would not break the decoder. Complexity units give the author freedom to use decoder resources for any combination of profile-supported object types.
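The complexity-unit bookkeeping reduces to simple arithmetic. In the sketch below the per-object PCU and RCU costs are invented placeholders (the standard specifies the real figures); only the budgeting pattern matters.

```python
# Check whether an author's mix of object types fits within a level's
# complexity-unit limits. Costs per object type are assumed values.

OBJECT_COST = {              # object type: (PCU in MOPS, RCU in kWords)
    "AAC Main":            (5, 5),
    "Wavetable Synthesis": (2, 4),
}

def fits_level(objects, pcu_limit, rcu_limit):
    """True if the summed complexity of all objects stays within the level."""
    pcu = sum(OBJECT_COST[o][0] for o in objects)
    rcu = sum(OBJECT_COST[o][1] for o in objects)
    return pcu <= pcu_limit and rcu <= rcu_limit

scene = ["AAC Main", "AAC Main", "Wavetable Synthesis", "Wavetable Synthesis"]
print(fits_level(scene, pcu_limit=20, rcu_limit=20))                    # True
print(fits_level(scene + ["Wavetable Synthesis"], 20, 20))              # False (RCU exceeded)
```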
How MPEG-4 DIFFERS FROM MPEG-2

The difference between MPEG-4 and MPEG-2 is not improvement in video or audio reproduction for a single application (as MPEG-2 specifically supports DTV). MPEG-4 represents the establishment of an adaptable set of tools for combining various types of video, audio, and interaction to provide environments beyond what anyone may now imagine. The essence of MPEG-4 is an object-based audiovisual representation model that allows authors to build scenes using individual objects that have relationships in time and space. It addresses the fact that no single coded representation is ideal for all object types. For instance, animation parameters ideally address a synthetic self-playing piano; an efficient representation of pixel values best suits organic video such as a dancer. MPEG-4 facilitates integration of these different types of data into one scene. MPEG-4 also enables interactivity through hyperlinking with the objects in the scene (e.g., through the Internet). Moreover, MPEG-4 enables selective bit management (pre-determining which bits will be forfeited if bandwidth becomes less than desirable) and straightforward re-use of content without transcoding.
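The object-based model can be illustrated with a toy scene in which each object keeps a representation suited to its type plus its placement in space and time. All names below are invented; MPEG-4's actual scene description is carried in its Systems part.

```python
# Each media object keeps its own coded representation and its relationships
# in time and space; the scene is composed from them at the receiver.

from dataclasses import dataclass

@dataclass
class MediaObject:
    name: str
    coding: str              # representation suited to the object type
    position: tuple          # (x, y) placement in the scene
    start_time: float        # seconds into the scene

scene = [
    MediaObject("piano", "synthetic animation parameters", (100, 200), 0.0),
    MediaObject("dancer", "natural pixel-based video", (400, 150), 2.5),
]

# Composition at the receiver: which objects are active at t = 3.0 seconds?
active = [obj.name for obj in scene if obj.start_time <= 3.0]
print(active)  # ['piano', 'dancer']
```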
Figure Appendix B-6. MPEG-4 Object Types (a table relating the audio object types AAC Main, AAC SSR, AAC LC, AAC LTP, AAC Scalable, TwinVQ, Wavetable Synthesis, General MIDI, and Algorithmic Synthesis to the Speech, Scalable, Synthetic, and Main profiles, with the number of levels of each)

MPEG-4 benefits myriad applications in various environments. Whereas MPEG-2 is constructed as a rigid standard, MPEG-4 represents a set of tools, or profiles, that address numerous settings and countless combinations. MPEG-4 is less an extensive standard than an extensive collection of standards that authors can select and combine as they choose to improve existing applications or to deploy entirely new ones. It facilitates creation, alteration, adaptation, and access of audiovisual scenes. With the advent of MPEG-4, one can expect richer virtual environments that will benefit such applications as conventional terrestrial multimedia entertainment, remote multimedia, broadcasting, and even surveillance. MPEG-4 is designed to enable convergence through the coalescence of different service models such as communication and on-line interaction.

To foster competition in the non-normative areas, MPEG-4 specifies normative tools only where interoperability necessitates them. For instance, decoding is specified in the standard because it must be normative; video segmentation and rate control are not strictly specified because they can be non-normative. The MPEG-4 groups "specified the minimum for maximum usability." This strategy facilitates creativity and competition, and ensures that authors can make optimal use of continuous improvements in the relevant technical areas. Competitors will continue to establish differentiation through the development of non-normative tools.

MPEG-7

Formally called the "Multimedia Content Description Interface," MPEG-7 will be a standardized description of various types of multimedia information (it is worth noting that the group apparently chose the number seven arbitrarily; at the time of this writing, MPEG-5 and MPEG-6 have not been defined). It will complement MPEG-1, MPEG-2, and MPEG-4. This description will be associated with the content itself to enable fast and efficient searching for material that is of interest to the user. The MPEG-7 Group is comprised of broadcasters, equipment manufacturers, digital content creators and managers, transmission providers, publishers and intellectual property rights managers, and university researchers who are interested in defining the standard. Participants include Columbia University, GMD-IPSI, Instituto Superior Técnico, Kent Ridge Digital Labs, KPN Research, Philips Research, Riverland, and Sharp Labs.

The purpose of MPEG-7 is to establish a methodology for searching for video and audio content on the Internet as one can search for text now. The group recognized the immense amount of audio and video content that may be accessible on the Internet, the interest in locating that content, and the current lack of any way to locate it. The problem is that no methodology exists for categorizing such information. One can search on the basis of the color, texture, and shape of an object in a picture; however, one cannot effectively search for the moment in which the wicked witch melts in The Wizard of Oz. Because the same limitations apply to audio, the group alluded to enabling a search based on humming a portion of a melody.
It is not hard to understand how this applies to the ongoing cable-television paradox of 500 channels but nothing worth watching: it would allow users to search on characteristics that suit their whims.

This newest member of the MPEG family may extend the limited capabilities of existing proprietary solutions for identifying content by including more data types. MPEG-7 will specify a standard set of descriptors to describe various types of multimedia information, ways to define other descriptors, and structures, or Description Schemes, for the descriptors and their relationships. The combination of descriptors and description schemes will be associated with the content itself to allow fast, efficient searching. MPEG-7 will also standardize a Description Definition Language (DDL) to specify description schemes. MPEG-7 will allow searches for AV content such as still pictures, graphics, 3D models, audio, speech, video, and scenarios (combinations of characteristics). These general data types may include special cases such as personal characteristics or facial expressions.

The MPEG-7 standard builds on other representations such as analog, PCM, and MPEG-1, -2 and -4. The intent is for the standard to provide references to suitable portions of such standard representations, such as a shape descriptor used in MPEG-4 or the motion vector fields used in MPEG-1 and MPEG-2. However, MPEG-7 descriptors do not depend upon the way the described content is coded or stored: one can attach an MPEG-7 description to an analog movie or to a picture that is printed on paper. Even though the MPEG-7 description does not depend upon the representation of the material, the standard builds on MPEG-4, which provides the means to encode audio-visual material as objects that have certain relations in time and space (either two- or three-dimensional). Using MPEG-4 encoding, it will be possible to attach descriptions to audio and visual objects within a scene.
MPEG-7 will offer different levels of discrimination by allowing a variable degree of granularity in its descriptions. To ensure that the descriptive features are meaningful in the context of any given application, MPEG-7 will allow description of the same material using different types of features. In visual material, for instance, a lower abstraction level may describe shape, size, texture, color, movement (trajectory) and position (where the object can be found in the scene). In audio, the lower abstraction may describe pitch, timbre, tempo, or changes and modulations. The highest abstraction level would provide semantic information such as: "This scene includes a child playing a Mozart sonata on a piano, a gray and yellow cockatiel squawking in a gold cage, and an adult trying to calm the bird." All these descriptions would be coded to allow for searching. The level of abstraction is related to the way the features can be extracted: many low-level features can be extracted in fully automatic ways, whereas high-level features may require human interaction.

In addition to a description of the content, other types of information about the multimedia data may be required. Examples include:

The form: the coding scheme used (e.g., JPEG, MPEG-2) or the file size. This information helps determine whether the user can read the material.
Conditions for accessing the material: copyright and price information.
Classification: content sorting into pre-defined categories (e.g., parental rating).
Links to other relevant material: to assist the user in conducting a timely search.
Context: for recorded non-fiction content, it is important to know the occasion of the recording (e.g., Superbowl 2005, fourth quarter).
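The layered-description idea can be sketched as a pair of toy records: automatically extractable low-level features grouped with human-supplied semantics in one description attached to the content. The schema is invented; real MPEG-7 description schemes are to be written in its DDL.

```python
# Low-level descriptors plus high-level semantics under one description.

from dataclasses import dataclass

@dataclass
class VisualDescriptor:          # low-level, automatically extractable
    dominant_color: str
    texture: str
    trajectory: str

@dataclass
class Description:               # a simple stand-in for a description scheme
    semantic: str                # high-level, may require human annotation
    visual: VisualDescriptor
    form: str                    # e.g., coding scheme and file size
    classification: str          # e.g., parental rating

clip = Description(
    semantic="child plays a Mozart sonata; cockatiel squawks in a gold cage",
    visual=VisualDescriptor("gray/yellow", "feathered", "stationary"),
    form="MPEG-2, 120 MB",
    classification="general audience",
)

# A crude search over the semantic descriptor.
print("Mozart" in clip.semantic)  # True
```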
The Group seeks to make the descriptions as independent of the language area as possible; still, textual descriptions will be desirable in many cases (e.g., for titles, locations, or an author's name). MPEG-7 data may be physically located with the associated AV material, in the same data stream or on the same storage system, but the descriptions may also reside remotely. When the content and its descriptions are not co-located, mechanisms that link the two are required. MPEG-7 will address applications that can be stored on-line or off-line, or streamed, and can operate in both real-time and non-real-time environments.

Digital libraries (e.g., an image catalog or musical dictionary), multimedia directory services (e.g., yellow pages), broadcast media selection (e.g., radio or TV channel), and multimedia editing (e.g., a personalized electronic news service or media authoring) will all benefit from MPEG-7. MPEG-7 allows any type of AV material to be retrieved by means of any type of query material; for example, video material may be queried using video, music, or speech. MPEG-7 does not dictate how the search engine may match the query data and the MPEG-7 AV description, but it may describe a standard programming interface to a search engine. A few query examples:

Music: Play a few notes on a keyboard and receive a list of musical pieces that contain the required tune, or images that in some way match the notes.
Graphics: Draw a few lines on a screen and receive a set of images that contain similar graphics or logos.
Image: Define objects, including color patches or textures, and receive examples from which to select.
Movement: For a given set of objects, describe movements and relations between the objects and receive a list of animations that match the described temporal and spatial relations.
Scenario: For a given content, describe actions and receive a list of scenarios in which similar actions occur.
Voice: Using an excerpt of a given vocalist's voice, receive a list of that vocalist's recordings or video clips.

MPEG-7 is still at an early stage and is seeking the collaboration of new experts in relevant areas. The preliminary work plan for MPEG-7 projects a committee draft in October 2000, a final committee draft in February 2001, a draft international standard in July 2001, and an international standard in September 2001.
CCIR-601

CCIR-601 is the CCIR (now ITU-R) standard that defines the image format, acquisition semantics, and parts of the coding for digital "standard" television signals. Because many chips that support this standard are available, CCIR-601 is commonly used in digital video applications for computer systems and digital television. It is central to the MPEG, H.261, and H.263 compression specifications.

CCIR-601 is applicable to both NTSC and PAL/SECAM systems. In the U.S., CCIR-601 is 720x243 fields (not frames) of luminance information, sent at a rate of 60 per second. The fields are interlaced when displayed. The chrominance channels are 360x243, also at 60 fields per second, again interlaced.

CCIR-601 represents the chroma signals (Cb, Cr) with half the horizontal frequency of the luminance signal, but with full vertical resolution. This particular ratio of sub-sampled components is known as 4:2:2. The sampling frequency of the luminance signal (Y) is 13.5 MHz; Cb and Cr are sampled at 6.75 MHz. CCIR-601 describes the way in which analog signals are filtered to obtain the samples. Often RGB signals are converted to YCbCr. The formulas given for the CCIR-601 color conversion are for gamma-corrected RGB signals; the gammas for the different television systems are specified in CCIR Report 624-4. The encoding of the digital signal is described in detail in CCIR Rec. 656. The correspondence between the video signal levels and the quantization levels is also specified: on a scale between 0 and 255, the luminance signal provides for 220 quantization levels, and the color-difference signals provide for 225 quantization levels. The signals are coded with only 8 bits per signal.
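The sampling figures above fix the raw data rate of a 4:2:2 signal; a quick check:

```python
# Raw data rate of a CCIR-601 4:2:2 signal: luminance sampled at 13.5 MHz,
# each chroma channel at 6.75 MHz, all at 8 bits per sample.

Y_RATE = 13.5e6          # luminance samples per second
C_RATE = 6.75e6          # samples per second for each of Cb and Cr
BITS_PER_SAMPLE = 8

total_bits_per_second = (Y_RATE + 2 * C_RATE) * BITS_PER_SAMPLE
print(total_bits_per_second / 1e6)  # 216.0 Mbit/s of raw 4:2:2 video
```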
WORLD TELEVISION AND COLOR SYSTEMS

Country | Television System | PTT Digital Service / Network Interface
Abu Dhabi | | No digital services
Afghanistan | PAL B, SECAM B | No digital services
Albania | PAL B/G | No digital services
Algeria | PAL B/G | No digital services
Andorra | | No digital services
Angola | PAL I | No digital services
Antarctica | NTSC M | No digital services
Antigua and Barbuda | NTSC M | No digital services
Antilles | | No digital services
Argentina | PAL N | No digital services
Australia | PAL B/G |
Austria | PAL B/G |
Azerbaijan | SECAM D/K | No digital services
Azores | PAL B | No digital services
Bahamas | NTSC M | No digital services
Bahrain | PAL B/G | No digital services
Bangladesh | PAL B | No digital services
Barbados | NTSC M | No digital services
Belgium | PAL B/H |
Belgium (Armed Forces Network) | NTSC M |
Belize | NTSC M | No digital services
Benin | SECAM K | No digital services
Bermuda | NTSC M | No digital services
Bolivia | NTSC M | No digital services
Bosnia/Herzegovina | PAL B/H | No digital services
Botswana | PAL I, SECAM K | No digital services
Brazil | PAL M | No digital services
British Indian Ocean Territory | NTSC M | No digital services
Brunei Darussalam | PAL B | No digital services
Bulgaria | | No digital services
Burkina Faso | SECAM K | No digital services
Burma | | No digital services
Burundi | SECAM K | No digital services
Cambodia | PAL B/G, NTSC M | No digital services
Cameroon | PAL B/G | No digital services
Canada | NTSC M | V.35/V.25
Canary Islands | PAL B/G | No digital services
Central African Republic | SECAM K | No digital services
Chad | SECAM D | No digital services
Chile | NTSC M |
China | PAL D | No digital services
CIS (formerly USSR) | SECAM (V) | No digital services
Colombia | NTSC M |
Congo | SECAM K | No digital services
Cook Islands | PAL B | No digital services
Costa Rica | NTSC M | No digital services
Cote D'Ivoire (Ivory Coast) | SECAM K/D | No digital services
Croatia | PAL B/H | No digital services
Cuba | NTSC M | No digital services
Cyprus | PAL B/G | No digital services
Czech Republic | PAL B/G (cable) / PAL D/K (broadcast) | No digital services
Denmark | PAL B/G | X.21/V.35
Diego Garcia | NTSC M | No digital services
Djibouti | SECAM K | No digital services
Dominica | NTSC M | No digital services
Dominican Republic | NTSC M | No digital services
Dubai | | No digital services
East Timor | PAL B | No digital services
Easter Island | PAL B | No digital services
Ecuador | NTSC M | No digital services
Egypt | PAL B/G, SECAM B/G | No digital services
El Salvador | NTSC M | No digital services
Equatorial Guinea | SECAM B | No digital services
Estonia | PAL B/G | No digital services
Ethiopia | PAL B | No digital services
Falkland Islands | PAL I |
| NTSC M | No digital services
Finland | PAL B/G |
France | SECAM L |
France (French Forces) | SECAM G |
Gabon | SECAM K | No digital services
Galapagos Islands | NTSC M | No digital services
Gambia | PAL B | No digital services
Georgia | SECAM D/K |
Germany | PAL B/G |
Germany (Armed Forces Network) | NTSC M |
Ghana | PAL B/G | No digital services
Gibraltar | PAL B/G | No digital services
Greece | PAL B/G | No digital services
Greenland | PAL G | No digital services
Grenada | NTSC M |
Guadeloupe | SECAM K | No digital services
Guam | NTSC M |
Guatemala | NTSC M | No digital services
Guinea | PAL K |
Guyana (French) | SECAM M | No digital services
Haiti | SECAM | No digital services
Honduras | NTSC M | No digital services
Hong Kong | PAL I |
Hungary | PAL D/K | No digital services
Iceland | PAL B/G | No digital services
India | PAL B | No digital services
Indonesia | PAL B |
| PAL B/G | No digital services
| | No digital services
Ireland | PAL I |
Isle of Man | |
Israel | PAL B/G |
Italy | PAL B/G |
Ivory Coast | SECAM | No digital services
Jamaica | NTSC M | No digital services
Japan | NTSC M |
Johnston Island | NTSC M | No digital services
Jordan | PAL B/G | No digital services
Kazakhstan | SECAM D/K | No digital services
Kenya | PAL B/G | No digital services
Korea, North | SECAM D, PAL D/K | No digital services
Korea, South | NTSC M |
Kuwait | PAL B/G | No digital services
Kyrgyz Republic | SECAM D/K | No digital services
Laos | PAL B | No digital services
Latvia | PAL B/G, SECAM D/K | No digital services
Lebanon | PAL B/G | No digital services
Lesotho | PAL K | No digital services
Liberia | PAL B/H | No digital services
Libya | PAL B/G | No digital services
Liechtenstein | PAL B/G |
Lithuania | PAL B/G, SECAM D/K |
Luxembourg | PAL B/G, SECAM L |
Macau | PAL I | No digital services
Macedonia | PAL B/H | No digital services
Madagascar | SECAM K | No digital services
Madeira | | No digital services
Malaysia | PAL B |
Maldives | PAL B | No digital services
Mali | SECAM K | No digital services
Malta | PAL B | No digital services
Marshall Islands | NTSC M |
Mauritania | SECAM B |
Martinique | SECAM K | No digital services
Mauritius | SECAM B | No digital services
Mayotte | SECAM K | No digital services
Mexico | NTSC M |
Micronesia | NTSC M | No digital services
Midway Island | NTSC M | No digital services
Moldova | SECAM D/K | No digital services
Monaco | SECAM L, PAL G | No digital services
Mongolia | SECAM D | No digital services
Montserrat | NTSC M |
Morocco | SECAM B | No digital services
Mozambique | PAL B | No digital services
Myanmar (Burma) | NTSC M | No digital services
Namibia | PAL I | No digital services
Nepal | | No digital services
Netherlands | PAL B/G |
Netherlands (Armed Forces Network) | NTSC M |
Netherlands Antilles | NTSC M |
New Caledonia | SECAM K | No digital services
New Zealand | PAL B/G |
Nicaragua | NTSC M | No digital services
Niger | SECAM K | No digital services
Nigeria | PAL B/G | No digital services
Norfolk Island | PAL B | No digital services
North Mariana Islands | NTSC M | No digital services
Norway | PAL B/G |
Okinawa | | No digital services
Oman | PAL B/G | No digital services
Pakistan | PAL B | No digital services
Panama | NTSC M | No digital services
Papua New Guinea | PAL B/G | No digital services
Paraguay | PAL N | No digital services
Peru | NTSC M | No digital services
Philippines | NTSC M |
Poland | PAL D/K | No digital services
Polynesia | SECAM K | No digital services
Portugal | PAL B/G | No digital services
Puerto Rico | NTSC M |
Qatar | PAL B | No digital services
Reunion | SECAM K | No digital services
Rumania | PAL D/G | No digital services
Russia | SECAM D/K |
Sabah and Sarawak | | No digital services
Samoa | NTSC M | No digital services
Sao Tomé E Principe | PAL B/G | No digital services
Saudi Arabia | SECAM B/G, PAL B | No digital services
Senegal | SECAM K | No digital services
Serbia | SECAM | No digital services
Sierra Leone | PAL B/G | No digital services
Singapore | PAL B/G |
South Africa | PAL I |
Spain | PAL B/G |
Sri Lanka | | No digital services
St. Kitts | NTSC M | No digital services
St. Lucia | NTSC M | No digital services
St. Pierre and Miquelon | SECAM K | No digital services
St. Vincent | NTSC M | No digital services
Sudan | PAL B | No digital services
Surinam | NTSC M | No digital services
Swaziland | PAL B/G | No digital services
Sweden | PAL B/G |
Switzerland | PAL B/G |
Syria | SECAM B, PAL G | No digital services
Tahiti | SECAM | No digital services
Taiwan | NTSC M |
Tajikistan | SECAM D/K | No digital services
Tanzania | PAL B | No digital services
Thailand | PAL B/M |
Tibet | | No digital services
Togo | SECAM K | No digital services
Trinidad and Tobago | NTSC M | No digital services
Tunisia | SECAM B/G | No digital services
Turkey | PAL B | No digital services
Turkmenistan | SECAM D/K | No digital services
Uganda | | No digital services
Ukraine | SECAM D/K |
United Arab Emirates | PAL B/G | No digital services
United Kingdom | PAL I |
United States | NTSC M |
Uruguay | PAL N | No digital services
Uzbekistan | SECAM D/K | No digital services
Venezuela | NTSC M |
Vietnam | NTSC M, SECAM D | No digital services
Virgin Islands | NTSC M | No digital services
Wallis & Futuna | SECAM K | No digital services
Yemen | PAL B, NTSC M | No digital services
Yugoslavia | PAL B/G |
Zaire | SECAM | No digital services
Zambia | PAL B/G | No digital services
Zanzibar | | No digital services
Zimbabwe | PAL B/G | No digital services

Note (1). Digital services can be obtained via private satellite services in some cases. Check with the PTT to determine which countries will allow private networks and what the conditions of service are.

VIDEOCONFERENCING RFP CHECKLIST

Define:
Application and locations involved
Category of system requested for each location
Features required for each site

Basic System Specifications
Cameras (e.g., room, auxiliary, document)
Monitors (e.g., single, dual or other)
Algorithms (e.g., H.264, G.729a)
Network supported (e.g., ISDN, T-1, IP)
Network interfaces (e.g., V.35, RS-449/422)
Network type (e.g., dedicated, switched or hybrid)
If behind PBX, who provides interface components?
Who provides CSU/DSUs, cables & incidental equipment?
Who provides NT-1 (NT-2) in ISDN applications?
If an MCU is required, how is it configured?
If an inverse multiplexer is required, how is it configured?
History of manufacturer
Request financial data on supplier
Sign non-disclosure to determine mfg. future plans
Learn about manufacturer's/distributor's partnerships
Examine distribution channels
Request information on users group
Determine desktop videoconferencing strategy

Characteristics of the system
Is the system a personal or room-system product?
Is the product software-only or software and hardware?
Integrated into single package or component-based?
If component-based, is integration also proposed?
If PC is required for operation, who provides PC?
Product's dimensions and its weight (if hardware)
State system's environmental requirements.
What documentation is provided with the product?

Proprietary Algorithms Offered
Name compression algorithms employed for video?
Name compression algorithms employed for audio?
Backward compatibility with earlier algorithms?
Picture resolutions offered with each algorithm?
Average fps across range of network speeds offered?
Maximum fps across range of speeds offered?
Echo cancellation algorithms offered?

Standards Compliance
Ask about compliance with H.32X Recs. individually
Does product offer 4CIF/CIF/QCIF or QCIF only?
Average/maximum fps at various H.320 bandwidths?
T.120/T.130 compliance?
Does the product support JPEG and MPEG? How?

Network-Specific Issues
What network arrangements does product support?
Can system transmit images using only POTS lines?
What network interfaces are required for configuration as proposed?
Who supplies interface equipment and cables?
Does proposal offer turnkey installation including network connections?
For what network speed is the codec optimized?
How is the transmission speed changed? Describe.
What LAN interfaces are available for the product?
With which ITU-T network-oriented standards does the product comply?
Does the product offer a built-in inverse multiplexer? If so, describe it. Does it comply with BOnDInG?

Videoconferencing System Features
Examine system documentation during procurement.
Compare system features relative to how well-documented they are for each product considered.
What type of system interface is provided with the system?
Is the control unit a push button-type device, a touch-screen based unit, a PC keyboard, a wireless remote, or an electronic tablet and stylus?
Can the document camera be controlled from the primary system control unit or operator's console?
If the system's primary control unit is not a PC, is a PC keyboard also used to activate or change any system features that a conferee would commonly need during a conference? Which ones?
Does the product offer camera presets? Does it offer far-end camera control?
Does the product offer picture-in-picture? In a dual-monitor arrangement, what can be seen in the window? In a single-monitor arrangement?
Can audio-only conferees be part of a conference? How many audio-only conferees can be included?
How are conferences set up? Describe the process.
Can frequently-dialed numbers be stored? How many speed dialing numbers can be stored? How are they activated?
Do codecs automatically handshake to determine a compatible algorithm?
Can the system be placed in a multipoint mode, and support a multipoint conference?
Does the product offer on-line help?
Is it backward compatible with the codec manufacturer's previous products? Describe fully.
Does the product offer continuous presence?
Does the product offer scheduling software?
Does the product offer applications sharing? What OSs (e.g., MacOS, Linux)?

Audio considerations
List the names of proprietary audio algorithms offered and describe them.
What range of frequencies is encoded with the above?
Is bandwidth assignment flexible? How are adjustments made?
What method is used to eliminate echoes?
If echo cancellation is used, is a burst of noise used to acquaint the system with the room? Or does the unit use voices to train acoustic line-side echo cancellers?
What is the echo cancellation system's convergence time measured in milliseconds?
What is the
Can participants electronically annotate shared images in real time (T.120)?
Can users at different sites interactively share, manipulate and exchange still images (T.120)?
Does the system offer very-high-resolution graphics (including CAD support)?
Is the graphics subsystem an integral component or an outboard product?
If external and PC-based, how does the graphics sub-system connect to the codec?
Can the system support the need for photorealism (X-rays, medical imaging applications)?
Is a flatbed scanner included in the price? (This is an application-driven requirement.)
If a scanner is included, what is the size of the bed?
What is the scanner's color depth (4-bit grayscale, 24-bit color, etc.)?

Multipoint Control Unit
List the proprietary video/audio algorithms supported.
Are conference ports universal?
State the MCU maximum port size. With how many ports is it proposed?
Describe the expansion process. Describe how the product can be upgraded to a more fully-featured model.
How can MCUs be cascaded to expand capacity? Does cascading consume ports? Does it create delays in end-to-end transmission? Explain.
Describe the MCU in terms of network interfaces (e.g., public, private, LAN).
What network speeds are supported?
How many simultaneous conferences can take place at any given speed, and how are the MCU bridging capabilities subdivided?
Does the system offer a director/chairperson control mode of operation? A voice-activated mode? A rotating mode? Other? Explain.
In voice-activated mode, how does the system prevent loud noises from diverting the camera?
Does the product offer full support of the ITU-T H.231/H.243 Recommendations? Explain fully.
How does the MCU provide graphics support, and what limitations exist?
Does the MCU support G.729a audio?
Does the MCU operate in meet-me, dial-out and hybrid arrangements?
After a conference is arranged, state all limitations in terms of the user's ability to add conferees or change transmission speeds.
Are conference tones provided to signal a conferee's entry to and exit from a conference?
Can audio-only participants be included?
Does the system provide for voice and video encryption? If so, which methods are used?
How is the MCU managed? Provide detailed information on the administrative subsystem.
Can the MCU store a database of sites and the codecs installed at those locations, as well as network information?
How does the network operator configure the MCU for a multipoint conference?
Can the administrative console be used to configure, test and diagnose problems with the MCU? Can it keep an event log and account for use?
What type of conference scheduling package is provided? Can conferences be scheduled in advance?
Does the MCU automatically configure itself? Can it dial out to participating sites?
Does the MCU reservation system allow the user to control videoconference rooms, or is it used just to schedule bridge ports?
Is there an additional cost for the scheduling package?

Security-Related Features
Is AES encryption supported?
Are H.233/234 and H.235v3 supported?
How does encryption differ between point-to-point and multipoint conferences?
Is the operator's console password-protected?

Miscellaneous Questions
If the system is sold through a distributor or value-added reseller, can you call the manufacturer directly?
Who will install the equipment? Do they have a local presence?
Who will provide technician and end-user training?
What is the length of the warranty and the terms of service?
Where are spare parts stocked, and what response time is guaranteed when the system fails?
What are the terms of the second-year service agreement? Cost?
How is defective equipment or software repaired or replaced? Is immediate replacement possible? In what time frame?

INSTALLATION PLANNING CHECKLIST

Physical space considerations
Acceptable and convenient location
Will excess capacity be brokered? Plan for it.
Dimensions adequate for group size
Minimal windows
Proximity to communications demarcation
Anteroom or next-group gathering space
Videoconferencing coordinator's work space
Sufficient electrical power and additional outlets
Conduit, ducts, etc. for concealing cables
Signage directing conferees to room
Sufficient privacy to meet application's needs
Security (locking door, card-key, punch-coded entry)

Interior Design
Carpeting
Wall color and treatments
Drapes, curtains or blinds
Facade wall and shelving (boardroom applications)
Table type, shape, surface and placement
Spectator seating
Chairs and upholstery
Whiteboards
Clocks showing times at local and primary remote sites
Table sign identifying site and organization

Approaches to ambient noise control
HVAC system improvements
Acoustic panels and heavy draperies
Replacement, rearrangement or addition of lighting fixtures
Double-glazed windows

Network considerations
Network installed and tested
Necessary cables on hand
CSU/DSU or NT-1 installed and tested
Network documented at completion of project

Videoconferencing peripherals
Cameras (auxiliary)
Monitors (auxiliary)
Audio bridging
Still-video (graphics) components such as document cameras
Presentation graphics subsystems
PCs for file access and file sharing applications
Electronic whiteboards
Fax machines and copiers
DVD burner & player
RGB projection system interfaces (front or rear)
Interfaces to CAD and/or scientific equipment (scopes, etc.)
I-MUX installed and tested
Multipoint conferencing unit installed and tested

Promotional and Usage-Oriented Preparation
Select personnel in remote locations to support the system
Develop and publish a videoconferencing usage policy
Determine chargeback policy
Determine scheduling system
Address scheduling issues (prioritization, bumping, etc.)
Develop an instruction sheet for scheduling
Develop etiquette tips for point-to-point and multipoint conferences
Develop tips for setting up inter-company conferences

PERSONAL CONFERENCING PRODUCT EVALUATION CRITERIA

Implementation
Client/server, server-only or client-only?
multipoint? Standards-based or proprietary? Is the system offered as a cross-platform solution? Product family? What capabilities are included in the family? Do all applications in the product family have a similar look and feel? Is the product offered in multiple languages? Platforms Supported Which Microsoft Windows versions Which Mac OS versions UNIX (Linux, Solaris, BSD, other) Hardware Configuration What class of processor is required (minimum / recommended)? Processor clock speed (minimum / recommended)? How much RAM is required (minimum / recommended)? How much free space on the hard drive is required? What number & type of slots (ISA, EISA, PCI) does the system require? Hardware Supplied with Product V.90 modem Camera Sound card Microphone Speakers Video capture board Hardware-level codec NT-1 for ISDN BRI All required cables Networks and Protocols Supported Circuit-switched digital ISDN BRI Other circuit-switched (T-1, E-1, ISDN PRI) Multirate ISDN (Nx64, H0, H11, H12) IP networks (TCP or UDP) Frame Rates, Data Transfer Rates Range of operating data rates? Optimal data rate? Average frame rate at optimal data rate? Maximum frame rate at optimal data rate? Can users control bandwidth allocation between voice, video & data? Standards Compliance H.320 compatible? H.264 supported? H.320-based multipoint support? H.321 ATM network support? H.323 supported? H.324 supported? If so, is a V.90-compliant modem included? H.323-compliant gateway between H.320 and packet network? Picture Viewing and Resolution What is the maximum / minimum size of the viewing window? Can the user scale the viewing window(s) using a mouse? What video resolutions are supported (SQCIF, QCIF, CIF, 4CIF, other)? Can the user adjust the picture resolution? Can users make on-the-fly trade-offs between frame rate and resolution? Does the product provide a local video window for self-view? Can users control the viewing window screen quadrant? Does the product offer far-end camera control?
Making and Receiving Calls Does the product offer "voice-call first" (to add a video connection)? Does an on-line directory permit "scroll & click" to place a video call? Is an incoming caller's ID captured and displayed? Is an incoming caller's number automatically stored for callback (ISDN)? Audio Sub-system In-band or out-of-band audio? Are microphone and speakers included? Is the microphone built into the camera? Is a headset or handset included? Audio/video synchronization (subjective evaluation of) Audio-delivery quality (subjective evaluation of microphone) Audio-receive quality (subjective evaluation of speakers) Camera Features Does the camera offer pan/tilt/zoom? Can the camera swivel in order to act as a document camera? Choices of focal lengths? Color or black and white? Does the camera offer brightness, contrast, color, and tint adjustments? Does the camera include an audio/video shutter for privacy? Data Collaboration / Graphics / Document Conferencing Features & Support T.120 compliance? Document conferencing / data collaboration? Application viewing / screen sharing? Bit-mapped whiteboard (clipboard) capabilities? File transfer? Can the user control the flow of files during a file transfer? If so, how? Can conferees prevent network floods during multipoint file transfers? Chat (message pad) features? Annotation tools (markers, highlighters, pens, etc.)? Drawing tools (inclusion and sophistication of)? Pointing tools? Slide show capabilities? Screen-capture and storage as "photo"? Are full- or partial-screen video "freeze frame" snapshots offered? What format are snapshots stored in? Intranet / Internet Adaptations Is the client browser-based? RSVP client capabilities? UDP (vs. TCP) sessions? Does the system support the User Location Service (IP-network)? Is a user automatically logged onto the ULS when they sign on to their computer? Multipoint Capabilities Does the system support multipoint conferencing? How many parties? What conference management
features are provided? Multipoint Control Unit (bridge) required? Is it included? Multipoint file transfers supported? Multipoint capabilities-broadcast one-to-many? Multipoint capabilities-can all desktops interact with one another? Multipoint capabilities-maximum number of simultaneous conferees? Password protection? Conference attendance control features Standards supported When one first signs on to a conference, does the system display the number of meeting participants and the names of the other sites? Miscellaneous Features Does the product offer traditional "voice" features (do-not-disturb, call forward, call waiting and hold)? Does the product offer video messaging? What compression method is used to compress audiovisual files? Context-sensitive on-line help or application "assistant" to guide the user? Audio-only callers included in conference? Call center features? MPEG video file playback from video sources? Cost per desktop Cost of server Concurrent-use licensing cost Other miscellaneous costs Installation and Performance Tests Ease-of-installation, as rated by the microcomputer-oriented trade press & surveys? Trial installation. How easy is the product to install? Does the product include a quick-install feature? Does the package also include an uninstall feature? Is documentation adequate? Does the system minimize the transfer of non-essential data (e.g., mouse movements) in performance tests? How fast can the system transfer files? Did you encounter any bugs that crashed the application during performance tests? Supplier Support / Commitment to Customer's Success What support (applications, installation, testing) does the supplier provide? On-screen diagnostics and messages provided? Are application and installation notes offered on line? Does the manufacturer offer low-cost (no cost) upgrades to allow the installation to keep pace with product evolution? Does the manufacturer publish a list of known bugs and product shortcomings and provide information on
when and how these problems will be corrected? Does the supplier provide a test-center that you can call when you are trying to get your application working? GLOSSARY OF TERMS 3-D A way to visually describe an image using height, width and depth components so that the object appears to have physical depth in relation to its surroundings. 3-D modeling is the process of defining the shape and related characteristics of objects that can later be rendered in 3-D form. 4CIF The ITU-T's H.263 coding and compression standard specifies a common intermediate format to provide resolutions that are four times greater than that of CIF. Support of 4CIF in H.263 enables the standard to compete with higher bit-rate coding schemes such as MPEG. 4CIF specifies 576 non-interlaced luminance lines, each of which contains 704 pixels. Support of 4CIF in H.263 is optional. 4CIF is also referred to as Super CIF, which was defined in Annex IV of H.261 in 1992. 16CIF As is true with 4CIF, the ITU-T's H.263 standard specifies, but does not mandate, support of 16CIF, a picture resolution that is composed of 1152 non-interlaced luminance lines, each of which contains 1408 pixels. At this resolution, H.263 can provide resolutions about twice as good as NTSC television. A/D conversion Analog-to-digital conversion. A/D converters accept a series of analog voltages and describe them via a series of discrete binary-encoded values that approximate the original signal. This process is known as digitizing or sampling. D-to-A converters reverse the process.
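To make the sampling idea concrete, here is a minimal Python sketch of A/D conversion as just described: an analog waveform (a sine tone standing in for the input voltage) is measured at discrete instants, and each measurement is mapped to one of 256 binary codes. The sample rate, tone frequency and function name are illustrative assumptions, not values taken from this glossary.

    import math

    SAMPLE_RATE = 8000   # samples per second (illustrative)
    LEVELS = 256         # an 8-bit quantizer yields 256 discrete values

    def sample_and_quantize(freq_hz, n_samples=8):
        # Sample a sine wave and quantize each sample to an 8-bit code.
        codes = []
        for i in range(n_samples):
            v = math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)  # analog value, -1..1
            codes.append(round((v + 1) / 2 * (LEVELS - 1)))        # map to 0..255
        return codes

    print(sample_and_quantize(440))  # eight successive 8-bit samples of a 440 Hz tone

A D-to-A converter would walk the same mapping in reverse, turning each code back into a voltage.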
absorption loss The attenuation of an optical signal within a transmission system, specified in dB/km. access A method of reaching a carrier or a network. In the world of wide area networking, access channels (which may be copper, microwave, or fiber) carry a subscriber to a carrier's point of presence (POP). In the world of local area networking, access methods are used to mediate the use of a shared bus. access method The technique and protocols that govern how a communications device uses a local area network (LAN). The IEEE's 802 standards, 802.3 through 802.12, specify access methods for LANs and MANs. acoustic echo canceller An AEC is used to eliminate return echoes in an acoustically-coupled tele-meeting. AECs are used in full-duplex audio arrangements in which all microphones are active at all times. This situation causes an increase in ambient noise that an AEC is designed to mediate. acoustics The qualities of an enclosed space that define how sound is transmitted, its clarity and how the original signal will be distorted. active video lines The lines that convey information in a television signal, e.g., all except those that occur in the horizontal and vertical blanking intervals. additive color Direct light that is visible directly from the source: the sun, light bulbs, video monitors. The wavelengths of direct light can be viewed in three primary colors: red, green and blue (RGB). Combinations of these three frequencies (for that is what colors are) result in most perceivable color variations. additive primaries By definition, three primary colors result when light is viewed directly as opposed to being reflected: red, green and blue (RGB). According to the tri-stimulus theory of color perception, blending some mixture of these three lights can adequately approximate all other colors. This theory is harnessed in color television and video communications. addressable The ability of a device to receive communications over a network whereby the unique destination of the device can be specified. Typically, an address is a set of numbers (such as a telephone number) that allows a message to be intercepted and interpreted for purposes of an application. ADPCM CCITT Recommendation G.721. Adaptive Differential Pulse Code Modulation. A technique for converting an analog signal into digital form. It is based on standard sampling at 8 kHz and generates a 32 Kbps output signal. ADPCM was extended in G.726, which replaces both G.721 and G.723, to allow conversion between 64 Kbps PCM and 40, 32, 24 or 16 Kbps channels.
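The bit rates quoted in the ADPCM entry follow directly from the 8 kHz sampling rate; this quick Python check (the helper function is ours, not part of any recommendation) reproduces them:

    SAMPLE_RATE_HZ = 8000  # the telephony sampling rate used by PCM and ADPCM

    def kbps(bits_per_sample):
        # Bit rate produced by coding each 8 kHz sample with the given number of bits.
        return SAMPLE_RATE_HZ * bits_per_sample / 1000

    print(kbps(8))  # 64.0 Kbps -- standard PCM
    print(kbps(4))  # 32.0 Kbps -- ADPCM's 4-bit difference codes
    print(kbps(2))  # 16.0 Kbps -- the lowest G.726 rate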
ADSL Asymmetrical Digital Subscriber Line. A method of sending high-speed data over existing copper-wire twisted-pair POTS lines. ADSL, developed by Bellcore and deployed by the telephone companies, uses a modulation technique known as discrete multitone (DMT) to transmit multimegabit traffic more slowly upstream than downstream. ADSL will not work over portions of the network that attenuate signals above 4 kHz. It also cannot be used where there are bridged taps or cross-coupled interference. affine map A function that identifies similar frequency patterns in an image and uses one to describe all that are similar. AIN Advanced Intelligent Network. A digital network architecture based on out-of-band signaling that maximizes the intelligence, efficiency, and speed of the PSTN. AIN relies on databases that store vast amounts of data about network nodes and end-points and which are accessed across a packet-switched network that is separate from the one that carries customer traffic. AIN allows moment-to-moment call routing, automatic number identification (ANI), customer call-control, and more. algorithm A computational procedure that includes a prescribed set of processes for the solution of a problem in a finite number of steps; the underlying numerical or computational method behind a code or process. Algorithms are fundamental to image compression (both motion and still), because they allow an information-intensive file or transmission to be squeezed to a more economical size. alias Unwanted signals generated during the A-to-D conversion process. Aliasing is typically caused by a sampling rate that is too low to faithfully represent the original analog signal in digital form; typically, this occurs at a sampling rate that is less than half the highest frequency to be sampled.
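The half-the-highest-frequency rule can be shown in a few lines of Python. The 8 kHz sample rate is an assumed example; the folding arithmetic returns the frequency a sampled system actually "sees":

    SAMPLE_RATE = 8000  # Hz (illustrative)

    def apparent_frequency(f_hz, fs_hz=SAMPLE_RATE):
        # Fold an input tone into the 0..fs/2 band, as sampling does.
        f = f_hz % fs_hz
        return min(f, fs_hz - f)

    print(apparent_frequency(3000))  # 3000.0 -- below half the sample rate, reproduced faithfully
    print(apparent_frequency(5000))  # 3000.0 -- above it, aliased down to a spurious tone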
aliasing A subjectively disturbing distortion in a video signal that manifests in different ways depending on the cause. When the sampling rate interferes with the frequency of program material, aliasing takes the form of artifact frequencies known as sidebands. Spectral aliasing is caused by interference between two frequencies such as the luminance and chrominance signals and appears as herringbone patterns, wavy lines where straight lines should be, and loss of color fidelity. Temporal aliasing is caused when information is lost between line or field scans. It appears when a video camera is focused on a CRT. The lack of scanning synchronization produces an annoying flicker on the receiving device's screen. amplifier A device that receives an input signal in wave form (analog) and gives a magnified signal as an output. amplify To increase the magnitude of a voltage or a waveform in order to increase the strength of the signal. amplitude The magnitude of a waveform or voltage. Greater amplitude results when waves are set in motion with greater force. The term amplitude is also used to describe the strength of a signal. amplitude modulation AM. A method of changing a signal by varying its height or amplitude in order to superimpose it on a carrier wave. Used to impress radio waves (audio or video) onto a carrier in analog transmissions. analog Representations of numerical values by physical variables such as voltage and amplitude. Analog signals are continuously varying; indeed, depending on the precision with which they are sampled/measured, they can vary infinitely. By this we mean that each sample can produce a value that corresponds to the unique magnitude of the variable. An analog signal is one that uses electrical transmission methods to duplicate an original waveform, and thereby capture and convey these unique magnitudes. analog transmission A method of sending signals whereby the transmitted signal is analogous to the original signal; the original waveform is represented by sending a stream of continuously varying electrical waves. animation The process used to link a series of still images to create
the effect of a motion sequence. Annex A (To Recommendation H.261). Inverse Transform Accuracy specification that defines the maximum tolerable error thresholds for the DCT. Annex B (To Recommendation H.261). Sets forth a Hypothetical Reference Decoder. Annex C (To Recommendation H.261). Specifies the method by which the video encoder and decoder delays are established for a particular H.261 implementation. ANSI The American National Standards Institute. A non-governmental industry organization that develops and publishes voluntary standards for the US. ANSI has published standards for out-of-band signaling, for voice compression, for network performance, and for various electrical and network interfaces. antenna An aerial or other device that collects and radiates electromagnetic energy. Application Programmer Interface A set of formalized software calls and routines, which can be referenced by an application program to access underlying network or other services. application An application is software that performs a particular useful function for a user, e.g., a spreadsheet tool or a word processing facility. Examples include word processing, spreadsheets, distance learning, document conferencing, and telemedicine. application sharing A collaborative conferencing feature that provides personal conference participants with read/write access to an application, even when one or more of these participants does not have the application running at their desktop. In application sharing, one user launches and controls the application. application viewing In personal conferencing, the ability of one system to screen-slave off another system. Every keystroke or mouse movement made by the user who runs the application can be seen by the user at the other end, even though he/she is not running the application and has no control over it. architecture The design guidelines, physical and conceptual organization, and principles that describe how a system or network will support an activity.
Architecture discusses scalability, security, topology, capacity and other high-level attributes. artifacts Spurious effects introduced to a signal that result from digital signal processing. These effects manifest as jagged edges on moving objects and flicker on fine horizontal edges. ASCII American Standard Code for Information Interchange, a digital coding scheme that is capable of representing 128 (text and control) characters. ASCII is a 7-level code for asynchronous character transmission over a network. It is a universal code. ASIC Application-Specific Integrated Circuit. A chip designed for a specific application or purpose. aspect ratio The ratio of the width to the height of an image or video displayed on a monitor. NTSC and PAL television use an aspect ratio of 4 wide to 3 high, which is expressed 4:3. asymmetrical compression Techniques in which the decompression process is not the reverse of the compression process. Asymmetrical compression is more processing-intensive on the compression side so that video images can be easily decompressed at the desktop or in applications in which sophisticated codecs are not cost effective. asynchronous Lacking in synchronization. A method of transmitting data over a network using a start bit at the beginning of a character and a stop bit at the end. The time interval between characters may be of varying lengths. In video, a signal is asynchronous when its timing differs from that of the system reference signal. ATM Asynchronous Transfer Mode (also known as cell relay). ATM provides a single network interface for audio, video, image and text with sufficient flexibility for handling these different media types. The ATM transport technique uses a multiplexing scheme in which data are divided into small but fixed-size units called cells. Each cell contains a 48-byte information field and five bytes of header information, for a total cell size of 53 bytes. Although it is a packet switching technique, ATM can achieve the integration of all types of traffic, including those that require isochronous service.
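Because the cell format is fixed at 48 payload bytes plus a 5-byte header, ATM's overhead is a constant that is easy to compute; the Python sketch below is our own illustration, not part of the standard:

    CELL_BYTES = 53      # 5 header bytes + 48 payload bytes
    PAYLOAD_BYTES = 48

    def cells_needed(message_bytes):
        # Number of cells required to carry a message (ceiling division).
        return -(-message_bytes // PAYLOAD_BYTES)

    print((CELL_BYTES - PAYLOAD_BYTES) / CELL_BYTES)  # ~0.094: a fixed 9.4% header overhead
    print(cells_needed(1500))                         # 32 cells for a 1500-byte packet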
ATSC The Advanced Television Systems Committee. This group was formed by the Joint Committee on Inter-Society Coordination (JCIC) to establish voluntary technical standards for advanced television systems, including HDTV. In April 1995, the ATSC approved the Digital Television Standard for HDTV Transmission. ATSC A/53 The digital television standard for HDTV transmission proposed by the Grand Alliance and approved by the Technical Subgroup of the Federal Communications Commission (FCC) Advisory Committee. The standard specifies the HDTV video formats, the audio format, data packetization, and RF transmission. New television receivers will be capable of providing high-resolution video, CD-quality multi-channel sound, and ancillary data delivery to the home. attenuation The decrease in the amplitude of a signal. In video communications this usually refers to power loss in electromagnetic signals between a transmitter and the receiver during the process of transmission. Thus, the received signal is weaker or degraded when compared to the original transmission. ATV Advanced TV. Any system of distributing TV programming that results in better video and audio quality than that offered by the NTSC standard. ATV is based on digital signal processing and transmission. HDTV can be considered one type of ATV, but systems can also carry multiple pictures of lower quality. audio In video communications, electrical signals that carry sounds. The term also describes sound recording and transmission systems-speech pickup systems, transmission links that carry sounds, amplifiers. audio bridge Equipment that mixes multiple audio inputs and feeds back composite audio to each station after it removes the individual station's input. This equipment may also be called a mix-minus audio system. auto focus In a camera, a device for measuring the distance of the lens from a given object is included to automatically set the lens-film distance. In videoconferences, when
there is almost no set-up time and subjects may be moving around from time to time, this is particularly valuable. auto iris A process of correlating aperture size to the amount of light entering the camera. Auto iris produces unpredictable quality in video production because white backgrounds or clothing will cause a camera to close down the lens when a person's face would be the desired gauge for the f-stop. Although it is a good feature in a videoconferencing camera, auto iris is not as effective as manual adjustment of the camera's iris in video production. AVI Audio Video Interleaved. The filename extension for compressed video usually used under Microsoft Windows. AVI decompression usually takes place in software. AVI compression works on key frames to achieve the maximum possible entropy through redundancy elimination. After key frames are intraframe compressed, AVI then constructs subsequent delta frames by recording only interframe differences. AVI competes with MPEG-1, although MPEG-1 produces higher-quality video. AVT Audio Visual Terminal. A term used in the ITU-T's H.320 specification. It refers to a videoconferencing implementation that can deliver an audio and video signal. AWG American Wire Gauge, a standard measuring technique used for non-ferrous conductors (copper, aluminum). The lower the AWG, the thicker the wire; 22 AWG cable is thicker than 26 AWG cable. B Blue (as in RGB). B channel The ISDN circuit-switched bearer channels, capable of transmitting 64 Kbps of digitized information. B frame In MPEG, the B frame is a video frame that is created using bi-directional interframe compression. Computationally demanding, B frames are created by using I frames and P frames. Through bi-directional encoding, the P (predictive) frame, which is created by using a past frame as a model, is compared to an I (intraframe coded) frame: a frame that has had the spatial redundancy eliminated from it, without reference to other frames. Using interpolation, the codec uses hints derived by analyzing past and predicted events to develop a "best-guess" present frame.
back porch The portion of a video signal that contains color burst information and which occurs between the end of the horizontal synch pulse and the start of active video. back projection When a projector is placed behind a screen (as it is in television and videoconferencing applications) it is described as a back projection system. The viewer sees the image via the transmission of light, as opposed to the reflection used in front projection systems. backbone network A transmission facility used to interconnect distribution networks of typically lower speed. A backbone network often connects major sites (hubs). From these sites, spoke-like tail circuits (spurs) emanate and, in turn, often terminate in minor hubs. bandwidth A term that defines the information carrying capacity of a channel-its throughput. In analog systems, it is the difference between the highest frequency that a channel can carry and the lowest, measured in hertz. In digital systems the unit of measure of bandwidth is bits per second. bandwidth-on-demand The ability to vary the transmission speed in support of various applications, including videoconferencing. In videoconferencing applications, an inverse multiplexer or I-MUX takes a digital signal that comes from a codec and divides it into multiple 56- or 64 Kbps channels for transmission across a switched digital network. On the distant end, a compatible I-MUX recombines these channels for the receiving codec and, therefore, ensures that, even if the data takes different transmission paths, it will be smoothly recombined at the receiving end.
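A hedged sketch of the channel arithmetic behind bandwidth-on-demand (the function is our own; real inverse multiplexers must also sequence the channels and equalize their delays, which this ignores):

    CHANNEL_KBPS = 64  # one switched digital B channel

    def channels_for(codec_kbps, channel_kbps=CHANNEL_KBPS):
        # How many channels an I-MUX must dial to carry a codec's signal.
        return -(-codec_kbps // channel_kbps)  # ceiling division

    print(channels_for(128))  # 2 channels for a 128 Kbps call
    print(channels_for(384))  # 6 channels for a 384 Kbps conference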
BAS Bit-rate Allocation Signal. Used in Recommendations H.221 and T.120 to transmit control and indication signals, commands and capabilities. baseband In a Local Area Network (LAN) context, this means a single high-speed information channel available to and shared by all the terminals or nodes on the network. Because there is sharing of this resource, means have to be provided to control access to the channel and to minimize information "collisions" and distortions caused by more than one terminal transmitting at the same time. Different types of LANs use different access methods to avoid collisions. Baseband LANs present a challenge to companies that wish to put video over their networks because video requires isochronous service (i.e., the delivery of information is smoothly timed). Baseline Sequential JPEG The most popular of the JPEG modes. It employs the lossy DCT to compress image data as well as lossless processes based on DPCM. The "baseline" system represents a minimum capability that must be present in all Sequential JPEG decoder systems. In this mode, image components are compressed either individually or in groups. A single scan pass completely codes a component or group of components. If groups of components are coded, the data is interleaved; this allows color images to be compressed and decompressed with a minimum of buffering. Basic Rate ISDN See BRI. BBC British Broadcasting Corporation, formed in 1923 as the monopoly radio, and later television, broadcaster in the United Kingdom. Also used as an abbreviation of background color cancellation. Bell Operating Company Any of the 22 regulated telephone companies that were "spun off" from AT&T during divestiture. The BOCs (or Regional Bell Operating Companies-RBOCs) are grouped into RBHCs-Regional Bell Holding Companies such as Nynex, BellSouth and others. Bellcore An abbreviation for Bell Communications Research. Bellcore is a resource for software engineering and consulting that created many public network architectures for the Regional Bell Holding Companies (RBHCs) over the years. Formed to take the place of Bell Labs, which, after divestiture, severed all formal ties with the BOCs, it was owned by all seven RBHCs until the fall of 1996. At that time the RBHCs sold it to Science Applications International Corporation (SAIC), a company that specializes in government consulting for the
Defense Department's Advanced Research Projects Agency (DARPA) and other federal customers. B-frame A mandatory MPEG picture coding technique that provides bi-directional interframe compression and which uses interpolation to predict a current frame of video data based on a past frame and a "future" frame. binary A method of coding in which there are only two possible values, 0 and 1, for a given digit. Each binary digit is called a "bit." binary large objects BLOBs. Events on a network caused by the transmission of bit-intensive images that cause bottlenecks. B-ISDN Broadband ISDN. The ITU-T is developing the B-ISDN standard, incorporating the existing ISDN switching, signaling, multiplexing and transmission standards into a higher-speed specification that will support the need to move different types of information around the public switched network. bit Binary Digit. The basic signaling unit in all digital transmission systems. bit plane The memory used to represent, on a VDT, one bit per pixel. Multiple bit planes can be introduced to produce deeper color and, as the number of bit planes increases, so does the color resolution. One bit plane yields two colors (monochrome). Two yield four colors (00, 01, 10, 11), four can describe 16 colors, and so on. bit rate The number of bits of information transmitted over a channel in a given second. Typically expressed in bps. bit-block transfer Bit-BLT. The movement of an associated group of pixels around on a screen. When a window is opened or moved around on a PC or X-terminal, a Bit-BLT occurs. bitmap The total of all bit planes used to represent a graphic. Its size is measured in horizontal, vertical and depth of bits. In a one-bit (monochrome) system there is only one bit plane. As additional planes are added, color can be described. Two bit planes yield four possible values per pixel, eight yield 256, and so on.
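The doubling rule in the bit plane and bitmap entries is simply powers of two; a short Python loop makes the progression explicit:

    # Each added bit plane doubles the number of colors a pixel can take on.
    for planes in (1, 2, 4, 8, 24):
        print(planes, "bit plane(s):", 2 ** planes, "colors")

One plane gives 2 colors, eight give 256, and a 24-bit system describes 16,777,216 colors.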
black level The lowest luminance level that can occur in video or television transmission and which, when viewed on a monitor, appears as the color black. blanking interval Period during the television picture formation when the picture is suppressed to allow the electron gun to return from right to left after a line (horizontal blanking) or from top to bottom after each field (vertical blanking). blanking pulses The process of transmitting pulses that extinguish or blank the reproducing spot during the horizontal and vertical retrace intervals. block In H.261, a block consists of 8 pixels by 8 pixels. It is the lowest element in the hierarchical video multiplex structure, which, at the top of the hierarchy, includes the picture, then a group of blocks, then a macroblock, and the individual blocks that comprise the macroblock. A block can be of two types, luminance or color difference. blue One of the three additive primaries; the B in RGB. BNC Refers to Bayonet Neill-Concelman. A twist-lock connector widely used for the connection of video cables. board Boards consist of a flat backing made of insulating material and inscribed with conductive circuits etched on their surface. A fully-prepared circuit board is meant to be permanently affixed into a system, as opposed to a module that is meant to slide in, although the terms are now used interchangeably. BOC See Bell Operating Company. Also referred to as Regional Bell Operating Company or RBOC. BOnDInG Bandwidth On Demand Interoperability Group. This consortium of over 30 vendors developed the standard for inverse multiplexing that carries their name. Version 1.0 of the standard, approved in August 1992, describes four modes of inverse multiplexer interoperability. It allows inverse multiplexers from different manufacturers to subdivide a wideband signal into multiple 56- or 64 Kbps channels, pass these individual channels over a switched digital network and recombine them into a single high-speed signal at the receiving end. bps The speed at which bits are transmitted over a communications medium; in other words, the number of bits that pass a given point in a communications line in one second. The term
"bps" is also used more generically to describe the information-carrying capacity of a digital channel. branch Part of a cable television distribution system. Branches in the network are analogous to tree limbs that attach to a main trunk. Branches provide separate locales or communities with cable television service. Branch and tree systems are being replaced with fiber-optic distribution systems in which the cable television head-end is connected via fiber optics Video Communication: the Whole Picture to a local hub. Basic Rate Interface. In ISDN there are two interfaces, the BRI and the PRI or Primary Rate Interface. The BRI offers two circuit-switched B (bearer) channels of 64 Kbps each and one packet-switched D (delta) channel that is used for exchanging signals with the network. Known in Europe as the Basic Rate Access or bridge A bridge connects three or more conference sites SO that they can simultaneously communicate. In video communications, bridges are often called MCUs, multipoint conferencing units. In IEEE 802 parlance, a bridge is a device that interconnects LANs or LAN segments at the data-link layer of the OSI model to extend the LAN environment physically. brightness The luminance portion of a television or video signal. broadband The term applied to networks that have bandwidths significantly greater than that found in telephony networks. Broadband systems can carry a large number of moving images or a vast quantity of data simultaneously. Broadband techniques usually depend on coaxial or optical cable for transmission. They utilize multiplexing to permit the simultaneous operation of multiple channels or services on a single cable. Frequency division multiplexing or cell relay techniques can both be used in broadband transmission. broadcast To send information to two or more receiving devices simultaneously. The term originated in farming in which it referred to the scattering of seeds. Now it is used to describe the transmission of radio and television signals. broadcast quality In the US thi
corresponds to the NTSC system's 525-line, 30 fps, 60-fields-per-second audio-video delivery system. It is also a subjective concept, used to describe an audiovisual signal that delivers quality that appears to be approximately as good as that of television. broadcasting A means of one-way, point-to-multipoint transmission. For our purpose, we will consider this word to have two meanings. First, it is the relaying of audio/visual information across the frequency spectrum, where it propagates in free space and is picked up by properly equipped antennas. Second, it is the placement of information on digital networks (LAN, MAN or WAN) which can support many different applications, including cable television. BTV See business television. buffer A storage reservoir designed to hold digital information in memory. Used to temporarily store data when the circuit used to transmit it is engaged or when differences in speed are involved. burst To send a group of bits in data communications, typically in a baseband transmission scheme. A color burst is used for synchronization in the NTSC standard for color television. bursty data Information that flows in short intense data groupings (often packets) with relatively long silent periods between each transmission burst. bus A common path shared by multiple input and output devices. In the computer world a bus can be the short cable link between terminals networked in an office; in the world of circuitry a bus can be a thin copper wire on a printed circuit board. In video production, there are program buses that determine what is sent to the record deck, preview buses that allow a video source to be shown on a separate monitor, and mixing buses which work with special effect generators and which allow separate video signals to be combined. business television Point-to-multipoint videoconferencing. Often refers to the corporate use of video for the transmission of company meetings, training and other one-to-many broadcasts. Typically incorporates satellite transmission methods and is migrating from analog to digital modulation techniques. Also known as BTV. B-Y One of the color signals of a color difference video signal: the blue-minus-luminance signal. The formula for deriving B-Y is -.30R, -.59G and +.89B.
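The positive sign on the blue term follows from the luminance equation on which the color-difference formulas are based; using the same rounded weights:

    Y = .30R + .59G + .11B
    B - Y = B - (.30R + .59G + .11B) = -.30R - .59G + .89B

which is the formula given above.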
byte A group of eight bits, usually the smallest addressable unit of information in a data memory storage unit. Also known as an octet. cable A number of insulated metallic conductors or optical fibers assembled in a group and covered by a flexible protective sheath. Sometimes used in a slang sense to refer to cable television. Cable Act of 1984 An Act passed by Congress that deregulated most of the CATV industry, including rates, required programming and municipal fees. The FCC was left with virtually no jurisdiction over cable television except among the following areas: (1) registration of each community system prior to the commencement of operations; (2) ensuring subscribers had access to an A-B switch to permit the receipt of off-the-air broadcasts as well as cable programs; (3) carriage of television broadcast programs in full without alteration or deletion; (4) non-duplication of network programs; (5) fines or imprisonment for carrying obscene material; and (6) licensing for receive-only earth stations for satellite-delivered pay cable programming. The FCC could impose fines on CATV systems violating the rules. The Cable Reregulation Act of 1992 superseded this Act. cable modems Cable modems are external devices that link PCs to cable television systems' coaxial networks to provide broadband connectivity. They work by modulating the Ethernet data that comes out of a PC, and converting it to a specific frequency to send it over the cable network. The cable modem also receives and demodulates incoming data, and re-converts it into Ethernet format. To the PC, a cable modem looks and acts like an Ethernet-based connection. Cable Reregulation Act Reregulation Bill 1515, which passed Congress in October of 1992 and forced the FCC to regulate cable television. The Act
defined allowable monthly rates for Basic service, Expanded Basic service, equipment and installation. Rates must now conform to these FCC benchmarks and can be reduced if too high. Another provision of the Act requires cable television companies to sell cable programming to DBS operators and owners of home satellite dishes. The Act places a huge regulatory burden on the understaffed FCC. President Bush vetoed it in his last months of office, but Congress overrode the veto. camcorder Cameras and video recorder systems packaged as a whole that permanently integrate camera, recorder and microphone components. Camcorders are used for remote production work and consumer activities. Cameo Macintosh-based personal videoconferencing system announced by Compression Labs in January of 1992. Developed jointly with AT&T and designed to work over ISDN lines and, most recently, Ethernet LANs. The Cameo transmits 15 fps of video and requires an external handset or headset for audio transmission. camera In video, an electronic (or in the past electromechanical) device used to convert visual images into electrical impulses. The camera scans an image and describes the light that is present using an optical system and a light-sensitive pick-up tube. carrier A term used to refer to various telephone companies that provide local, long distance or value added services; alternately, a system or systems whereby many channels of electrical information can be carried over a single transmission path. carrier wave A single frequency that can be modulated by another wave that contains information. Thus, the information contained in the second wave form is superimposed on the carrier for the purpose of transmitting it. cathode ray tube Developed by the German physicist Karl Ferdinand Braun, the CRT is a glass picture tube, narrow at one end and wide at the other. The narrow end contains a negative terminal called a cathode. The cathode emits a stream of electrons. These electrons are focused, or beamed,
re with a "gun" to "paint" an image on a luminescent screen at the wide end. The inside of the wide end is coated with phosphors that react to the electron beam by lighting up, thus creating a picture. CRTs are used in TV receivers, oscilloscopes, PC monitors, and video displays. In video cameras, they are part of the scanning mechanism. Community Antenna Television. Developed in 1958, this technology was first used to carry television programming to areas where television service was not available. The term is now used to refer to cable television; which is a method of distributing multi- channel television signals to subscribers via a broadband cable or fiber optics networks. Early systems were generally branch-and-tree types, with all programs transmitted to all subscribers, who used a channel selection switch to indicate which program they wanted. Constant Bit Rate. A feature offered with isochronous service and required for real-time interactive video and voice communications. Charge coupled device. Used in cameras and telecines as an optical scanning mechanism. It consists of a shift register that stores samples of analog signals. An analog charge is sequentially passed along the device by the action of stepping voltages and stored in potential wells formed under electrodes. The charge is moved from one well to another by the stepping voltages. Comité Consultatif International Radio- communications. An organization, part of the United Nations, that sets technical standards for international television systems as part of its responsibilities. The CCIR is now known as the ITU-R. CCIR Rec. 656 The international standard that defines the electrical and mechanical interfaces for digital TV that operates under the CCIR-601 standard. It defines the serial and parallel interfaces in terms of connector pinouts as well as synchronization, blanking and multiplexing schemes used in these interfaces. Glossary of Terms CCIR Rec. 601 An internationally agreed-upon standard for the digital encoding of component color telev
CCIS Common Channel Inter-Office Signaling. In this scheme, which is used for ISDN, the signaling information is carried out-of-band over a special packet-switched signaling channel. CCITT Abbreviation of Comité Consultatif International Téléphonique et Télégraphique, an organization that sets international telecommunications standards. The CCITT is now called the International Telecommunications Union's Telecommunications Standardization Sector, or ITU-T. CCTV Closed circuit television. Typically used in security and surveillance applications and usually based on slow-scan technology. CD A high-capacity optical storage device that measures 4.75 inches in diameter and which contains multimedia or audio-only information. Originally developed for sound, CD technology was quickly seen as a storage medium for large amounts of digital data of any type. The information on a CD is digitally encoded in the constant linear velocity (CLV) format, which replaced the older CAV (constant angular velocity) format. CD-i Philips Compact Disc-interactive specification, which embraces the same storage concept as CD-ROM and Audio CD, except that CD-i also stores compressed
a compression technique used in satellite broadcast systems. CDV is the technique used in CLI's SpectrumSaver system to compress a NTSC or PAL analog TV signal SO that it can be transmitted via satellite in as little as 2 MHz of bandwidth. CellB A Sun Microsystems Computer Corporation- proprietary video compression encoding technique developed by Michael F. Speer. cell relay The process of transferring data by dividing all transmissions (voice, video, text, image, etc.) into 53- byte packets called cells. A cell has 48 bytes of information and 5 bytes of address. The objective of cell relay is to develop a single high-speed network based on a switching and multiplexing scheme that works for all data types. Small cells favor low-delay, a requirement of isochronous service. Code-Excited Linear Prediction, a low-bit audio encoding method, a low-delay variation of which is used in the ITU-T's G.728 compression standard. channel A physical transmission path along which signals can be sent, e.g., a video channel. charge-coupled device CCD (full name Interline Transfer Charge-Coupled Device or IT CCD). CCDs are used as image sensors in an array of elements in which charges are produced by light focused on a surface. They are specialized semiconductors, based on MOS technology and consist of a rectangular array of hundreds of thousands of light-sensitive photo diodes (pixels). Light from a lens is focused onto the pixels, and thereby releases electrons (charges) which accumulate in the photo diodes. The charges are periodically dumped into vertical shift registers and moved by charge-transfer SO they can be amplified. An integrated circuit. The physical structure upon which circuits are fabricated as components of systems such as memory systems, and coding and Glossary of Terms decoding systems. chroma The color information in a television or video signal composed of hue and saturation. chromaticity The quality of light, in terms of its color, as defined by its wavelength and purity. Chromaticity charts describe this com
bination of hue and saturation, independent of intensity. The relative proportion of R, G and B determines the color perceived. chrominance The combination of hue and saturation that, taken together with luminance (brightness), define color. Commission Internationale de l'Eclairage, an international body that specifies colors based on their frequencies. Common Intermediate Format, an optional part of the ITU-T's H.261 and H.263 standards. CIF specifies 288 non-interlaced luminance lines, that each contain 352 pixels and 144 chrominance lines that contain 176 pixels. CIF is to be sent at frame rates of 7.5, 10, 15 or 30 per second. When operating with CIF, the number of bits that result cannot exceed 256 K bits (where K equals 1024). Cinepak A proprietary software-based compression method developed by Radius for use on Apple Macintosh computers. Cinepak video is sent at 15 fps, with a 240-pixel high by 320-pixel wide resolution. circuit In telecommunications, pair of channels, which together provide bi-directional communications. A circuit includes the associated terminal equipment at the carrier's switching center. circuit switching The process of establishing a connection for the purpose of communication in which the full use of the circuit is guaranteed to the parties or devices that are exchanging information. After the communication has ended, the connection is released for use by others. CISSP (ISC) - grants the Certified Information Systems Security Practitioner designation to information systems security practitioners for passing a rigorous CISSP examination and subscribing to the (ISC). Code of Video Communication: the Whole Picture Ethics. Additional information is available at http://www.isc2.org/welcome.html CIVDL Collaboration for Interactive Visual Distance Learning. A collaborative effort by 10 leading US universities that uses dial-up videoconferencing technology for the delivery of engineering programs. clearchannel The characteristic of a digital transmission path in which the circuit is entire b
andwidth is available for information exchange. This differs from channels in which part of the channel is reserved for signaling, control or framing bits. Compression Labs, Incorporated, San Jose, California is one of the foremost codec manufacturers in the world. CLI was the developer of the first "low- bandwidth" codec in the US, VTS 1.5. This codec was one of the first two codecs (along with one from NEC from Japan) able to compress full-motion video to 1.5 Mbps transmission speeds. client A service-requesting program in a client/server computing environment that solicits support (service/resources) from a server, using a network to convey its request. The client provides the important resources required by a user to interface with a server. The term 'client' has, however, strayed from this strict definition to become a catchall phrase. A client today is often assumed to be a front-end application that offers user-friendly GUI tools for such actions as setting up conferences, adding additional participants, opening applications, copying files, and storing files. clock An oscillator. A PC's CPU clock regulates the execution of instructions. Clocks are also used to create timing reference signals in digital networks and systems for the purpose of synchronization. Central office. A CO can be one of many types of switching systems, either analog or digital, which connect subscriber lines to other lines and network trunks on a circuit-switched basis. The two most common in the US are AT&T's 5ESS and Nortel's DMS-100, both of which are digital. Glossary of Terms coaxial cable A cable with one central conductor surrounded by an insulator that is, in turn, surrounded by a cylindrical outer conductor and covered by an outer insulation sheath. The insulator next to the central conductor is typically polyethylene or air and the outer conductor is typically braided copper. Coaxial cables are used to carry very high-frequency currents with low attenuation; often in cable television networks. codec A sophisticated digital
codec A sophisticated digital signal-processing unit that takes an analog input and converts it to digital on the sending end. At the receiving end, another codec reverses the process by reconverting the digital signal back to analog. Codec is a contraction of code/decode (some experts in the video industry assert it also stands for compress/decompress). codec conversion The back-to-back transfer of an analog signal from one codec into another codec in order to convert from one proprietary coding scheme to another. The analog signal, instead of being displayed on a monitor, is delivered to the dissimilar codec, where it is re-digitized, compressed and passed to the receiving end. This is obviously a bi-directional process. Carriers offer conversion service. color That which is perceived as a result of differing qualities of the light reflected or emitted. Humans see color via the additive process in direct light (e.g., television, in which the primary colors are red, green, and blue) and the subtractive process in reflected light (e.g., books, in which the primary colors are yellow, magenta, cyan, and black). The three basic color components are hue, saturation, and brightness. color burst A few cycles (8 to 12) of sub-carrier frequency that serve as a color synch signal and communicate the proper hues to a video monitor or television. The color burst is part of an NTSC or PAL composite video signal. It provides a reference for the demodulation of the color information. The absence of color burst indicates black-and-white video or television.
mat and the MII format. Color difference signals are NOT component video signals-these are, strictly speaking, the pure R, G and B waveforms. colormonitor CRT that works on the principle of the additive primary colors of red, green and blue. The phosphors of these monitors are tinted with these hues SO that they glow in unique colors when excited with electrons beamed by an electron gun. The phosphor dots inside the visible face of the screen are organized in tightly grouped trios of red, green and blue; each trio is a pixel. colorshift The unwanted changing of colors caused when too few bits are used to express a color. color space The three properties of brightness, saturation and hue can be pictured as a three-dimensional color space. The center dividing line or brightness column is the axis where no color exists at all. Hues (colors) form circles around this axis. There is also a horizontal axis that describes the amount of saturation. Highly saturated colors are closest to the center and less saturated colors are arranged toward the outer edges. color subcarrier The NTSC color subcarrier conveys color information and has a frequency of 3.579545 MHz. Color saturation is conveyed via signal amplitude and hue (tint) is conveyed via signal phase. colortiming The synchronization of the burst phase of two or more video signals to ensure that no color shifts occur in the picture. combfilter An electrical filter that separates the chroma (color) and luma (brightness) components of a video signal into separate parts. It does this by passing some Glossary of Terms frequencies and rejecting others in between. Using a comb filter reduces artifacts but also causes some resolution loss in the picture. S-Video permits a video signal to bypass a comb filter, and thereby results in a better image. common carrier A telecommunications operating company that provides specific telephony services. compact disc CD. Information is stored on a CD's surface SO that when it is scanned the fluctuations in the surface create two states:
on and off. See CD-ROM. companding Like 'codec' or 'modem,' companding is a contraction, in this case, combining the words compressing and expanding. It refers to the reduction of the dynamic range of an audio or video signal in which the signals are sampled and transformed into non-linear codes. component video Transmission and recording of color television with luminance and chrominance (red, green and blue picture components) treated as separate signals. Component video is not a standard but rather a technique that yields greater signal controls and image quality. Hue and saturation (chrominance) are considered a single component, as is luminance (brightness), which is recorded at a higher frequency than chrominance; this makes it possible to exceed 400 lines of resolution. Component video is also known as Y/C. In component video, synchronization information may be added with the G signal; it can also be a separate signal. composite video A color television signal in which the chrominance signal is a sine wave that is modulated onto the luminance signal that acts as a subcarrier. This is used in NTSC and PAL systems. compression The process of reducing the information content of a signal SO that it occupies less space on a transmission channel or storage device and a fundamental concept of video communications An uncompressed NTSC signal requires about 90 Mbps of throughput, greatly exceeding the speed of all but the fastest and shortest of today's networks. Squeezing the video information Video Communication: the Whole Picture can be accomplished by reducing the quality (sending fewer frames in a second or displaying the information in a smaller window) or by eliminating redundancy. conditional frame A process of compression in which only the changes replenishment that are present in the current video frame, when compared to the past video frame, are transmitted. conferencing The ability to meet over distance in which meetings can include both visual and audible information. Typically videoconferencing syste
ms incorporate screens that can show the faces of distant-end participants, graphics, close-ups of documents or diagrams and other objects. connection A path that is established between two devices and which provides reliable stream delivery service. content In the context of video, the information object or objects that are packaged for playback by a viewer. continuous presence A technique used in video processing and transmission in which the sending device combines portions of more than one video image and transmits those separate images in a single data stream to a receiver or receivers. The receiver displays these multiple images on a single monitor where they are arranged side-by-side or stacked vertically. Continuous presence images can also be displayed on multiple monitors. contone Continuous tone. Used to describe the resolution of an image, particularly photographic-quality images. contrast The range of light-to-dark values of an image that are proportional to the voltage differences between the black and white levels of the signal. convergence The trend, now that media can be represented digitally, for historical distinctions between the boundaries of key industries to blur. Companies from consumer electronics, computer and telecommunications industries are forming alliances and raiding each other's markets. Convergence will be accelerated with the coming of the much-heralded U.S. information superhighway. Glossary of Terms Central processing unit: the chip in a microcomputer or printed circuit board in a mainframe or minicomputer in which calculations are performed. crossconnect The equipment used to terminate and manage com- munications circuits in a premises distribution system. Jumper wires or patch cords are used to connect station wiring to hardware ports of various types. CSMA/CD Carrier Sense Multiple Access with Collision Detection. A baseband LAN access method in which terminals "listen" to the channel to detect an idle period during which they can begin the transmission of their own message
CSU Channel Service Unit. A customer-provided device, a CSU provides an interface between the customer and the network. The CSU ensures that a digital signal enters a communications channel in a format that is properly shaped into square pulses and precisely timed. It also provides a physical and electrical interface between the data terminal equipment (DTE) and the line. CU-SeeMe A free videoconferencing program (under copyright of Cornell University and its collaborators) available to anyone with a Macintosh or Windows computer and a connection to the Internet. CU-SeeMe allows a user to set up an Internet-based videoconference with another site anywhere in the world. By using a reflector, multiple parties at different locations can participate in a CU-SeeMe conference, each from his or her own desktop computer. D-channel In an ISDN network, the D-channel is a signaling channel over which the carrier passes packet-switched information. The D channel can also support the transmission of low-speed data or telemetry sent by the subscriber. D1 Digital Tape Component Format. The CCIR 601-approved digital standard for making digital video recordings. It records each component separately and employs the YCbCr coding scheme. D2 Digital Tape Composite Format. A digital system that is considerably less costly than D1. It records a composite signal in an 8-bit digital format that is derived by sampling the video signal at a rate of four times the frequency of the subcarrier. D2-MAC One of two European formats for analog HDTV. DAC Digital-analog converters. DAI DMIF-Application Interface (DAI), an expression (a protocol message) of the functionality provided by DMIF. data The much-misused term that indicates any representation, such as characters or analog quantities, to which meaning is or can be attached or assigned. Data is the plural of datum, which means "given information" or "the b
asis for calculations." In general usage, however, data usually means characters (letters, numbers, and graphics) that can be stored or transmitted over various types of telecommunications networks. Until recently, voice and video signals were not considered data, but now that they are being converted to a digital format they are being referred to as data. data compression Reducing the size of a data file by removing unnecessary information, such as blanks and repeating or redundant characters or patterns. Data Over Cable Service Interface Specifications A standard interface for cable modems, the devices that manipulate signals between cable TV operators and customer computers or TVs. DOCSIS specifies modulation schemes and the protocol for exchanging bi-directional signals over cable. Now known as CableLabs Certified Cable Modems, DOCSIS 1.0 was ratified by the ITU in March 1998. dB Decibel. One-tenth of a bel and a logarithmic measure of electric energy. A decibel expresses the ratios between voltages, sound intensities, currents, the power (amplitude) of sound waves, and so on. DBS Direct broadcast satellite, a transmission scheme used for program delivery, most generally entertainment. There are several DBS providers; none of the systems that they use are, however, compatible. These systems provide downstream speeds of 400 Kbps to 30 Mbps or higher and upstream speeds of 28 Kbps or higher. DC Direct Current. DCE Data communications equipment. The network side of an equipment-to-network connection with DTE, the data terminal equipment that plugs into the DCE, which provides a means of network connection. DCT See Discrete Cosine Transform. DC coefficient An expression of a pixel block's average luminance, as used in DCT. The value for which the frequency is zero in both the horizontal and vertical directions. decode A process that converts an incoming bitstream, consisting of digitized images and sounds, into a viewable and audible state. decryption Decryption reverses an encryption process to return an encrypted transmission to its original form. Decryption applies a special decoding algorithm (key) to the encrypted exchange; any party without access to the proper key required for decryption cannot receive the transmission in an intelligible form.
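A toy sketch (mine; far weaker than real ciphers such as DES or AES) of key-based decryption: XOR with a repeating key is its own inverse, so applying the same key twice restores the original bytes:

    # Minimal illustration of key-based encryption and decryption.
    def xor_bytes(data: bytes, key: bytes) -> bytes:
        """XOR each data byte with the repeating key; its own inverse."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    plaintext = b"videoconference"
    key = b"secret"
    ciphertext = xor_bytes(plaintext, key)   # encrypt
    print(xor_bytes(ciphertext, key))        # decrypt -> b'videoconference'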
dedicated access A leased, private connection between a customer's equipment and a telephone company location, most often that of an IXC. dedicated leased line A transmission circuit leased by one customer for exclusive use around the clock. See also private line. delay The time required for a signal to pass between a sender and a receiver; alternatively, the time required for a signal to pass through a device or conductor. demultiplex The process of separating two or more signals previously combined for transmission over a shared channel. Multiplexing merges multiple channels onto one channel prior to transmission; demultiplexing separates them again at an appropriate network node. Often shortened to DeMUX. depth of field The range of distances from the camera in which objects around a focal point (the distance from the surface of a lens or mirror to its subject) will be in focus. Use of a smaller lens aperture increases depth of field. DES Data encryption standard, developed by the National Bureau of Standards and specified in FIPS Publication 46, published in January 1977. Generally replaced by AES. desktop A term used to refer to a desktop computer or workstation used by an individual user. Dense wavelength division multiplexing Dense WDM transmits up to 32 OC-48 (2.5 Gbps) signals over a single fiber, and offers the potential to transmit trillions of bits per second over one fiber. dial-up The ability to arrange a switched connection, whether analog or digital, by entering a terminating address such as a telephone number, in order that the call can be routed by the network. Differs from point-to-point services that can be used only to communicate between two locations. digital Information that is represented using codes that consist of zeros
and ones (binary coding). Binary digits (bits) can be either zeros or ones and are typically grouped into "words" of various lengths; 8-bit words are called bytes. Digital Access Cross Connection System DACS. A switch/multiplexer that permits DS0 cross-connection from one T-1 transmission facility to another. Digital Signal Hierarchy A TDM multiplexed hierarchy used in telephone networks. DS0, the lowest level of the hierarchy, is a single 64 Kbps channel. DS-1 (1.544 Mbps) is 24 DS0s. DS-2 (6.312 Mbps) is four DS-1 signals multiplexed together; DS-3 (44.736 Mbps) is seven DS-2 signals multiplexed together. At the top of the hierarchy is DS-4, which is six DS-3 signals and which requires a transmission system capable of handling a 274.176 Mbps signal. Digital Signal Processor See DSP. digital transmission The conveyance over a network of digital signals by means of a channel or channels that may assume in time any one of a defined set of discrete values or states. digitizing The process of sampling an analog signal so that it can be represented by a series of bits. digitizing tablets Graphics systems used in conjunction with videoconferencing applications. Using a special stylus or electronic "pen," a meeting participant can write on the tablet and the message can be viewed by the distant end and, if desirable, stored on a PC. Photos and text can also be annotated electronically. These devices can be unsettling to use, however, because no image appears on the tablet, making it difficult to orient the letters. direct broadcast satellite The use of satellites to broadcast directly to homes or businesses. Subscribers are obliged to purchase and install a satellite dish. DBS service originated in Japan, which is composed of many islands and which has a harsh geography that includes mountains, rivers, valleys and ridges that made it very difficult to plan and execute a terrestrial broadcasting CATV system. directional couplers In cable systems, multiple feeder cables are coupled with these devices to matc
h the impedance of cables. Discrete Cosine Transform DCT. A pixel-block based process of formatting video data in which it is converted from a three-dimensional form to a two-dimensional form suitable for further compression. In the process, the average luminance of each block or tile is evaluated using the DC coefficient. Used in the ITU-T's H.261 and H.263 videoconferencing compression standards and the ISO/ITU-T's MPEG and JPEG image compression recommendations.
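A compact sketch (mine; production codecs use fast, fixed-point transforms) of the 8x8 two-dimensional DCT that underlies the standards named above; the [0][0] output is the DC coefficient defined in the DC coefficient entry:

    # 8x8 two-dimensional DCT-II of a block of pixel values.
    import math

    N = 8

    def dct_2d(block):
        def c(k):
            return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out = [[0.0] * N for _ in range(N)]
        for u in range(N):
            for v in range(N):
                s = sum(block[x][y]
                        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                        for x in range(N) for y in range(N))
                out[u][v] = c(u) * c(v) * s
        return out

    flat = [[128] * N for _ in range(N)]   # a uniform gray block
    coeffs = dct_2d(flat)
    print(round(coeffs[0][0]))             # DC term = 1024; all other terms ~0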
Discrete Wavelet Transform DWT. Based on the same principles as DCT, this method segregates the spectrum into waves of different lengths and then processes all frequencies to retain the image's sharp lines, which are partially lost in the DCT process. distance learning The incorporation of video and audio technologies into the educational process so that students can attend classes and training sessions in a location distant from that where the course is being presented. dithering In color mapping, dithering is a method of representing a hue, not available in the color map, by intermingling the pixels of two colors that are available and letting the eye-brain team average them together into a single perceived median color. Divestiture In early 1982, AT&T signed a Consent Decree and thereby agreed to spin off the 22 local Bell Operating Companies (BOCs). These were grouped into seven Regional Bell Holding Companies that managed their business, coordinated their efforts, and provided strategic direction. Restrictions were placed on both the BOCs and AT&T. The US Department of Justice stripped AT&T of the Bell name (except in their Bell Labs operation), the authority to carry local traffic, and the option to discriminate in favor of their former holdings. BOCs were awarded Yellow Pages publishing and allowed to supply local franchised-monopoly services, but were not allowed to provide information services or to manufacture equipment. They could carry calls only within local access and transport areas (LATAs). This agreement changed the composition of telephone service in the US when it became effective on January 1, 1984. document camera A specialized camera that is mounted on a long adjustable neck for taking pictures of still images (pictures, graphics, pages of text and objects) for manipulation in, for example, a videoconference. downlink The communications path from a satellite to an earth station. DMIF Delivery Multimedia Integration Framework is a session protocol for the management of multimedia streaming over generic delivery technologies. It is similar to FTP except that, where FTP returns data, DMIF returns pointers toward (streamed) data. DOCSIS See Data Over Cable Service Interface Specifications. DPCM Differential Pulse Code Modulation is a compression technique that sends only the difference between what was (a past frame of video information) and what is (a present frame). DPCM requires identical codecs on each end, transmitting and receiving, to predict, from a past frame of pixels, what the present frame will be. The transmitting codec, after computing its prediction, compares the actual frame to its speculation and sends information on the difference. In turn, the receiving codec interprets these differences (called errors) and makes adjustments to the present video frame. driver A driver is software that provides instructions for reformatting or interpreting software commands for transfer to and from peripheral devices and the central processing unit (CPU). Many printed circuit boards require a software driver in order for the other PC components to work correctly. In other words, the driver is a software module that drives the data out of a specific hardware port. Video drivers may be required for desktop video. DS-0 Digital service, level zero or DS-zero. A single 64 Kbps channel in a multiplexing scheme that is part of the North American and European digital hierarchy and which results from the process of digitizing an analog voice channel through the application of time division multiplexing, pulse code modulation and North American or European companding.
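As a quick illustration (my arithmetic, not the book's) of how DS-0 channels stack up into the carriers described under Digital Signal Hierarchy above, the following sketch computes payload versus line rate; the multiplexed rates exceed the sums of their tributaries because framing overhead is added at each level:

    # Rate arithmetic for the North American digital hierarchy.
    DS0 = 64_000                      # bits per second, one voice channel
    levels = {"DS-1": (24, 1_544_000), "DS-3": (672, 44_736_000)}

    for name, (channels, line_rate) in levels.items():
        payload = channels * DS0
        print(f"{name}: {channels} DS-0s = {payload:,} bps payload, "
              f"{line_rate:,} bps line rate, "
              f"{line_rate - payload:,} bps framing overhead")
    # DS-1: 24 x 64 Kbps = 1,536,000 bps payload vs 1,544,000 bps line rate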
DS-1 (T-1). A multiplexing scheme that is part of the North American digital hierarchy and which specifies how to subdivide 1.544 Mbps of bandwidth into twenty-four 64 Kbps channels using time division multiplexing, pulse code modulation and North American companding. Europe has a similar multiplexing hierarchy that produces a 2.048 Mbps signal (E-1/CEPT). DS-3 Also called T-3. A multiplexing scheme that is part of the North American digital hierarchy and which specifies how to subdivide 45 Mbps of bandwidth into 28 T-1 (1.544 Mbps) carrier systems, a total of 672 channels. The techniques for accomplishing this include time division multiplexing, pulse code modulation and North American companding. DSP Digital signal processor. A specialized computer chip designed to perform speedy and complex operations on digitized waveforms. Useful in processing sound and video signals. DSU Data service unit. A device used to transmit digital data on digital transmission facilities. It typically interfaces to data terminal equipment via an RS-232-C or other terminal interface, connecting this device to a DSX-1 (digital system cross-connect) interface. DSX Digital Signal Cross-Connect, a panel that connects digital circuits to allow cross-connections via a patch and cord system. DTE Data terminal equipment. The equipment side of an equipment-to-network connection with DCE, the data communications equipment connecting the DTE to the network for the purposes of transmission. dual 56 Combination of two 56 Kbps lines (usually switched 56) to yield a 112 Kbps channel used for low-bandwidth videoconferencing. Dual 56 allows for direct dialing of a videoconference call and can be obtained from IXCs or LECs. DVC Digital Video Cassette. A DVC is a storage medium based on a quarter-inch-wide tape made up of metal particles. The DVC source is sampled at a rate similar to that of CCIR-601 but additional chrominance subsampling (4:1:1 in the NTSC 30 kHz mode) provides better resolutions. When the NTSC 30-fps signal is encoded, the image frame resolution is 720 pixels by 480 lines with 8 bits used for each pixel.
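Those numbers imply a raw data rate that shows why compression matters; a back-of-the-envelope sketch (my own arithmetic, with the 4:1:1 subsampling modeled as 1.5 samples per pixel: full-resolution Y plus quarter-resolution Cb and Cr):

    # Uncompressed data-rate arithmetic for a 720 x 480, 30 fps signal.
    width, height, fps, bits = 720, 480, 30, 8
    samples_per_pixel = 1 + 0.25 + 0.25          # Y + Cb + Cr after 4:1:1
    bps = width * height * samples_per_pixel * bits * fps
    print(f"{bps / 1e6:.1f} Mbps uncompressed")  # ~124.4 Mbps before compression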
DVD (Digital Versatile Disk) is a type of CD. It has a storage capacity of 17 gigabytes, which is much higher than CD-ROM's 600 Mbytes, and a higher data delivery rate than that of CD-ROM. DVD uses MPEG and Dolby compression algorithms to achieve its storage capacity. DVE Digital Video Everywhere was developed by InSoft Incorporated as a core software architecture for open systems platforms. It features a hardware-independent API for running multimedia collaborative and conferencing applications across LANs, WANs, and TCP/IP-based networks. DVI Digital Video Interactive. A proprietary compression and transmission scheme from Intel. Compression is asymmetric, requiring relatively greater amounts of processing power at the encoder than at the decoder. DVI played an important role in the PC multimedia market. DWDM See Dense Wavelength Division Multiplexing. earth station An antenna transmitter or receiver that accepts a signal from a satellite and may, in turn, be capable of transmitting a signal to a satellite. EBIT A three-bit integer that indicates the number of bits that should be ignored in the final data octet of an RTP H.261 packet. EBU European Broadcasting Union, an organization that developed technical recommendations for the PAL television system. echo The reflection of sound waves that results from contact with non-sound-absorbing surfaces such as windows or walls. Reflected signals sound like a distorted and attenuated version of the original sound which, in videoconferencing, would primarily be speech. Echoes in telephone and videoconferencing applications are caused by impedance mismatches, points where energy levels are not equal. In a four-wire to two-wire connection, the voice signal moving along the four-wire section has more energy than the two-wire section can absorb; consequently, the excess energy bounces back along the four-wire path. When the return delay approaches 500 ms, speakers will hear their own words transmitted back at them.
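The echo cancellation and echomodeling entries that follow describe the standard remedy. Here is a toy sketch (mine, with an assumed fixed delay and gain for the echo path; real cancellers adapt their model continuously): model the echo as a delayed, attenuated copy of the far-end signal and subtract the estimate from the return path:

    # Toy echo canceller with a known echo path.
    delay, gain = 3, 0.5

    far_end = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0]
    echo = [0.0] * len(far_end)
    for i, s in enumerate(far_end):
        if i + delay < len(echo):
            echo[i + delay] = gain * s        # what bounces back

    near_end = echo                           # return path carries only echo here
    estimate = [gain * far_end[i - delay] if i >= delay else 0.0
                for i in range(len(near_end))]
    cleaned = [n - e for n, e in zip(near_end, estimate)]
    print(cleaned)                            # all zeros: the echo is removed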
echo cancellation A process that uses a mathematical model to predict an echo and remove that portion of the signal from an audio waveform, eliminating acoustical echo. echomodeling A mathematical process whereby an echo is created from an audio waveform and subsequently subtracted from that form. The process involves sampling the acoustical properties of a room, calculating approximately what form an echo might take, and then removing that information from the signal. echosuppression The insertion of mild attenuation in audio transmit and/or receive signal paths. Used to reduce annoying echoes in the audio portion of a videoconference, an echo suppressor is a voice-activated "on/off" switch that is connected to the four-wire side of a circuit. It silences all sound when it is on by temporarily deadening the communication link in one direction. Unfortunately, echo suppression clips the remote end's new speech as it stops the echo. EIA Electronic Industries Association, a standards-setting group in the US that defines standards for interfaces including jacks and other network connections. EISA Extended Industry Standard Architecture. The independent computer industry's alternative to IBM's Micro-Channel data bus architecture that IBM used in its PS/2 line of desktop computers. EISA is a 32-bit bus or channel that expands on the original Industry Standard Architecture (ISA) 16-bit channel. EISA capabilities are particularly important when a machine is being used for processor-intensive applications. electromagnetic spectrum The range of wavelengths that includes light, physical movement of air (sound) or water, radio waves, x-rays, etc. These wavelengths propagate throughout the entire universe. electromagnetic waves Oscillations of electric and magnetic forces that produce different wavelengths and which include light, radio, gamma, ultraviolet, and other forms of energy. electron beam A stream of electrons focused on a phosphorescent s
creen and fired from a "gun" to create images. Deflecting the beam with magnetic coils or plates so that it hits a precise location on the screen focuses it. electrons Negatively charged particles that, along with positively charged protons, allow atoms to achieve a neutral charge. encoding The process through which media content is transformed for the purposes of digital transmission. encryption The conversion, through the application of an algorithm, of an original signal into a coded signal in order to secure it from unauthorized access. Typically the process involves the use of "keys" that can unlock the code. The most common encryption standard in the US is the Bureau of Standards' DES (data encryption standard), which enciphers and deciphers data using a 64-bit key specified in Federal Information Processing Standard Publication 46, published in January 1977. enhanced standard Standards enhancement is a common practice of videoconferencing codec manufacturers. They begin with an algorithm that is compliant with a formal standard (typically the ITU-T's H.320 algorithm). However, they add capabilities that are transparent to dissimilar products that run the standard, but which improve implementation when operating exclusively in their proprietary environment. entrance facilities In a premises distribution system, entrance facilities are the point of interconnection between a building wiring system and a telecommunications network outside the building. entropy A measure of the information contained in any exchange. Entropy is the goal of nearly every compression technique. If information is duplicated, the excess amount (that portion that is over and above what is necessary to convey) is redundant. The goal is to remove the redundant portion of whatever is being conveyed (for our purposes, motion, video, audio, text, and still images). The remainder is entropy: information.
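A small sketch (mine, using Shannon's formula, which the entropy entry paraphrases) that makes the redundancy-versus-entropy distinction concrete:

    # Shannon entropy of a message: the irreducible information content,
    # in bits per symbol, below which lossless compression cannot go.
    from collections import Counter
    from math import log2

    def entropy_bits_per_symbol(message: str) -> float:
        counts = Counter(message)
        n = len(message)
        return -sum(c / n * log2(c / n) for c in counts.values())

    print(entropy_bits_per_symbol("aaaaaaab"))  # ~0.54: highly redundant
    print(entropy_bits_per_symbol("abcdefgh"))  # 3.0: no redundancy to remove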
envelope The variations of the peaks of the carrier that contain the video signal information. envelope delay The char
results in the disappearance of a picture to black or, in the case of fade-in, the gradual introduction of light to an all-black image. fast packet multiplexing The combination of TDM, packet switching and computer intelligence to allow multiple digital signals to share a high-speed network. Fast packet multiplexing assumes a clean network and, therefore, does not buffer information, but rather moves it along, and assumes it will arrive with little or no degradation. The two most common forms of fast packet multiplexing are cell relay and frame relay. FCC Federal Communications Commission, a US regulatory body established by Congress in 1934. FDDI Fiber Distributed Data Interface. An ANSI standard for a 100 Mbps token-ring-based fiber-optic LAN. It uses a counter-rotating token ring topology and is compatible with the physical layer of the OSI model. FDDI II Emerging ANSI standard that incorporates both circuit and packet switching over fiber optics at 100 Mbps. Not compatible with the original FDDI standard. FDM Frequency Division Multiplexing. A method of transmitting multiple analog signals on a single carrier by assigning them to separate and unique frequency bands and then transmitting the entire aggregate of all frequency bands as a composite. fiber optics cable A core of fine glass or plastic fibers that has extremely high purity and is surrounded by a cladding of a slightly lower refractive index. Light or infrared pulses that carry coded information signals are injected at one end. They pass through the core using a method of internal reflection or refraction. Attenuation is low; pulses travel as far as 3,500 feet or more before needing regeneration. field A normal television image is composed of interlaced fields; each field contains one-half of a video frame's information and carries half the picture lines. In the NTSC video standard, 60 fields/30 frames are sent in a second; in the European PAL and SECAM systems, 50 fields/25 frames are sent in a second. field interlacing In television, the process of creating a complete video frame by dividing the picture into two halves, with one containing the odd lines and the other containing the even lines. This is done to eliminate flicker.
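A minimal sketch (my illustration) of the odd/even line split that the field and field interlacing entries describe:

    # Splitting a frame into its two interlaced fields: odd-numbered
    # scan lines in one field, even-numbered lines in the other.
    frame = [f"scan line {n}" for n in range(1, 9)]   # a tiny 8-line "frame"

    odd_field = frame[0::2]    # lines 1, 3, 5, 7
    even_field = frame[1::2]   # lines 2, 4, 6, 8
    print(odd_field)
    print(even_field)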
field of view The focal length combined with the size of the image area light is focused on. Field of view is measured in degrees and is not dependent on the distance from a subject. field sequential system The first broadcast color television system, approved by the FCC in 1950. It was later replaced by the NTSC standard for color broadcasting. file A set of related records treated by a computer as a complete unit. Moving information from one computer memory storage facility to another is called a file transfer. FireWire A high-speed serial bus that Apple Computer and Texas Instruments developed that allows for the connection of up to 63 devices. Also known as the IEEE 1394 standard, it provides 100, 200, 400, 800, 1600, and 3200 Mbps transfer rates. FireWire supports isochronous data transfer, and thereby guarantees bandwidth for multimedia operations. It supports hot swapping and multiple speeds on the same bus. FireWire is used increasingly for attaching video devices to the computer. filter An electrical circuit or device that passes a selected range of energy while rejecting all others that are not in the proper frequency range. firmware Programs or data that are stored on a semiconductor memory circuit, often a plug-in board. Firmware-stored memory is non-volatile. FITL Fiber in the loop. A Bellcore technical advisory, number 909, Issue 2, which addresses the ability of telephone companies to provide video services and delivery using optical fiber cable. FlexMux The synchronized delivery of streaming data may require the use of different QoS schemes as it traverses multiple public and private networks. MPEG defines the delivery-layer FlexMux multiplexing tool to allow grouping of Elementary Streams (ES) with low multiplexing overhead. It may be used to group ES with similar QoS requirements, to reduce the number of network connections, or to reduce end-to-end delay. Use of the FlexMux tool is optional, and the FlexMux layer may be empty if the underlying TransMux instance provides adequate functionality.
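A toy sketch (mine; actual FlexMux packets carry index and length headers defined by MPEG-4) of interleaving several elementary streams onto one channel, tagging each unit with its stream so the receiver can demultiplex by tag:

    # Round-robin interleaving of tagged elementary-stream units.
    from itertools import zip_longest

    streams = {0: ["v0", "v1", "v2"], 1: ["a0", "a1"], 2: ["s0"]}

    muxed = [(sid, unit)
             for group in zip_longest(*streams.values())
             for sid, unit in zip(streams, group) if unit is not None]
    print(muxed)  # [(0, 'v0'), (1, 'a0'), (2, 's0'), (0, 'v1'), (1, 'a1'), (0, 'v2')]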
flicker An unwanted video phenomenon that results when the screen refresh rate is too slow or when the two interlaced fields that make up a video frame are not identically matched. Flicker is also known as jitter and sometimes as jutter. In the early days of television, interlacing two video fields to create a single video frame was used to combat flicker, and it eliminated flicker fairly well at rates over 40 fields per second. Flicker is not a problem with non-interlaced video display formats (those used for computer monitors). fluorescent lights Used for illumination in many corporate and public settings. These lights produce spectral frequencies of a less balanced nature than incandescent lights, cause problems in a videoconference or video production process, and may cause 60 Hz flicker. footprint The primary service area covered by a satellite; the highest field intensity is normally in the center of the footprint, with intensity reducing toward the outer edges. format In television, the specific arrangement of signals that compose a video signal. There are many different ways of formatting a video signal: NTSC, PAL, SECAM, component video, composite video, CD-I, QuickTime and so on. forward motion vector Used in motion compensation. A motion vector is a physical quantity with both direction and magnitude; a course of motion in which pixels are the objects that are moving. A forward motion vector is derived from a video reference frame sent previously. forward prediction A technique used in video compression. Specifically, compression techniques based on motion compensation in which a compressed frame of video is reconstructed by working with the differences between successive video frames.
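A miniature sketch (my illustration) of the motion-compensated prediction the two preceding entries describe: the decoder rebuilds a block by copying from the reference frame at a location displaced by the motion vector, so only the vector and a small residual need be coded:

    # Motion compensation in miniature.
    def predict_block(reference, top, left, size, motion_vector):
        dy, dx = motion_vector
        return [row[left + dx : left + dx + size]
                for row in reference[top + dy : top + dy + size]]

    reference = [[r * 8 + c for c in range(8)] for r in range(8)]
    block = predict_block(reference, top=4, left=4, size=2, motion_vector=(-1, -2))
    print(block)   # [[26, 27], [34, 35]] -- pixels fetched from the shifted location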
fps Frames per second. The number of frames contained in a single second of a moving series of video images. 30 fps is considered to be 'full-motion' video in Japan and the US, while 25 fps is considered to be full-motion video in Europe. fractal compression An asymmetrical compression technique that shrinks an image into extremely small resolution-independent files and stores it as a mathematical equation rather than as pixels. The process starts with the identification of patterns within an image, and results in a collection of shapes that resemble each other but that have different sizes and locations. Each shape-pattern is summarized and reproduced by a formula that starts with the largest shape, repeatedly displacing and shrinking it. Patterns are stored as equations and the image is reconstructed by iterating the mathematical model. Fractal compression can store as many as 60,000 images on one CD-ROM and competes with techniques such as JPEG, which uses DCT to drop redundant information. One disadvantage of fractal compression is that it requires considerable processing power. JPEG is much faster, but fractal compression is more efficient; it squeezes information into smaller files. Applications using fractal compression center on desktop publishing and presentation creation. fractal geometry The underlying mathematics behind fractal image compression, discovered by two Georgia Tech mathematicians, Michael Barnsley and Alan Sloan. Fractal Image Format FIF. A compression technique that uses on-board ASIC chips to look for patterns. Exact matches are rare; the basis of the process is to find close matches using a function known as an affine map.
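A small sketch (mine) of reconstruction by iterating affine maps, the mechanism the fractal entries above describe; iterating three "displace and shrink" formulas regenerates the Sierpinski-triangle attractor from no stored pixels at all:

    # Iterated affine maps: a complex image regenerated from 3 formulas.
    import random

    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.3, 0.3
    points = []
    for _ in range(5000):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2   # one affine map: shrink toward a corner
        points.append((x, y))
    print(len(points), "points of the attractor generated from 3 formulas")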
fractional T-1 FT-1 or fractional T-1 refers to any data transmission rate between 56 Kbps and 1.544 Mbps. It is typically provided by a carrier in lieu of a full T-1 connection and is a point-to-point arrangement. A specialized multiplexer is used by the customer to channelize the carrier's signals. frame An individual television, film or video image. There are either 25 or 30 frames per second sent with television; 24 are sent in moving picture films. A variable number, typically between 8 and 30, are sent in videoconferencing systems, depending on the transmission bandwidth offered. frame buffer Memory used for holding the data for a single and complete frame (or screen) of video. Some systems have enough memory to store multiple screens of data. Frame buffers are evaluated in terms of how many bits are used to represent a pixel; the more bits that are used, the "deeper" the color. The greater the number of buffers used to store captured video frames, the higher the possible frame rate. frame dropping The process of dropping video frames to accommodate the transmission speed available. frame grab The ability to capture a video frame and temporarily store it for later manipulation by a graphics input device. frame grabber A PC board used to capture and digitize a single frame of NTSC video and store it on a hard disk. frame rate The number of frames that are sent in a second; the equivalent of fps. NTSC video has a frame rate of 30 fps. PAL and SECAM send frames at a rate of 25 per second, and motion picture (film) images are delivered at a frame rate of 24 per second. frame store A system capable of storing complete frames of video information in digital form. This system is used for television standards conversion, computer applications that incorporate graphics, video walls and various video production and editing systems. freeze frame A single frame from a video segment displayed motionless on a screen. Also, a method of transmitting video images in which less than one or two frames are sent in any given second. Sometimes known as slow-scan, still video or captured frame video. When freeze frame video is viewed, the viewer sees successive images refreshing a scene, but they lack a sense of continuous motion. frequency The number of times that a periodic function or oscillation repeats itself in a specified period of time, usually one second. The unit of measurement of frequency is typ
ically hertz (Hz), which is used to measure cycles or oscillations per second. frequency interleaving The process of putting hue and color saturation information into the vacant frequency spectrum via a process of offsetting the chrominance spectrum exactly so that its harmonics fall precisely between the harmonics of the luminance signal. frequency modulation FM. A method of passing information by altering the frequency of the carrier signal. front porch With reference to a composite video signal, the front porch is the portion of the signal that occurs between the end of the active video on each horizontal scan line and the beginning of the horizontal synch pulse. full-motion video Generally refers to broadcast-quality video transmissions in which the frame rate is 30 per second in North America and Japan and 25 per second in Europe. - G - G Green, as in RGB. The G signal sometimes includes synchronization information. G.711 CCITT Recommendation entitled "Pulse Code Modulation (PCM) of Voice Frequencies." G.711 defines how a 3.1 kHz audio signal is encoded at 64 Kbps using Pulse Code Modulation (PCM) and either mu-law (US and Japan) or A-law (Europe) companding. G.721 CCITT Recommendation that defines how a 3.1 kHz audio signal is encoded at 32 Kbps using Adaptive Differential Pulse Code Modulation (ADPCM). G.722 CCITT Recommendation that defines how a 7 kHz audio signal is encoded at a data rate of 64 Kbps. G.723 ITU-T Recommendation entitled "Dual Rate Speech Coder for Multimedia Communications Transmitting at 5.3 and 6.3 Kbps." G.723 is part of the H.323 and H.324 families. G.728 ITU-T Recommendation for audio encoding using Low-Delay Code Excited Linear Prediction (LD-CELP). The bandwidth of the analog audio signal is 3.4 kHz, whereas after coding and compression the digitized signal requires a bandwidth of 16 Kbps. G.729 Coding of speech at 8 Kbps using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP). Part of the ITU-T's H.323 standard for videoconferencing over non-quality-of-service-guaranteed LANs.
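A sketch (mine) of the mu-law companding curve that G.711 applies before 8-bit quantization; the continuous formula with mu = 255 is shown, where the standard itself uses a piecewise-linear approximation. Quiet samples receive proportionally more code levels than loud ones, which is the companding described at the start of this glossary:

    # The mu-law companding curve (continuous form).
    from math import log, copysign

    MU = 255.0

    def mu_law_compress(x: float) -> float:
        """Map a sample in [-1, 1] to a companded value in [-1, 1]."""
        return copysign(log(1 + MU * abs(x)) / log(1 + MU), x)

    for sample in (0.01, 0.1, 1.0):
        print(sample, "->", round(mu_law_compress(sample), 3))
    # 0.01 -> 0.228, 0.1 -> 0.591, 1.0 -> 1.0  (quiet signals boosted)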
gain An increase in the strength of an electrical signal; measured in decibels. gamma A display characteristic of CRTs defined by the formula light = voltage^gamma; that is, the displayed light intensity is the applied signal voltage raised to the power gamma. Gamma values vary among CRTs, with most ranging between 2.25 and 2.45.
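A short sketch (mine) of the relationship in the gamma entry, and of the inverse "gamma correction" a camera applies so that mid-grays survive the CRT's curve; the value 2.4 is simply one of the typical values from the range quoted above:

    # CRT transfer function and its inverse correction.
    gamma = 2.4

    def display_light(voltage: float) -> float:
        return voltage ** gamma

    def camera_correct(light: float) -> float:
        return light ** (1 / gamma)

    v = camera_correct(0.5)           # pre-distort a mid-gray sample
    print(round(display_light(v), 3)) # 0.5 -- the CRT's curve is undone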
system information that a switch can use to assess network device capabilities, and are hot-swappable. Group of Blocks. The ITU-T's H.261 Recommendation process divides a video frame into a group of blocks (GOB). At CIF resolutions there are 12 such blocks and at QCIF there are three. A GOB is made up of 12 macroblocks (MB) that contain luminance and chrominance information for 8448 pixels. A GOB relates to 176 pixels by 48 lines of Y and the spatially-corresponding 88 pixels by 24 lines of CB and CR. Each GOB is divided in 33 macroblocks, in which the macroblocks are arranged in three rows that each contain 11 macroblocks. A codec manufacturer. GEC and Plessy combined their efforts to create an early video codec that depended on BT's compression algorithm. GPT codecs are prevalent in Europe. Grand Alliance The merger of four competitive HDTV systems, previously proposed to the FCC as individual standards, into a collaborative venture that is backed by all the supporters of the original four separate Glossary of Terms systems. The companies that comprise the Grand Alliance include AT&T, the David Sarnoff Research Center, General Instrument, MIT, North American Philips, Thomson Consumer Electronics, and Zenith Electronics. The new HDTV standard was approved in late 1996 after a compromise between the computer and broadcasting industries was reached. graphical user interface (GUI). A "point and click" computing environment that depends on a pointing device such as a mouse to evoke commands and move around a screen as opposed to a standard computer keyboard. graphics Artwork, captions, lettering, and photos used in programs, or the presentation of data of this nature, in a video communications system. graphics accelerator A video adapter with a special processor that can enhance performance levels during graphical transformations. A CPU can often become bogged down during an activity of this type. The accelerator allows the CPU to execute other commands while it takes care of the compute-intensive graphics processi
ng. graphics coprocessor A programmable chip that speeds video performance by carrying out graphics processing independently of the computer's CPU. Among the coprocessor's common abilities are drawing graphics primitives and converting vectors to bitmaps. gray scale A range of luminance levels with incremental brightness steps from black to gray to white. The steps generally conform to a logarithmic relationship. groupware A term for software that runs on a LAN and allows coworkers to work collaboratively and concurrently. Groupware is now being enhanced with video capabilities and many of the new desktop conferencing products offer capabilities commonly associated with groupware. See Global System for Mobile Communications guardband Unused radio frequency spectrum between television channels. Video Communication: the Whole Picture See Graphical user interface. H.221 The framing portion of the ITU-T's H. .320 Recommendation that is formally known as "Frame Structure for a 64 to 1920 Kbps Channel in Audiovisual Teleservices." The Recommendation specifies synchronous operation in which the coder and decoder handshake and agree upon timing. Synchronization is arranged for individual B channels or bonded H0 connections. ITU-T Recommendation, ratified in July of 1995, that H.222 specifies "Generic coding of moving pictures and associated audio information. H.222.0 is entitled, "MPEG-2 Program and Transport Stream" and H.222.1 is entitled, "MPEG-2 streams over ATM." H.223 Part of the ITU-T's H.324 standard that specifies a control/multiplexing protocol, it is formally called "Multiplexing protocol for low bitrate multimedia communication." Annexes, passed in February 1998, deals with video packetization with H.263, and with multimedia mobile communications over error-prone channels. H.225 Part of the ITU-T's H.323 Recommendation. H.225 establishes specific messages for call control such as signaling, registration and admissions as well as the packetizing and synchronizing of media streams. Annex I addresses call signal
ing protocols and media stream packetization for packet-based multimedia communications systems. H.226 Part of the ITU-T's H.32x Recommendation, H.226 establishes channel aggregation protocols for multilink operation on circuit-switched networks. H.230 A multiplexing Recommendation that is part of the ITU-T family of video interoperability Recommendations. Formally known as "Frame- synchronous Control and Indication Signals for Audiovisual Systems," the Recommendation specifies how individual frames of audiovisual information are Glossary of Terms to be multiplexed onto a digital channel. H.231 ITU Recommendation, formally known as "Multipoint Control Unit for Audiovisual Systems Using Digital Channels up to 2 Mbps," H.231 was added to the ITU-T's H.320 family of Recommendations in March 1993. It specifies the multipoint control unit used to bridge three or more H.320-compliant codecs together in a multipoint conference. H.235 Formerly known as H.SECURE, H.235 was ratified in February 1998 to establish security and encryption protocols for H.323 and other H.245-based multimedia terminals. H.235 is not itself a method of encryption and security, but establishes a methodology for leveraging such protocol families as IPSec (that, in turn, leverages security association and key distribution protocols such as ISAKMP and Oakley). H.242 Part of the ITU-T's H.320 family of video interoperability Recommendations Formally known as the "System for Establishing Communication Between Audiovisual Terminals Using Digital Channels up to 2 Mbps," H.242 specifies the protocol for establishing an audio session and taking it down after the communication has terminated. H.243 ITU-T "System for Establishing Communication Between Three or More Audiovisual Terminals Using Digital Channels up to 2 Mbps." H.245 Part of the ITU's H.324 protocol that defines control of communications between multimedia terminals. Its formal Recommendation name is "Control protocol for multimedia communication." H.246 ITU recommendation to establish a method
for interworking of H-Series multimedia terminals with H-Series multimedia terminals and voice/voiceband terminals on the PSTN (more accurately, GSTN, or General Switched Telephone Network) and ISDN. H.247 ITU recommendation to establish a method for multipoint extension for broadband audiovisual Video Communication: the Whole Picture communication systems and terminals. H.261 The ITU-T's Recommendation that allows dissimilar video codecs to interpret how a signal has been encoded and compressed, and to decode and decompress that signal. The standard, formally known as "Video Codec for Audiovisual Services at Px64 Kbps," it also identifies two picture formats: the Common Intermediate Format (CIF) and the Quarter Common Intermediate Format (QCIF). These two formats are compatible with all three television standards: NTSC, PAL and SECAM. H.262 ITU recommendation for using the MPEG-2 compression algorithm for compressing a videoconferencing transmission. H.263 H.263 is the ITU-T's "Video Coding for Low Bit Rate Communication." It refers to the compression techniques used in the H.324 Recommendation and in other ITU-T recommendations, too. H.263 is similar to H.261 but differs from it in several ways. Its advance negotiable options enable implementations of the Recommendation that employ them to achieve approximately the same resolution quality as H.261 at half the data rate. H.263 uses half-pixel increments for motion compensation (optional in both standards) while H.261 uses full-pixel precision and specifies a loop-filter. In H.263, portions of the data stream hierarchy have been rendered optional. This allows the codec to be configured for enhanced error recovery or, alternatively, for very low data rate transmission. There are, in H.263, four options designed to improve performance. They are advance prediction, forward and backward frame prediction, syntax-based arithmetic coding and Unrestricted Motion Vectors. H.263 supports not only H.261's optional CIF and mandatory QCIF resolutions but also SQCIF (128x96 p
ixels), 4CIF (704x576 pixels) and 16CIF (1408x1152 pixels). H.310 H.310 is an ITU-T draft standard for broadcast (HDTV) quality video conferencing. It describes how MPEG-2 video can be transmitted over high-speed ATM networks. The standard includes subparts such Glossary of Terms as H.262 (MPEG-2 video standard), H.222.0 (MPEG-2 Program and Transport Stream), H.222.1 (MPEG-2 streams over ATM) and a variety of G.7XX audio compression standards. H.310 takes H.320 to the next generation of networks (broadband ISDN and ATM). H.320 An ITU-T standard formally known as "Narrow-band Visual Telephone Systems and Terminal Equipment." H.320 includes a number of individual recommendations for coding, framing, signaling and establishing connections (H.221, H.230, H.321, H.242, and H.261). It applies to point-to-point and multipoint videoconferencing sessions and includes three audio algorithms, G.721, G.722 and G.728. H.321 H.321 adapts H.320 to next-generation topologies such as ATM and broadband ISDN. It retains H.320's overall structure and some of its components, including H.261 and adds enhancements to adapt it to cell-switched networks. H.322 H.322 is an enhanced version of H.320 optimized for networks that guarantee Quality of Service (QoS) for isochronous traffic such as real-time video. It will be first used with IEEE 802.9a isochronous Ethernet LANs. H.323 H.323 extends H.320 to Ethernet, Token-Ring, and other packet-switched networks that do not guarantee QoS. It will support both point-to-point and multipoint operations. QoS issues will be addressed by a centralized gatekeeper component that lets LAN administrators manage video traffic on the backbone. Another integral part of the spec defines a LAN/H.320 gateway that will allow any H.323 node to interoperate with H.320 products. In addition to H.320's H.261 video codec H.323 also H.263, a more sophisticated video codec. Also in the family are H.225 (specifies call control messages); H.245 (specifies messages for opening and controlling channels for media streams)
; and the G.711, G.722, G.723, G.728 and G.729 audio codecs. H.324 H.324 defines a multimedia communication terminal Video Communication: the Whole Picture that operates over POTS lines. It can incorporate H.261 or H.263 video encoding. The H.324 family includes H.223, for multiplexing, H.245 for control, T.120 for audiographics, and the V.90 modem specification. H.332 ITU Recommendation for large multipoint H.323 conferences (defined as "loosely coupled conferences"). H.450.1 ITU Recommendation that defines a generic functional protocol for the support of supplementary services in H.323. H.450.x ITU Recommendation that defines specific supplementary services to H.323 that leverage H.450.1. The switched 384 Kbps dialing standard as defined by an ITU-T Recommendation. The switched 1.544 Mbps dialing standard as defined by an ITU-T Recommendation. The switched 1.920 Mbps dialing standard as defined by an ITU-T Recommendation. handshake The electrical exchange of predetermined signals by devices in preparation of a connection. Once completed, the transmission begins. In video communications, handshake is a process by which codecs interoperate by seeking out a common algorithm. hard disk A sealed mass storage unit that allows archival of large amounts of data and smaller amounts of video or audio information. hardware The mechanical, magnetic and electronic components of a system. Examples are computers, codecs, terminals, and scanners. harmonic distortion A problem caused by the production of nonlinearities in a communications channel in which harmonics of the input frequencies appear in the output channel. harmonics In periodic waves, the component sine waves that are integer multiples-exponents-of the fundamental frequency. Glossary of Terms One of the two analog HDTV standards the EC's Council of Telecommunications Ministers promoted, and which was abandoned in favor of digital formats that were subsequently proposed. High Definition Television. TV display systems with approximately four times the resolution of s
tandard television systems. Systems labeled as HDTV typically offer at least 1,000 lines of resolution and an aspect ratio of 16:9. header A string of bits in a coded bit stream that is used to convey information about the data that follows. head-end The originating point in a cable television system where all the television and telecommunication signals are assembled for transmission over a broadband cable system to the subscribers. Signals may be generated from studios, received from satellites or conveyed via landline or microwave radio trunks. Head-end sites can be located in the same building as the cable operator's business headquarters or they can be sited close to satellite receiving dishes. They are generally equipped with amplifiers or signal regenerators. hertz Hz. The unit of measurement used in analog or frequency-based networks named after the German physicist Heinrich Hertz. One hertz equals one cycle- per-second. high frequency Electromagnetic waves between 3 and 30 MHz. high-pass filter A device that attenuates the frequencies below a particular cut-off point, a high-pass filter is useful in removing extraneous sounds such as hum and microphone boom. holography The recording and representation of three- dimensional objects by interference patterns formed between a beam of coherent light and its refraction from the subject. The hologram is the holographic equivalent of a phonograph. A three-dimensional image is stored which, if broken apart, can still reconstruct a whole image, though at reduced definition. Holograms require laser light for their Video Communication: the Whole Picture creation and, in many cases, for their viewing. horizontal H. In television signals, the horizontal line of video information that is controlled by a horizontal synch pulse. horizontal blanking The period of time during which an electron gun interval shuts off to return from the right side of a monitor or TV screen to the left side in order to start painting a new line of video. horizontal resolution Detail expressed
in pixels that provide chrominance and luminance information across a line of video information. A computer that processes data in a communications network. A network or system signal distribution point where multiple circuits convene and are connected. Some type of switching or information transfer can then take place. Switching hubs can also be used in Ethernet LAN environments in an arrangement whereby a LAN segment might support only one workstation. This relieves congestion through a process called micro-segmenting. The attribute by which a color may be identified within the visible spectrum. Hue refers to the spectral colors of red, orange, yellow, green blue and violet. A huge variety of subjective colors exist between these spectral colors. Huffman encoding A lossless, statistically based entropy-coding compression technique. Huffman encoding is used to compress data in which the most frequently occurring code groups are represented by shorter codes and rarely occurring code groups are represented by longer codes. The idea behind Huffman encoding is similar to that of Morse code in that short, simple codes are assigned to common characters and longer codes are assigned to lesser- used characters. Huffman coding, used in H.261 and JPEG, can reduce a test file by approximately 40%. hybrid fiber/coax Hybrid fiber-coax (HFC) is being used by the cable companies to provide local loop service. HFC uses fiber links from the central site to a neighborhood Glossary of Terms hub. Existing coax cable connects the hub to several hundred nearby homes. Once installed, HFC provides subscribers with telephone service over the cable and, in addition, interactive broadband signaling, which can support videoconferencing over the cable network. See Internet Corporation for Assigned Names and Numbers ICCAN See Internet Corporation for Assigned Names and Numbers frame Intraframe coding, as specified by the MPEG Recommendation in which an individual video frame is compressed without reference to other frames in the sequence. A
lso used in the JPEG compression method. I-frame coding generates much data and is used when there are major scene changes or as a reference frame to eliminate accumulated errors that result from interframe coding techniques. I-Signal In the NTSC color system, the I signal represents the chrominance on the orange-cyan axis. International Electrotechnical Commission. Also synonymous with IXC-interexchange carrier. Institute of Electrical and Electronics Engineers; the organization that evolved from the IRE-the Institute of Radio Engineers. IEEE 802 standards Various IEEE committees that developed LAN physical layer standards and other attributes of LANs and MANs. IEEE 802.9 specifies the isochronous Ethernet communications protocol, which adds 6 Mbps of bandwidth to the same twisted pair copper wire that carriers 10 Mbps Ethernet. This additional bandwidth takes the form of 96 ISDN circuit switched B channels. IEEE P 802.14 is a protocol for "Cable-TV Based Broadband Communications Network," another IEEE 802.X standard that has relevance for video communications Video Communication: the Whole Picture Internet Engineering Task Force. The standards body that adopted the MIME protocol for sending video- enabled e-mail and other compound messages across TCP/IP networks and one of two working bodies of the Internet Activities Board. image An image, for the purposes of this book, refers to a complex set of computer data that represents a picture or other visual data. Images can offer high- resolution (in other words, they can be composed of many pixels-per-square-inch that causes them to have a photographic quality) or low-resolution, (crude animation, sketches and other pictures that contain a minimal number of pixels-per-square-inch. image bit map Digital representation of a graphics display image as a pattern of small units of information that represents one or more pixels. imaging The process of using equipment and applications to capture visual representations and transmit them over telecommunications networks. Ima
ging can be used to move and store medical X-rays, for engineering and design applications that allow engineers to develop 3-D images of a products or components for the purpose of design-refinement, and to store large quantities of documents. impedance The ratio of voltage to current as it is measured along a transmission circuit. in-band signaling Networks exchange information between nodes through the use of signaling. In-band signaling uses the communications path to exchange such information as request for service, addressing, disconnect, busy, etc. In-band signaling has been largely replaced by out-of-band signaling, an example of which is the ITU-T's Signaling System Number 7 or SS7. input The data or signals entering a computer or, more generally, the signal applied to any electronic device or system. input selector A switch or routing device that is used to select video inputs in a digital picture manipulator or keying device. Glossary of Terms integrated circuit An electronic device IC's are made by layering semiconductive materials and packaging both active and passive circuits into a single multi-pin chip. intelligent peripheral In the AIN, the intelligent peripheral or IP collects information from designated "triggers" and issues SS7 requests based on that information. inter-LATA Communications that cross a LATA boundary and, therefore, must be carried by an IXC in accordance with the MFJ. interactive Action in more than one direction, either simultaneously or sequentially. In interactive video there is a bi-directional interplay between two or multiple parties, this is different from television, which is a send-only system that does not allow the receiver to respond to the signal. interactivity The ability of a video system user to control or define the flow of information, including the capability of communicating in return on a peer-to-peer basis. interexchange carrier IXC or IEC. The long distance companies in the US that provide inter-LATA telephony and communications services. This concept is g
oing away as a result of the passage of the Telecommunications Deregulation and Competition Act of 1996. interface A shared boundary or point common to two or more systems across which a specified flow of information passes. If interfaces are carefully specified, it should be possible to plug in, switch on, and operate equipment purchased from different sources. Interface can also refer to a circuit that converts the output of one system or subsystem to a form suitable to the next system or subsystem. interference Unwanted energy received with a signal that scrambles it or otherwise degrades it. interframe coding A compression technique used in video communi- cations in which the redundancies that can be found between successive frames of video information are removed. This type of compression is also referred to as temporal coding. Video Communication: the Whole Picture interlace A technique for "painting" a television monitor in which each video frame is divided into two fields with one field composed of the odd-numbered horizontal scan lines and the other composed of the even- numbered horizontal scan lines. Each field is displayed on an alternating basis-this is called interlacing. It results in very rapid screen refreshment and is done to avoid visual flicker. Interlacing is not used on computer monitors and is not required as part of the US HDTV standard. International Standards A non-treaty standards-making body that, among Organization other things, helped to develop the MPEG and JPEG standards for image compression. See ISO. International One of the specialized agencies of the United Nations Telecommunications that is composed of the telecommunications Union administrations of roughly 113 participating nations. Founded in 1865, it was invented as a telegraphy standards body. It now develops international standards for interconnecting telecommunications equipment across networks. Known as the CCITT until early 1994, the ITU played a big role in developing audiovisual communications standards through its T
International Standards Organization  A non-treaty standards-making body that, among other things, helped to develop the MPEG and JPEG standards for image compression. See ISO.
International Telecommunications Union  One of the specialized agencies of the United Nations, composed of the telecommunications administrations of roughly 113 participating nations. Founded in 1865, it was invented as a telegraphy standards body. It now develops international standards for interconnecting telecommunications equipment across networks. Known as the CCITT until early 1994, the ITU played a big role in developing audiovisual communications standards through its Telecommunications Standardization Sector (see ITU-T). In this book we refer to standards as CCITT Recommendations if they were ratified before 1994 and ITU-T Recommendations if they were adopted after 1994.
Internet  The Internet is a vast collection of computer networks throughout the world. Originally developed to serve the needs of the Department of Defense's Advanced Research Projects Agency, the Internet has become a common vehicle for information exchange, commerce, and communications between individuals and businesses.
Internet Corporation for Assigned Names and Numbers  ICANN, the successor to the Internet Assigned Numbers Authority (IANA), is a not-for-profit organization with an international Board of Directors that oversees the operations of the necessary central coordinating functions of the Internet.
Internet reflector site  A multipoint technique used by Internet-based videoconferencing. A reflector is a server that bounces signals back to all parties of a multipoint connection, allowing any number of users to conference with each other.
internetworking  The ability of LANs to interconnect and exchange information. Bridges and routers are the devices that connect these LANs with each other, with WANs, and with MANs. Users can, therefore, exchange messages, access files on hosts other than those on their own LAN, and access and utilize various inter-LAN applications.
interoperability  The ability of electronic components produced by different manufacturers to communicate across product lines. The trend toward embracing standards has greatly furthered interoperability.
interpolation  A compression technique in which a current frame of video is reconstructed using the differences between it and past and future frames.
intra-LATA  A connection that does not cross a LATA boundary, and that regulated LECs are allowed to carry on an end-to-end basis. Recently passed legislation may make this term obsolete.
intraframe coding  A compression technique whereby redundancies within a video frame are eliminated. DCT is an intraframe coding method. Intraframe coding is also called spatial coding.
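To make the interframe/intraframe distinction concrete, here is a toy frame-differencing sketch; the sample values are invented, and real interframe coders operate on blocks of pixels rather than single samples:

    prev = [12, 12, 12, 80, 80, 12]   # made-up luminance samples, frame N-1
    curr = [12, 12, 12, 82, 80, 12]   # frame N: only one sample has changed

    # Interframe (temporal) coding keeps the residual, which is mostly zeros
    # and therefore cheap to encode:
    residual = [c - p for c, p in zip(curr, prev)]      # [0, 0, 0, 2, 0, 0]

    # The decoder rebuilds frame N from frame N-1 plus the residual:
    rebuilt = [p + r for p, r in zip(prev, residual)]
    assert rebuilt == curr

Intraframe coding, by contrast, would compress each frame on its own, exploiting only the redundancy within that frame.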
inverse multiplexer  Equipment that receives a high-speed input and breaks it up for the network into multiple 56 or 64 Kbps signals so that it can be carried over switched digital service. The I-MUX supports the need for dialing over this network. It also provides the synchronization necessary to recombine the multiple channels into a single integrated transmission at the receiving end. This is necessary because, once the transmission has been subdivided into multiple channels, some parts of the transmission may take a different and longer path to the destination than others. This results in a synchronization problem, even though the difference may be measured in milliseconds. The job of the I-MUX is to make sure that the channels are recombined into a cohesive single transmission.
IP  Internet Protocol. Defined in RFC 791, the Internet Protocol is the network layer for the TCP/IP protocol stack. It is a connectionless, best-effort packet-switched delivery mechanism. By connectionless we mean that IP can carry data between two systems without first requiring their connection. IP does not guarantee that data will reach its destination, nor can it recover lost or damaged packets; for that it relies on the Transmission Control Protocol.
ISDN  Integrated Services Digital Network. ITU-T standard for a digital connection between user and network, over which a multiplicity of services (voice, data, full-motion video, and image) can be delivered. Two interfaces are defined: the Basic Rate Interface and the Primary Rate Interface. The Basic Rate Interface (BRI) provides two 64 Kbps circuit-switched bearer (B) channels for customer data and one 16 Kbps delta (D) signaling channel that is packet-switched. The Primary Rate Interface (PRI) differs between the US and Europe: in Europe it provides thirty 64 Kbps B channels and two 64 Kbps D channels, while in the US it provides twenty-three 64 Kbps B channels and one 64 Kbps D channel.
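The channel arithmetic behind the interfaces just described is simple; the sketch below restates it (the US figures reflect common T1 practice):

    B = 64                        # Kbps per bearer channel
    bri = 2 * B + 16              # BRI: 2B plus a 16 Kbps D channel = 144 Kbps
    pri_europe = 30 * B + 2 * 64  # 30B plus two 64 Kbps slots = 2,048 Kbps (E1)
    pri_us = 23 * B + 64          # 23B+D = 1,536 Kbps; T1 framing adds 8 Kbps
    print(bri, pri_europe, pri_us)   # -> 144 2048 1536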
ISO  The International Standards Organization, a federation of national standards bodies including the US's ANSI and its British equivalent, the British Standards Institution (BSI). ISO members include the national standards bodies of most industrialized countries. The ISO, in conjunction with the ITU-T, continues to develop video communications standards. Their joint efforts have produced the Joint Photographic Experts Group (JPEG) and the Motion Picture Experts Group (MPEG) family of encoding techniques.
isochronous  Channels that are capable of transmitting timing information in addition to the data itself, and which thereby allow the terminals at each end to maintain exact synchronization with each other (pronounced "i-SOCK-ron-us").
IsoENET  An approach to providing isochronous service over Ethernet that derives an additional 96 switched B channels from 10Base-T. National Semiconductor and IBM developed it.
ITCA  International Teleconferencing Association, a professional association organized to promote the use of teleconferencing (audio and video conferencing). Located in Washington, D.C.
ITFS Antenna System  Instructional Television Fixed Service. A type of local distance-learning system that provides one-way over-the-air television service at microwave frequencies. These frequencies are reserved for educational purposes. The signals can be received only by television installations equipped with a converter that changes the signals to NTSC.
ITU  One of the specialized agencies of the United Nations, the International Telecommunications Union was founded in 1865, before telephones were invented, as a telegraphy standards body.
ITU-R  Formerly the United Nations' CCIR, the ITU-R sets international standards that relate to how the radio spectrum is allocated and utilized.
ITU-T  The International Telecommunications Union's Telecommunications Standardization Sector. The ITU-T (formerly the CCITT) is the telephony standards-setting arm of the ITU. The ITU-T developed the H.320, H.323 and H.324 protocol suites that define how videoconferencing codecs interoperate over various types of networks.
IXC  Interexchange carrier; long distance service providers in the US that provide inter-LATA service.
jitter  Random signal distortion, a subjective effect caused by time-base errors. Jitter in a reproduced video image moves from left to right and causes an irregular picture. Jitter can be controlled by time-base correction.
JPEG  Joint Photographic Experts Group. The joint ITU-T/ISO standard for still image compression.
K  A prefix that denotes a factor of one thousand, or 10³. This abbreviation is used frequently when discussing computer networks and digital transmission systems. In computer parlance, K stands for 1,024 bytes, as in 64K bytes of memory. In transmission systems the K stands for 1,000; e.g., 64 Kbps means 64 thousand bits per second of bandwidth.
K-Band  In satellite communications, a frequency band that ranges between 10.9 and 36 GHz.
Kbps  Kilobits per second. The transport of digitized information over a network at a rate of 1,000 bits in any given second.
kHz  Kilohertz. One thousand Hertz, or cycles per second.
kilobyte  1,024 bytes of data.
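The two meanings of "K" described above differ by 24 bits per thousand, which the following two lines make explicit:

    memory_64K = 64 * 1024   # computer parlance: 65,536 bytes of memory
    link_64Kbps = 64 * 1000  # transmission parlance: 64,000 bits per second
    print(memory_64K, link_64Kbps)   # -> 65536 64000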
kiosk  A small structure, open at one or more sides, located in a public place. Kiosks today can be designed and equipped with motion-video-enabled and multimedia displays to enable even the least computer-literate people to access information on a particular topic or product. Typically they use point-and-click or single-press methods for video-enabled information retrieval.
Ku-Band  In satellite communications systems, this frequency band ranges between 10.9 and 11.7 GHz for Direct Broadcast Satellite (DBS) service.
- L -
LAN  Local Area Network. A computer network, usually within a single office complex and increasingly within a single department, that connects workstations, servers, printers, and other devices, and thereby permits resource sharing and message exchange.
LAN segmentation  Splitting one large LAN into multiple smaller LANs. This technique is used to keep LANs from becoming congested with multimedia and desktop video applications.
laser  Light Amplification by the Stimulated Emission of Radiation. A device that produces optical radiation both in the range of visible light wavelengths and outside this range. Lasers are used in optical fiber networks to transmit signals through a process of oscillation.
LATA  Local Access and Transport Areas. The areas within which the Bell Operating and independent telephone companies can provide transport services. The Telecommunications Deregulation Act of 1996 is changing this distinction as sufficient competition in local access is achieved. The FCC is gradually allowing RBOCs and IXCs to compete outside of their historic constraints.
latency  The transmission delay encountered in a packet-switched network, and the tendency of such networks to slow down when they become congested. Latency is lethal for data types such as voice and video that require constant bit rates to ensure that these time-sensitive data types arrive with minimal and predictable delays.
lavaliere microphone  A small clip-on microphone, popular because it is unobtrusive and maintains a fixed distance to the speaker's mouth.
layer  Layering is an approach taken in the process of developing protocols for compatibility between dissimilar products, services, and applications. In the seven-layer OSI model, layering breaks each step of a transmission between two devices into a discrete set of functions. These functions are grouped within a layer according to what they are intended to accomplish. Each layer communicates with its counterpart through header-defined interfaces. The flexibility offered through the layering approach allows products and services to evolve while accommodating changes made at the layer level.
leased line  A transmission facility reserved from a communications carrier for the exclusive use of a subscriber. See private line.
LEC  Local Exchange Carrier. LECs include the Bell Operating Companies and independent telephone companies that provide the subscriber local loop.
Lempel-Ziv-Welch compression  LZW, a data compression method named after its developers. LZW techniques are founded on the notion that a given group of bytes can be compressed by substituting an index phrase.
lens  An optical device of one or more elements in an illuminating or image-forming system, such as the objective of a camera or projector.
level  The intensity of an electrical signal.
line  A facility between a subscriber and a telecommunications carrier, generally a single analog communications path physically composed of twisted copper wire.
line of sight  Some spectrum-based transmission systems need an unobstructed path between the transmitter and receiver. The ability of the receiver to see the sender is described as having a clear line of sight.
lipsynch  The techniques used to maintain precise coordination between the delivery of sound and the facial movements associated with speech. Lipsynch is required because it takes much longer to process the video portion of a signal than the audio portion (the video part contains much more information and takes longer to compress). To mitigate the delay, a codec incorporates an adjustable audio delay circuit to delay-equalize sounds with faces and allow talking-head images to look natural.
LiveBoard  LiveWorks' group-oriented electronic whiteboard that allows users in different locations to interactively share information. A wireless pen is used to create documents or images viewed and edited in real time over POTS lines. LiveBoard can also display full-motion color video and provide audio for mixed-media applications.
local loop  In telephone networks, the lines that connect customer equipment to the switching system in the central office.
local multipoint distribution service (LMDS)  Broadband wireless local loop service that offers two-way digital broadcasting and interactive service over a special type of microwave.
logarithm  An exponential expression of the power to which a fixed number or base must be raised in order to produce a given number. Logarithms are usually computed to the base of 10 and are used for shortening mathematical calculations; e.g., 10³ = 1,000.
Logarithmic Quantization Step Encoding  A digital audio encoding method that yields 12-bit accuracy using only eight-bit encoding. It is a technique used in CCITT G.711 audio.
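A sketch of the idea behind logarithmic step encoding, using the continuous mu-law curve as a stand-in (mu = 255 is the constant G.711 uses in North America and Japan; a real codec uses a segmented, piecewise-linear version of this curve rather than the formula itself):

    import math

    MU = 255.0  # mu-law companding constant

    def mu_law(x):
        """Map a linear sample x in [-1, 1] onto a logarithmic scale.

        Quiet samples get proportionally more of the coded range than
        loud ones, which is how eight coded bits can approach 12-bit
        perceived accuracy.
        """
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    print(round(mu_law(0.01), 2))  # 0.23: a faint signal keeps ~1/4 of the range
    print(round(mu_law(0.80), 2))  # 0.96: loud signals are squeezed together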
loop filter  H.261 codecs break an original video field into 8x8-pixel blocks, then filter and compress each block. A loop filter is used to separate the image's spatial frequencies so that they can be uniquely compressed.
lossless compression  Techniques that, when they are reversed, yield data identical to the original.
lossy compression  Techniques that, when reversed, do not yield data identical to the original; the reconstructed image contains less information than the original. Lossy compression techniques are good at compressing motion video sequences and achieve much higher compression ratios than lossless techniques.
low-pass filter  A device that attenuates the frequencies above the cut-off point, used in sound synthesis and digital audio sampling.
LSI  Large Scale Integration. Refers to the degree of miniaturization achieved in manufacturing complex integrated circuits.
luma  The brightness signal in a video transmission.
lumen  A measure of light emitted by a source.
luminance  The information about the varying light intensity of an image produced by a television or video camera. Also called brightness.
lux  A contraction of luminance and flux and a basic unit for measuring light intensity. A footcandle is approximately 10 lux.
LZW  Lempel-Ziv-Welch coding, developed by three mathematicians in the 1970s and 1980s. LZW looks at repetitive bit combinations and represents the most commonly occurring sequences with abbreviated codes.
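The textbook form of the LZW compressor is short enough to show in full; the sample string is an arbitrary one chosen for its repetition:

    def lzw_compress(data):
        """Emit dictionary indexes for the longest already-seen sequences."""
        table = {chr(i): i for i in range(256)}  # seed with single bytes
        w, out = "", []
        for ch in data:
            if w + ch in table:
                w += ch                      # keep growing the matched run
            else:
                out.append(table[w])         # emit code for longest match
                table[w + ch] = len(table)   # learn the new, longer run
                w = ch
        if w:
            out.append(table[w])
        return out

    codes = lzw_compress("TOBEORNOTTOBEORTOBEORNOT")
    print(len(codes))   # -> 16 codes for 24 input characters

As the input grows, ever-longer repeated sequences collapse into single index phrases, which is where the compression comes from.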
MAC  1) Multiple Analog Component, one of the original European HDTV formats. 2) In data networking, media access controllers are the interface between an application and an output device.
macroblock  In H.261 encoding, a macroblock is made up of six 8x8-pixel blocks, of which two contain chrominance information and four contain luminance information. A macroblock contains 16x16 pixels' worth of Y (luminance) information and the spatially corresponding 8x8 pixels that contain CB and CR information.
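The macroblock layout just described can be written out as a small data structure; this is only a sketch of the sample counts, not any codec's actual memory layout:

    def make_block():
        return [[0] * 8 for _ in range(8)]       # one 8x8 block of samples

    macroblock = {
        "Y":  [make_block() for _ in range(4)],  # four blocks: 16x16 luminance
        "CB": make_block(),                      # one subsampled chroma block
        "CR": make_block(),                      # one subsampled chroma block
    }
    samples = 4 * 64 + 64 + 64                   # 384 samples per macroblock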
MAN  Metropolitan Area Network.
Mbps  Megabits per second, or approximately one million bits per second.
MCU  See multipoint conferencing (or control) unit.
media  Air, water, space, and solid objects through which information travels. The information that is carried through all natural media takes the form of waves.
Media Access Control  MAC. The network protocol that controls access to a LAN's bandwidth; it could include techniques that reserve bandwidth for isochronous communications.
megabyte  1,048,576 bytes (2²⁰).
Megastream  British Telecom's brand name for a digital system that offers the customer thirty 64 Kbps channels. Used in high-bandwidth videoconferencing systems, whether point-to-point or dial-up (using the ITU-T's H0 dialing standard).
memory  A digital store for computer data that commonly consists of integrated circuits. There are two primary types: read-only memory (ROM) and random-access memory (RAM).
Media Gateway Control Protocol  A protocol for controlling media gateways. It is the combination of Level 3's IPDC and Bellcore's and Cisco's SGCP (Simple Gateway Control Protocol).
mesh topology  A networking scheme whereby any node can communicate directly with any other node.
MFJ  Modified Final Judgment. The out-of-court settlement that broke up the Bell System, severing the Bell Operating Companies from AT&T.
MGCP  See Media Gateway Control Protocol.
microsegmenting  The process of configuring Ethernet and other LANs with a single workstation per segment using hubs and inexpensive wiring. The goal is to remove contention from Ethernet segments to guarantee enough bandwidth for desktop video and multimedia. With each segment having access to a full 10 Mbps of Ethernet bandwidth, users can avail themselves of applications that incorporate compressed video.
microwave  Radio transmission above 1 GHz used for transmitting communications signals, including video pictures, between various sites using dish aerials.
middleware  Middleware is a layer of software that provides the services required to link distributed pieces of an application across a network, typically in a client/server environment. Middleware is transparent, hiding the fact that components of an application are distributed. It helps developers to overcome the difficulties of interfacing application components into heterogeneous network environments.
MIME  Multipurpose Internet Mail Extension. Developed and adopted by the Internet Engineering Task Force, this applications protocol is designed for transmitting mixed-media files across TCP/IP networks.
mixing  The process of combining separate sound and/or visual sources to make a smooth composite.
modulation  Alteration of the amplitude, frequency, or phase of an analog signal by a different frequency in order to impress a signal onto a carrier wave. Modulation can also be used in digital signaling to make multiple signals share a single channel; in the digital realm, this is generally achieved by dividing a channel into time slots, into which each separate signal contributes bits in turn.
moiré  In a video image, a wavy pattern caused by the combination of excessively high-frequency signals; the mixing of these signals results in a visible low frequency that looks a bit like French watered silk, after which the effect is named.
monitor  A precision display for viewing a picture. The word is now often applied to any TV set with video (as opposed to RF) input. A receiver-monitor is a set that is both a conventional TV used for viewing broadcast television and one that also accepts video input from a computer. Also refers to computer displays.
monochrome  Reproduction in a single color, normally as a black-and-white picture.
motion compensation  An interframe coding technique that examines statistics of previous video frames to predict subsequent ones. In a motion sequence each pixel can have a distinct displacement from one frame to the next. From a practical point of view, however, adjacent pixels tend to have the same displacement. Motion compensation uses motion vectors and rate-of-change analysis to exploit inter-frame similarities. An image is divided into rectangular blocks, and a single displacement vector describes the movement of all pixels within each block.
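A minimal block-matching sketch in the spirit of the motion compensation entry: find a block's displacement between two frames by minimizing the sum of absolute differences (SAD) over a small search window. One-dimensional rows of invented sample values stand in for 2-D blocks, and edge handling is ignored, to keep the example short:

    def best_displacement(prev_row, curr_block, start, window=2):
        """Return the offset into prev_row (relative to start) with least SAD."""
        def sad(offset):
            ref = prev_row[start + offset : start + offset + len(curr_block)]
            return sum(abs(a - b) for a, b in zip(ref, curr_block))
        return min(range(-window, window + 1), key=sad)

    prev_row = [0, 0, 5, 9, 5, 0, 0, 0]   # one row of the previous frame
    curr = [5, 9, 5]                      # block of the current frame at index 3
    print(best_displacement(prev_row, curr, start=3))  # -> -1: moved one sample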
Motion JPEG  A compression technique that applies JPEG compression to each frame of a video clip.
motion vectors  In the H.261 Recommendation, this optional calculation can be included in a macroblock. Motion vector data consist of one code word for the horizontal information followed by a code word for the vertical information. The code words can be of variable length and are specified in the H.261 standard. Performing motion vector calculations places more demand on a processor, and many systems do not include this capability, which is optional for the source coder. If a source encoder performs the motion compensation processing, the H.261 Recommendation requires that the decoder use it during decompression.
motion video capture board  A device designed to capture, digitize, and compress multiple frames of video for storage on magnetic or optical storage media.
Motion Picture Experts Group  ISO/IEC JTC1/SC29/WG11 standard that specifies a variable-rate compression algorithm that can be used to compress full-motion image sequences at low bit rates. MPEG is an international family of standards that are not used for videoconferencing, but are more generally used for video images stored on CD-ROM or video servers and retrieved in a computer or television application. MPEG compresses YUV SIF images. It uses the same DCT algorithm used in H.261 and H.263 but uses 16-by-16 pixel blocks for motion compensation.
MPEG-1  Joint ISO/IEC recommendation known as "Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbps." MPEG-1 was completed in October 1992. It begins with a rather low image resolution: about 240 lines by 352 pixels per line. This image is compressed at 30 frames per second. It provides a digital image transfer rate up to about 1.5 Mbps and compression rates of about 100:1. The MPEG-1 specification consists of four parts: system, video, audio, and compliance testing.
MPEG-2  MPEG-2 is Committee Draft 13818, and can be found in documents MPEG93/N601, N602, and N603. This draft was completed in November 1993. Its formal name is "Generic Coding of Moving Pictures and Associated Audio." MPEG-2 is targeted for use with high-bandwidth applications aimed at digital television broadcasts. It specifies 720-by-480 pixel resolution at 60 fields per second, with data transfer rates that range from 500 Kbps to more than 2 Mbps.
MPEG-4  "Very Low Bitrate Audio-Visual Coding." An 11/94 Call for Proposals by ISO/IEC resulted in a Working Draft specification in 11/96.
MS-DOS  A computer operating system developed by Microsoft and originally aimed at the IBM PC.
multicasting  Conferencing applications that typically use packet-switched transmission to broadcast a signal that can be received by multiple recipients, all of whom are listening on a single multicasting address (a receiver-side sketch appears after this group of entries).
multi-mode fiber  Optical fiber with a central core of a diameter sufficient to permit light pulses to zig-zag from side to side as well as to travel straight down the middle of the core. Step-indexed fibers have a sudden change of refractive index where cladding meets core; signals are reflected back at this boundary. Graded-index fibers have a gradual change of refractive index with radial distance from the center of the core; pulses are refracted back by this change.
multimedia  According to the Defense Information Systems Agency: "Two or more media types (audio, video, imagery, text, and data) electronically manipulated, integrated, and reconstructed in synchrony." Generally, multimedia refers to these media in a digital format, for it is the digitization of voice and video information that is lending much of the power to multimedia communications.
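As a rough sketch of how a recipient "listens on a single multicasting address," the snippet below joins an IP multicast group using the standard sockets interface; the group address and port are arbitrary illustration values, not assigned to any real service:

    import socket
    import struct

    GROUP, PORT = "224.1.1.1", 5004   # example group and port only

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))                       # all joiners share this port
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    # Every host that has joined GROUP now receives each datagram sent to it.
    data, sender = sock.recvfrom(2048)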
Multiple Systems Operator  When one cable company runs multiple cable systems, it creates what is referred to as a Multiple Systems Operator. MSOs benefit from economies of scale in areas of equipment procurement, marketing, management, and technical expertise. Local decisions are left to the individual cable company operators.
multiplex  A method of transmitting multiple signals onto a single circuit in a manner such that each can be recovered intact.
multiplexer  Electronic equipment that allows multiple signals to share a single communications circuit. Multiplexers are often called "muxes." There are many different types of multiplexers, including T-1 and E-1 multiplexers that use time division multiplexing, statistical and frequency division multiplexers, etc. Some use analog transmission schemes, some digital. A compatible demultiplexer is required to separate the signal back into its components.
multiplexing  The process of combining multiple signals onto a single circuit using various means.
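Round-robin time-division multiplexing, the digital scheme the modulation and multiplexing entries describe, is easy to show in miniature; the three source streams here are invented placeholders:

    sources = {"voice": "VVVV", "data": "DDDD", "video": "IIII"}
    streams = {name: iter(s) for name, s in sources.items()}

    slots = []
    for _ in range(4):                  # four frames of three time slots each
        for name in sources:            # fixed slot order within every frame
            slots.append(next(streams[name]))
    print("".join(slots))               # -> VDIVDIVDIVDI

    # A compatible demultiplexer recovers a signal by taking every third slot:
    print("".join(slots[0::3]))         # -> VVVV (the voice channel)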
multipoint control unit  A device that bridges together multiple inputs so that three or more parties can participate in a videoconference. An MCU uses fast switching techniques to patch the presenter's or speaker's input to the output ports representing the other participants. An ITU-T H.320-compliant MCU must meet the requirements of H.231 and H.243.
Multirate ISDN  A bandwidth-on-demand scheme in which telephone customers access, and subsequently pay for, ISDN bandwidth on an as-needed basis.
multisensory  Involving more than one human sense.
- N -
N-ISDN  Narrowband ISDN. Another name for conventional ISDN that defines bandwidths between 64 Kbps and 2.048 Mbps.
narrowband  Networks designed for voice transmission (typically analog voice), but which have been adapted to accommodate the transmission of low-speed data (up to 19.2 Kbps). Trying to get the existing narrowband public switched telephone network to transmit video is a significant challenge for the local telephone operating companies.
narrowcast  A cable television or broadcast program aimed at a very small segment of the market. Typically these are specialized programs likely to be of interest to a relatively limited audience.
network  Interconnected computers or other hardware and software between which data can be transferred. Network transmission media can vary; they can be optical fiber, metallic media, or a wireless arrangement.
Nipkow disc  A disc with holes that, when rotated, permits some light reflected from a screen to hit a photo tube. This creates a minuscule current flow that is proportionate to the light in the image being reproduced. This was the basis of early television as pioneered by John Logie Baird in England in the 1920s. Paul Nipkow, a German physicist, invented it in 1884.
NLM  NetWare Loadable Module. An application that resides in a Novell NetWare server and coexists with the core OS. NLMs are optimized to this environment and provide superior service to applications that run outside the core.
node  A concentration point in a network where numerous trunks come together at the same switch.
noise  Any unwanted element accompanying program material. This can take the form of snow, random horizontal streaks, or large smears of varying color.
NT-1  Network Termination 1, an ISDN standards-based device that converts between the U-Interface and the S/T-Interface. The NT-1 has a jack for the U-Interface from the wall and one or more jacks for the S/T-Interface connection to the PC or videoconferencing system, as well as an external power supply.
NTSC  National Television Standards Committee, the body formed by the FCC to develop television standards in the US. NTSC video has a format of 525 scan lines. Video is sent at a rate of 30 frames per second in an interlaced format, in which two fields comprise a frame (60 Hz). The bandwidth required to broadcast this signal is 4 MHz. NTSC uses a trichromatic color system known as YIQ. The NTSC signal's line frequency is 15.75 kHz. Finally, the color subcarrier has a frequency of 3.58 MHz.
NTT  Nippon Telegraph and Telephone.
Nx384  N-by-384. The ITU-T's approach to developing a standard algorithm for video codec interoperability that was expanded into the Px64, or H.261, standard, approved in 1990. It was based on the ITU-T's H0 switched digital network standard.
Nyquist's Theorem  A formula that defines the sampling frequency in analog-to-digital conversions; it states that a signal must be sampled at twice its bandwidth in order to be digitally characterized without distortion. For a sine wave, the sampling frequency must be no less than twice the maximum signal frequency.
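Applied to telephone speech, the theorem gives the familiar PCM sampling rate; the 3,400 Hz voice-band ceiling below is assumed for illustration (it matches common telephony practice):

    f_max = 3400               # Hz: highest frequency to be captured
    f_min_sample = 2 * f_max   # the theorem: sample at no less than 2 x f_max
    f_pcm = 8000               # samples per second used by PCM telephony
    assert f_pcm >= f_min_sample   # 8000 >= 6800, leaving margin for filtering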
OC  Optical carrier, as defined in the SONET specification. OC-1, or Optical Carrier Level 1, is defined as 51.84 Mbps. OC-3 is defined as 155.52 Mbps.
octet  An eight-bit datum representing one of 256 binary values. Also known as a byte.
operating system  A fixed program used for operating a computer. An OS accepts system commands and manages space in memory and on storage media.
optical disk drives  Peripheral storage disks, used in video communications to store and play back motion image sequences, often accompanied by sound.
optical fiber  Very thin glass filaments of extremely high purity onto which light from lasers or LEDs is modulated in such a way that the transmission of digitally encoded information takes place at very high speeds. Fiber optic systems are capable of transmitting huge amounts of information. See fiber-optic cable.
OSI  Open Systems Interconnection. An international standard that describes seven layers of communication protocols that allow dissimilar information systems and equipment to be interconnected.
out-of-band signaling  Network signaling (addressing, supervision, and status information) that traverses a network using a separate path, channel, or network than the information (call or transmission) itself. One well-known type of out-of-band signaling is the ITU-T's Signaling System #7.
output devices  Electrical components, such as monitors, speakers, and printers, that receive a computer's data.
P frame  Predictive framing, as specified in the MPEG Recommendation. Pictures are coded by predicting a current frame using a past frame. The picture is broken up into 16x16 pixel blocks. Each block is compared to the block that occupies the same vertical and horizontal position in a previous frame.
packet  A unit of digital data that is switched as a unit. A packet consists of some number of bits (either a fixed number or a variable number), some of which serve as an address. The packet can be sent over a packet-switching network by the best available route (switched virtual circuit) or over a predetermined route (permanent virtual circuit). In a switched virtual circuit or TCP/IP networking environment, packets from a single message may take many different routes. At the destination they are reunited with the other packets that comprise the communication, re-sequenced, and presented to the recipient in their original form.
packet switching  A technique for transmitting data in which the message is subdivided into smaller units called packets. These packets each carry destination address and sequencing information. Packets from many users are then collated on a single communications channel.
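The addressing and sequencing fields the two packet entries mention are enough to re-sequence a message at the destination; the field names and values below are invented for illustration, not any real protocol's layout:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        dest: str      # addressing information
        seq: int       # sequencing information
        payload: str

    def reassemble(packets):
        """Re-sequence packets that arrived out of order by different routes."""
        return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

    arrived = [Packet("host-b", 2, "lo wo"),
               Packet("host-b", 1, "hel"),
               Packet("host-b", 3, "rld")]
    print(reassemble(arrived))   # -> "hello world"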
PAL  Phase Alternate Line. The television standard used in most of Western Europe except France. PAL is not a single format, however; variations are used in Australia/New Zealand, China, Brazil, and Argentina. PAL uses an image format composed of 625 lines. Frames are sent at a rate of 25 per second, and each frame is divided into two fields to accommodate Europe's 50 Hz electrical system. PAL uses YUV, a trichromatic color system.
pan  To pivot a camera in a horizontal direction.
parallel communications  A multi-wire or bus system of communications. The bits that make up the information are transmitted separately, each assigned to a separate wire in the cable. Parallel communications are used frequently in computer-to-printer outputs, in which the data must be transferred very rapidly but over a short distance. The opposite method of transmission is serial, in which information travels over a single channel a bit at a time.
passband  A range of frequencies transmitted by a network or captured by a device.
Pay-Per-View  PPV. A type of CATV or DBS service whereby a subscriber notifies a provider that he or she is interested in receiving a program and is billed for the specific service on a one-time basis.
PC  A self-contained desktop or notebook computer that can be operated as a standalone machine or connected to other PCs or mainframes via telephone lines or a LAN.
PCM  Pulse Code Modulation. A method of converting an analog signal to a digital signal that involves sampling at a specified rate (typically 8,000 samples per second), converting the amplitude of each sample to a number, and expressing the number digitally.
pel  Picture element. Generally synonymous with pixel, the term pel is used in the analog world of television broadcasting. As television moves toward the digital world of HDTV, the term pixel is being embraced by the broadcasting industry.
peripheral equipment  Accessories such as document cameras, scanners, disks, and optical storage devices that work with a system but are not integral to it.
persistence  1) The time taken for an image to die away. 2) The ability to maintain a session despite inactivity or interruption.
phase  The relationship between the zero-crossing points of signals. A full cycle describes a 360-degree arc; a sine wave that crosses the zero point when another has attained its highest point can be said to be 90 degrees out of phase with the other.
phase modulation  The process of shifting the phase of a sine wave in order to modulate it onto a carrier.
phoneme  An element of speech, such as "ch," "k," or "sh," that is used in synthetic speech systems to assemble words for audio output.
phosphor  Substance that emits visible light of specific wavelengths when irradiated, as,