Tag Archives: Video over LAN
Original content from Transition to Digital Newsletter, November 6, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-part-2
Is MPEG compression your friend? Of course, the answer to this question is that MPEG compression is your friend, unless it’s not working properly. When that happens, it’s our job to make it friendly again. This “Transition to Digital” tutorial continues the discussion from the preceding mid-October “Transition to Digital” tutorial about monitoring and evaluating MPEG-2 streams.
Streams are made of packets with headers and are filled with metadata, compressed video or compressed audio. To reconstruct a program from a stream, all of its video, audio and table components, and the corresponding PID assignments, must be correct. Also, there must be consistency between PSI table contents and the associated video and audio streams. This is a good place to look for trouble in a suspicious MPEG-2 stream.
Program Specific Information
Program Specific Information (PSI) is part of the Transport Stream (TS). PSI is a set of tables needed to demultiplex and sort out PIDs that are tagged to programs. A Program Map Table (PMT) must be decoded to find the audio and video PIDs that identify the content of a particular program. Each program requires its own PMT with a unique PID value.
The master PSI table is the Program Association Table (PAT). If the PAT can’t be found and decoded in the transport stream, no programs can be found, decompressed or viewed.
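The PAT-to-PMT lookup described above can be sketched in a few lines. This is an illustrative fragment, not production code: it assumes `section` already holds one complete, reassembled PAT section (pointer field removed, table_id at byte 0), with field offsets as defined in ISO/IEC 13818-1.

```python
# Hypothetical sketch: walk a PAT section's program loop to find the PMT
# PID assigned to each program. Assumes the section is complete and the
# CRC has already been verified.

def parse_pat(section: bytes) -> dict[int, int]:
    """Map program_number -> PMT PID from a PAT section."""
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    programs = {}
    # The program loop starts after the 8-byte section header and ends
    # just before the 4-byte CRC_32 at the end of the section.
    loop = section[8:3 + section_length - 4]
    for i in range(0, len(loop), 4):
        program_number = (loop[i] << 8) | loop[i + 1]
        pid = ((loop[i + 2] & 0x1F) << 8) | loop[i + 3]
        if program_number != 0:   # 0 points to the network PID, not a PMT
            programs[program_number] = pid
    return programs
```

Each PMT found this way must then be fetched and decoded in turn to reach the actual audio and video PIDs.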
PSI tables must be sent periodically and with a fast repetition rate so channel-surfers don’t feel that program selection takes too long. A critical aspect of MPEG testing is to check and verify the PSI tables for correct syntax and repetition rate.
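A repetition-rate check boils down to measuring the gaps between successive occurrences of a table. As a rough sketch, assuming you have already collected arrival timestamps (in seconds) for each occurrence of a given PSI table: ETSI TR 101 290, for example, flags a PAT that fails to appear at least every 0.5 s.

```python
# Illustrative repetition-rate check. The 0.5 s PAT limit follows the
# ETSI TR 101 290 first-priority PAT_error check; other tables have
# their own limits.

def max_interval(timestamps: list[float]) -> float:
    """Largest gap between consecutive table occurrences."""
    return max(b - a for a, b in zip(timestamps, timestamps[1:]))

def pat_repetition_ok(timestamps: list[float], limit: float = 0.5) -> bool:
    return max_interval(timestamps) <= limit
```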
Another PSI testing scenario is to determine the accuracy and consistency of PSI contents. As programs change or multiplexer provisioning is modified, errors may appear. One is an “Unreferenced PID,” where packets with a PID value are present in the TS that are not referred to in any table. Another would be a “Missing PID,” where no packets exist with the PID value referenced in the transport stream PSI table.
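Both error conditions reduce to a set comparison. The sketch below assumes you have already collected the set of PIDs observed in the TS and the set of PIDs referenced by the PAT and PMTs; the function and variable names are illustrative, not from any particular tool.

```python
# Consistency check for the two error cases described above:
# "Unreferenced PID" = present in the stream but in no table;
# "Missing PID" = referenced in a table but absent from the stream.

RESERVED = {0x0000, 0x0001, 0x1FFF}   # PAT, CAT, null packets

def pid_consistency(observed: set[int], referenced: set[int]):
    unreferenced = observed - referenced - RESERVED
    missing = referenced - observed
    return unreferenced, missing
```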
Good broadcast engineers never forget common sense. Just because there aren’t any unreferenced or missing PIDs doesn’t guarantee the viewer is necessarily receiving the correct program. There could be a mismatch of the audio content from one program being delivered with the video content from another.
Because MPEG-2 allows for multiple audio and video channels, a real-world “air check” is the most common-sense test to ensure that viewers are receiving the correct language and video. It’s possible to use a set-top box with a TV set to do the air check, but it’s preferable to use dedicated MPEG test gear that allows PSI table checks. It’s also handy if the test set includes a built-in decoder with picture and audio displays.
So, all the bits and bytes appear to be organized and in place. How do you evaluate the quality of an MPEG-2 stream? Most use the concept of Quality of Experience (QoE). Some engineers call QoE Perceived Quality of Service (PQoS), because QoE is the quality of service as it is actually perceived by the viewer. In this tutorial, we’ll call the measurement of viewer satisfaction QoE.
QoE methodology for the evaluation of audio and video content provides broadcasters with a variety of choices, covering low, medium or high levels of quality. The QoE evaluation allows operators to pre-determine a specific level of viewer satisfaction and then use it to minimize storage and network resources by allocating only the resources necessary to maintain that particular QoE level.
The most basic recognized method to measure video content QoE is known as referenceless analysis. Essentially, referenceless analysis is what everyone does subconsciously when they watch TV. Using this method of analysis, QoE is not measured by comparing the original video to what is delivered. Instead, the images are visually inspected for artifacts such as blockiness, blurred or jerky video, frame-by-frame if possible. The referenceless analysis approach is based on the theory that viewers don’t know the quality of the original content.
These days, I wouldn’t be so certain. Bigger, brighter, undistorted plasma, LCD and LED screens make artifacts more difficult for even the most casual viewers to ignore. Funny thing about the new non-CRT screens: They don’t “Lie like a Trinitron.” That’s the good news and the bad news for engineers and others in the production and delivery chain.
More scientific evaluations of QoE consist of objective and subjective evaluation procedures, each one taking place after encoding. Subjective quality evaluation requires more eyeballs, and gathering each additional viewer's opinion makes the process more time-consuming.
Objective evaluation methods are based on and make use of multiple scientific metrics. Objective QoE evaluation methodology can provide results quicker, but it requires some physical resources and dedicated test gear.
One objective method of monitoring QoE is to use devices such as the one shown in our image. This device is an Ethernet video quality and service assurance monitoring and troubleshooting probe. Some products such as this provide analysis to the PID level, and may contain a hard drive for offline verification and inspection. Products like this are designed to monitor, analyze and possibly debug IP and MPEG transport quality issues at a problem viewer’s location, the receiving end of an STL, your home, your station’s maintenance shop or anywhere typically described as the video edge. It sure beats investigating problem locations with a portable TV and a 10ft mast.
Quality of Service is the ability to provide different priorities to different applications, users or data streams, or to guarantee a certain level of performance to a specific data stream. QoS may guarantee a required bit rate, delay, jitter, packet dropping probability and bit error rate. Quality of service guarantees are important if the network capacity has little headroom, especially for real-time MPEG-2 streaming, because it often requires a fixed bit rate and is delay-sensitive.
A network that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, often during a session establishment phase. In computer networking and other packet-switched telecommunication networks, the term “traffic engineering” refers to resource reservation controls, not the achieved service quality.
During a session, QoS may monitor the achieved level of performance, such as the data rate and delay, and dynamically control scheduling priorities in the network nodes.
QoS is sometimes used as a quality measure, with many alternative definitions, rather than referring to the ability to reserve resources. Quality of service sometimes refers to a guaranteed level of quality of service. High QoS is often confused with a high level of performance or achieved service quality, such as a high bit rate, low latency and low bit error probability. A high level of performance is, in fact, a QoE factor.
Best-Effort (Non-QoS)
A so-called Best-Effort network or service does not fully support quality of service. It is also not all that unusual in broadcast facilities. Why? Because the technical foundations of most broadcast facilities are built on best-effort overprovisioning and redundancy. Many new devices such as routers and switches support QoS. Many older devices do not. As older devices are replaced, a station's system will ultimately be capable of QoS monitoring and measurements.
A generously overprovisioned best-effort system shouldn’t need to rely on QoS, just as a well designed Master Control shouldn’t need a “Technical Difficulties” graphic. At least that’s the way some IT-centric people I’ve met seem to think. We broadcast engineers know it can’t hurt to have both readily available, just in case.
In the meantime, “Best Effort” can be a good substitute for complicated QoS control mechanisms. Your goal is to provide high-quality program content over a best-effort network by over-provisioning its capacity so that it has more than sufficient headroom for expected peak traffic loads. The resulting absence of network congestion eliminates the need for QoS mechanisms.
What is most interesting about MPEG-2 monitoring and evaluation is that there are more recognized methods worthy of discussion than space allows for now. The next “Transition to Digital” tutorial will address these methods to help you ensure your station’s MPEG streams meet viewer expectations.
The author would like to thank Les Zoltan at DVEO Pro Broadcast Division for his help in the preparation of this tutorial.
Original content from Transition to Digital Newsletter, October 16, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-10162011
One of the advantages of digital video is that it can be compressed and transported in an MPEG stream across an IP network. MPEG compressed digital video requires new sets of test tools and troubleshooting skills using bit stream monitoring and testing to accurately identify problems, or preferably, recognize and identify potential problems before they occur.
The MPEG-2 standard is defined by ISO/IEC 13818 as “the generic coding of moving pictures and associated audio information.” It combines lossy video compression and lossy audio data compression to fulfill bandwidth requirements. The foundation of all MPEG compression systems is asymmetric because the encoder is more sophisticated than the decoder.
MPEG encoders are always algorithmic. Some are also adaptive, using a feedback path. MPEG decoders are not adaptive and perform a fixed function. This works well for applications like broadcasting, where the number of expensive complex encoders is few and the number of simple inexpensive decoders is huge.
The MPEG standards provide little information about encoder process and operation. Rather, they specifically define how a decoder interprets metadata in a bit stream. MPEG metadata tells the decoder the rate at which video was encoded, and it defines the audio coding, channels and other vital stream information.
A decoder that successfully deciphers MPEG streams is called compliant. The genius of MPEG is that it allows different encoder designs to evolve simultaneously. Generic low-cost and proprietary high-performance encoders and encoding schemes all work because they are all designed to talk to compliant decoders.
Asynchronous Serial Interface (ASI) is a serial interface signal where a start bit is sent before each byte, and a stop signal is sent after each byte. This type of start-stop communication without the use of synchronized fixed time intervals was patented in 1916 and was the key technology that made teletype machines possible. Today, an ASI signal is often the final product of MPEG video compression, ready for transmission to a transmitter, microwave or fiber. Unlike uncompressed SDI, an ASI signal can carry one or multiple compressed SD, HD or audio streams. ASI transmission speeds are variable and depend on the user’s requirements.
There are two transmission formats used by the ASI interface, a 188-byte format and a 204-byte format. The 188-byte format is the more common. If Reed-Solomon error correction data is included, the packet can grow an extra 16 bytes to 204 bytes total.
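The cost of the larger format is easy to quantify: the 16 Reed-Solomon parity bytes add 16/188, roughly 8.5 percent, to the line rate for the same payload. A quick back-of-the-envelope helper:

```python
# Arithmetic for the two ASI framing formats: RS(204,188) adds 16 parity
# bytes per 188-byte packet, so carrying a given payload rate with FEC
# enabled costs about 8.5% more line rate.

def line_rate(payload_bps: float, fec: bool = False) -> float:
    packet = 204 if fec else 188
    return payload_bps * packet / 188

overhead = (204 - 188) / 188    # ~0.085, i.e. about 8.5%
```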
What’s the purpose of a general-purpose oscilloscope (GPO) in troubleshooting MPEG? Not much. Specialized technology demands specialized test gear. MPEG streams are complicated, and MPEG-2 streams are more so. Examining MPEG-2 streams is reminiscent of measuring the front porch or counting the number of sync pulse serrations to manually validate an analog video sync pulse. Well, kind of, anyway.
An MPEG-2 stream can be either an elementary stream (ES), a packetized elementary stream (PES) or a transport stream (TS). The ES and PES are files.
Starting with analog video and audio content, individual ESs are created by applying MPEG-2 compression algorithms to the source content in the MPEG-2 encoder. This process is typically called ingest. The encoder creates an individual compressed ES for each audio and video stream. An optimally functioning encoder will look transparent when decoded in a set-top box and displayed on a professional video monitor for technical inspection.
A good ES depends on several factors, such as the quality of the original source material, and the care used in monitoring and controlling audio and video variables upon ingest. The better the baseband signal, the better the quality of the digital file. Also influencing ES quality is the encoded stream bit rate, and how well the encoder applies its MPEG-2 compression algorithms within the allowable bit rate.
MPEG-2 has two main compression components: intraframe spatial compression and interframe motion compression. Encoders use various techniques, some proprietary, to maintain the maximum allowed bit rate while at the same time allocating bits to both compression components. This balancing act can sometimes be unsuccessful. It is a tradeoff between allocating bits for detail in a single frame and bits to represent the changes (motion) from frame to frame. Which is more important?
Researchers are currently investigating what constitutes a good picture. Presently, there is no direct correlation between the data in the ES and subjective picture quality. For now, the only way of checking encoding quality is with the human eye, after decoding.
Figure 1. The transport stream is defined by the syntax and structure of the TS header. Courtesy DVEO Pro Broadcast Division. Click on image to enlarge.
The packetized elementary stream
Individual ESs are essentially endless because the length of an ES is as long as the program itself. Each ES is broken into variable-length packets to create a PES, which contains a header and payload bytes.
The PES header is data about the encoding process the MPEG decoder needs to successfully decompress the ES. Each individual ES results in an individual PES. At this point, audio and video information still reside in separate PESs. The PES is primarily a logical construct and is not really intended to be used for interchange, transport and interoperability. The PES also serves as a common conversion point between TSs and program streams (PSs).
Both the TS and PS are formed by packetizing PES files. During the formation of the TS, additional packets containing tables needed to demultiplex the TS are inserted. These tables are collectively called PSI and will be addressed in detail in a moment. Null packets, containing a dummy payload, may also be inserted to fill the intervals between information-bearing packets. Some packets contain timing information for their associated program, called the program clock reference (PCR). The PCR is inserted into one of the optional header fields of the TS packet. Recovery of the PCR allows the decoder to synchronize its clock to the rate of the original encoder clock.
TS packets are fixed in length at 188 bytes with a minimum 4-byte header and a maximum 184-byte payload. The structure of the TS header is shown in Figure 1. The key fields in the minimum 4-byte header are the sync byte and the packet ID (PID). The sync byte’s function is indicated by its name. It is a fixed value (0x47) used for delineating the beginning of a TS packet.
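Decoding the 4-byte header is simple bit masking. As a minimal sketch (field names are descriptive choices, bit positions per ISO/IEC 13818-1):

```python
# Pull the key fields out of a 188-byte TS packet's 4-byte header.

def parse_ts_header(packet: bytes) -> dict:
    assert packet[0] == 0x47, "sync byte must be 0x47"
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        "scrambling": (packet[3] >> 6) & 0x3,
        "adaptation_field": (packet[3] >> 4) & 0x3,
        "continuity_counter": packet[3] & 0x0F,
    }
```

A monitoring probe repeats exactly this parse on every packet, counting continuity-counter gaps and tallying traffic per PID.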
The PID is a unique address identifier. Every video and audio stream, as well as each PSI table, needs to have a unique PID. The PID value is provisioned in the MPEG multiplexing equipment. Certain PID values are reserved: PID 0x0000 carries the PAT, PID 0x0001 carries the Conditional Access Table (CAT), and PID 0x1FFF is reserved for null packets. Other reserved PID values are specified by organizations such as the Digital Video Broadcasting Group (DVB) and the Advanced Television Systems Committee (ATSC) for electronic program guides and other tables.
In order to reconstruct a program from all its video, audio and table components, it is necessary to ensure that the PID assignment is done correctly and that there is consistency between PSI table contents and the associated video and audio streams. This is one of the main testing issues in MPEG and will be the focus of the next “Transition to Digital” newsletter tutorial.
Note: The author would like to thank Les Zoltan at DVEO Pro Broadcast Division for his help in the preparation of this tutorial.
In a year when many Public/Education/Government (PEG) stations face budget cuts, the PEG station in Andover, Massachusetts has been able to expand and modernize its facilities.
Reorganized as a not-for-profit corporation in January 2008, AndoverTV is upgrading its studio facilities and recently completed a switch to an IP-based IPTV production network. The system, which the station uses to transport programming from remote locations across the town, uses Andover