
MPEG-2 basic training, part 3

Original content from Transition to Digital Newsletter, November 16, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-part-3

Broadcast engineering requires a unique set of skills and talents. Some audio engineers claim the ability to hear the difference between tiny nuances such as different kinds of speaker wire. They are known as those with golden ears. Their video engineering counterparts can spot and obsess over a single deviant pixel during a Super Bowl touchdown pass or a “Leave it to Beaver” rerun in real time. They are known as eagle eyes or video experts.

Not all audio and video engineers are blessed with super-senses, me included. Nor do we all have the talent to focus our brain’s undivided processing power to discover and discern vague, cryptic and sometimes immeasurable sound or image anomalies with our bare eyes or ears on the fly. Sometimes, the message can overpower the media. Fortunately for us, and thanks to the Internet and digital video, more objective quality measurement standards and tools have been developed.

One of those standards is Perceptual Evaluation of Video Quality (PEVQ). It is an end-to-end (E2E) measurement algorithm standard that grades picture quality of a video presentation by a five-point mean opinion score (MOS), one being bad and five being excellent.

PEVQ can be used to analyze visible artifacts caused by digital video encoding/decoding or transcoding processes, RF- or IP-based transmission systems and viewer devices like set-top boxes. PEVQ is suited for next-generation networking and mobile services, including SD and HD IPTV, streaming video, mobile TV, video conferencing and video messaging.

The development of PEVQ began with still images. Evaluation models were later expanded to include motion video. PEVQ can be used to assess degradations of a decoded video stream from the network, such as that received by a TV set-top box, in comparison to the original reference picture as broadcast from the studio. This evaluation model is referred to as end-to-end (E2E) quality testing.

E2E exactly replicates how so-called average viewers would evaluate the video quality based on subjective comparison, so it addresses Quality-of-Experience (QoE) testing. PEVQ is based on modeling human visual behaviors. It is a full-reference algorithm that analyzes the picture pixel-by-pixel after a temporal alignment of corresponding frames of reference and test signal.

Besides an overall MOS figure of merit, abnormalities in the video signal are quantified by several key performance indicators (KPI), such as peak signal-to-noise ratio (PSNR), distortion indicators and lip-sync delay.

PEVQ references
Video quality test algorithms can be divided into three categories according to the reference data available to them.

A Full Reference (FR) algorithm has access to and makes use of the original reference sequence for a comparative difference analysis. It compares each pixel of the reference sequence to each corresponding pixel of the received sequence. FR measurements deliver the highest accuracy and repeatability but are processing intensive.
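As an illustration of the pixel-by-pixel comparison an FR algorithm performs, here is a minimal sketch in Python computing PSNR, one of the KPIs mentioned above, between a reference frame and a degraded copy. The frame data and the uniform noise model are invented for the example; a real FR tool would first temporally align the two sequences.

```python
import math
import random

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * math.log10(peak ** 2 / mse)

# Simulated 8-bit reference frame and a received copy with small uniform noise
random.seed(0)
ref = [random.randrange(256) for _ in range(10000)]
rx = [min(255, max(0, p + random.randint(-5, 5))) for p in ref]
print(round(psnr(ref, rx), 1))
```

The higher the PSNR, the closer the received frame is to the reference; identical frames yield an infinite ratio.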

A Reduced Reference (RR) algorithm uses a reduced bandwidth side channel between the sender and the receiver, which is not capable of transmitting the full reference signal. Instead, parameters are extracted at the sending side, which help predict the quality at the receiving end. RR measurements are less accurate than FR and represent a working compromise if bandwidth for the reference signal is limited.

A No Reference (NR) algorithm only uses the degraded signal for the quality estimation and has no information about the original reference sequence. NR algorithms are low-accuracy estimates only, because the original quality of the source reference is unknown. A common variant at the upper end of NR algorithms analyzes the stream at the packet level, but not the decoded video at the pixel level. The measurement is consequently limited to a transport stream analysis.

Figure 1. The Delay Factor (DF) dictates buffer size needed to eliminate jitter.
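A packet-level NR check of this kind can be as simple as watching the 4-bit continuity counter that MPEG-2 TS packets carry per PID. Here is a hedged sketch; it ignores the adaptation-field and duplicate-packet exceptions the standard allows, and the packet tuples are invented for the example:

```python
def continuity_errors(packets):
    """Count continuity-counter discontinuities per PID in an MPEG-2 TS.
    `packets` is a sequence of (pid, continuity_counter) tuples; the CC is
    a 4-bit field that increments modulo 16 for each packet of a PID."""
    last = {}
    errors = 0
    for pid, cc in packets:
        if pid in last and cc != (last[pid] + 1) % 16:
            errors += 1  # gap in the counter implies lost or reordered packets
        last[pid] = cc
    return errors

# One packet dropped on PID 0x100: the counter jumps from 2 to 4
print(continuity_errors([(0x100, 1), (0x100, 2), (0x100, 4)]))  # → 1
```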

You can find more information on PEVQ at http://www.pevq.org/.

Another widely used MOS algorithm is VQmon. This algorithm was recently updated to VQmon for Streaming Video. It performs real-time analysis of video streamed using the key Adobe, Apple and Microsoft streaming protocols, analyzes video quality and buffering performance, and reports detailed performance and QoE metrics. It uses a packet/frame-based zero-reference approach, with fast performance that enables real-time analysis of the impact that loss of I, B and P frames has on the content, both encrypted and unencrypted.

More information about VQmon is available at http://www.telchemy.com/index.php.

The 411 on MDI
The Media Delivery Index (MDI) measurement is specifically designed to monitor networks that are sensitive to arrival time and packet loss such as MPEG-2 video streams, and is described by the Internet Engineering Task Force document RFC 4445. It measures key video network performance metrics, including jitter, nominal flow rate deviations and instant data loss events for a particular stream.

MDI provides information to detect virtually all network-related impairments for streaming video, and it enables the measurement of jitter on fixed and variable bit-rate IP streams. MDI is typically shown as the ratio of the Delay Factor (DF) to the Media Loss Rate (MLR), i.e. DF:MLR.

DF is the number of milliseconds of streaming data that buffers must handle to eliminate jitter, something like a time-base corrector once did for baseband video. It is determined by first calculating the MDI virtual buffer depth of each packet as it arrives. In video streams, this value is sometimes called the Instantaneous Flow Rate (IFR). When calculating DF, it is known as DELTA.

To determine DF, DELTA is monitored to identify maximum and minimum virtual depths over time. Usually one or two seconds is enough time. The difference between maximum and minimum DELTA divided by the stream rate reveals the DF. In video streams, the difference is sometimes called the Instantaneous Flow Rate Deviation (IFRD). DF values less than 50ms are usually considered acceptable. An excellent white paper with much more detail on MDI is available from Agilent at http://cp.literature.agilent.com/litweb/pdf/5989-5088EN.pdf.

Using the formula in Figure 1, let’s say a 3Mb/s MPEG video stream observed over a one-second interval feeds a virtual buffer at a maximum rate of 3.005Mb and a minimum of 2.995Mb. The difference between maximum and minimum, 10Kb in this case, divided by the stream rate reveals the buffer requirement: 10Kb divided by 3Mb/s is 3.333 milliseconds, which is the DF. Thus, to avoid packet loss in the presence of the known jitter, the receiver’s buffer must hold at least 10Kb, which at a 3Mb/s rate injects about 3.3 milliseconds of delay. A device with an MDI rating of 4:0.003, for example, would indicate that the device has a 4 millisecond DF and an MLR of 0.003 media packets per second.
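The DF arithmetic above can be sketched as follows; the sample values mirror the 3Mb/s case, and the function name is ours, not from RFC 4445:

```python
def delay_factor_ms(delta_bits, stream_rate_bps):
    """DF: spread between the maximum and minimum virtual-buffer depth
    (DELTA) over the observation window, divided by the stream rate,
    expressed in milliseconds."""
    ifrd = max(delta_bits) - min(delta_bits)  # Instantaneous Flow Rate Deviation
    return 1000.0 * ifrd / stream_rate_bps

# Virtual-buffer depths (bits) sampled over a one-second interval
samples = [3_005_000, 3_000_000, 2_995_000]
print(round(delay_factor_ms(samples, 3_000_000), 3))  # → 3.333
```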

The MLR formula in Figure 2 is computed by dividing the number of lost or out-of-order media packets by observed time in seconds. Out-of-order packets are crucial because many devices don’t reorder packets before handing them to the decoder. The best-case MLR is zero. The minimum acceptable MLR for HDTV is generally considered to be less than 0.0005. An MLR greater than zero adds time for viewing devices to lock into the higher MLR, which slows channel surfing and can introduce various ongoing anomalies when locked in.

Figure 2. The Media Loss Rate (MLR) is used in the Media Delivery Index (MDI).
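The MLR calculation and the combined DF:MLR readout can be sketched as follows; the packet counts are invented for the example:

```python
def media_loss_rate(lost_packets, out_of_order_packets, interval_s):
    """MLR: lost plus out-of-order media packets per second of observation."""
    return (lost_packets + out_of_order_packets) / interval_s

def mdi(delay_factor_ms, mlr):
    """MDI is conventionally reported as the ratio DF:MLR."""
    return f"{delay_factor_ms}:{mlr}"

# Two lost and one out-of-order packet over a 1000-second observation,
# paired with a 4ms Delay Factor
print(mdi(4, media_loss_rate(2, 1, 1000)))  # → 4:0.003
```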

Watch that jitter
Just as too much coffee can make you jittery, heavy traffic can make a network jittery, and jitter is a major source of video-related IP problems. Proactively monitoring jitter can alert you to impending QoE issues in time to avert them.

One way to overload an MPEG-2 stream is with excessive bursts. Packet bursts can cause a network-level or a set-top box buffer to overflow or underrun, resulting in lost packets or empty buffers, which cause macroblocking or black/freeze-frame conditions, respectively. An overload of metadata such as video content PIDs can contribute to this problem.

Figure 3. The S-meter was the first commonly used metric to objectively read and report signal strength at an RF receive site. Photo courtesy of WA0EGI.

Probing a streaming media network at various nodes and under different load conditions makes it possible to isolate and identify devices or bottlenecks that introduce significant jitter or packet loss to the transport stream. Deviations from nominal jitter or data loss benchmarks are indicative of an imminent or ongoing fault condition.

QoE is one of many subjective measurements used to determine how well a broadcaster’s signal, whether on-air, online or on-demand, satisfies the viewer’s perception of the sights and sounds as they are reproduced at his or her location. I can’t help but find some humor in the idea that the ones-and-zeros of a digital video stream can be rated on a gray scale of 1-5 for quality.
Experienced broadcast engineers know the so-called quality of a digital image begins well before the light enters the lens, and with apologies to our friends in the broadcast camera lens business, the image is pre-distorted to some degree within the optical system before the photons hit the image sensors.

QoE or RST?
A scale of 1-5 is what ham radio operators have used for 100 years in the readability part of the Readability, Strength and Tone (RST) code system. While signal strength (S) could be objectively measured with an S-meter such as the one shown in Figure 3, readability (R) was purely subjective, and tone (T) could be subjective, objective or both. Engineers and hams know that as S and/or T diminish, R follows, but that minimum acceptable RST values depend almost entirely on the minimum R figure the viewer or listener is willing to accept. In analog times, the minimum acceptable R figure often varied with the value of the message.

Digital technology and transport removes the viewer or listener’s subjective reception opinion from the loop. Digital video and audio are either as perfect as the originator intended or practically useless. We don’t need a committee to tell us that. It seems to me the digital cliff falls just south of a 4x5x8 RST. Your opinion may vary.


MPEG-2 basic training, part 2

Original content from Transition to Digital Newsletter, November 6, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-part-2

Is MPEG compression your friend? Of course, the answer to this question is that MPEG compression is your friend, unless it’s not working properly. When that happens, it’s our job to make it friendly again. This “Transition to Digital” tutorial continues the discussion from the preceding mid-October “Transition to Digital” tutorial about monitoring and evaluating MPEG-2 streams.

Streams are made of packets with headers and are filled with metadata, compressed video or compressed audio. To reconstruct a program from a stream, all of its video, audio and table components, and the corresponding PID assignments, must be correct. Also, there must be consistency between PSI table contents and the associated video and audio streams. This is a good place to look for trouble in a suspicious MPEG-2 stream.

Program Specific Information
Program Specific Information (PSI) is part of the Transport Stream (TS). PSI is a set of tables needed to demultiplex and sort out PIDs that are tagged to programs. A Program Map Table (PMT) must be decoded to find the audio and video PIDs that identify the content of a particular program. Each program requires its own PMT with a unique PID value.

The master PSI table is the Program Association Table (PAT). If the PAT can’t be found and decoded in the transport stream, no programs can be found, decompressed or viewed.

PSI tables must be sent periodically and with a fast repetition rate so channel-surfers don’t feel that program selection takes too long. A critical aspect of MPEG testing is to check and verify the PSI tables for correct syntax and repetition rate.

Another PSI testing scenario is to determine the accuracy and consistency of PSI contents. As programs change or multiplexer provisioning is modified, errors may appear. One is an “Unreferenced PID,” where packets with a PID value are present in the TS that are not referred to in any table. Another would be a “Missing PID,” where no packets exist with the PID value referenced in the transport stream PSI table.
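Both error classes amount to a set comparison between the PIDs the PSI tables reference and the PIDs actually observed in the TS. A minimal sketch, with hypothetical PID values chosen for the example:

```python
def check_pids(referenced_pids, observed_pids):
    """Flag 'unreferenced' PIDs seen in the TS but absent from the PSI tables,
    and 'missing' PIDs referenced in the PSI tables but never observed."""
    referenced, observed = set(referenced_pids), set(observed_pids)
    return {
        "unreferenced": sorted(observed - referenced),
        "missing": sorted(referenced - observed),
    }

# Hypothetical case: the PMTs reference 0x100 (video) and 0x101 (audio),
# but the stream carries 0x100 and an unexpected 0x102
result = check_pids([0x100, 0x101], [0x100, 0x102])
print(result)  # 0x102 is unreferenced, 0x101 is missing
```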

Good broadcast engineers never forget common sense. Just because there aren’t any unreferenced or missing PIDs doesn’t guarantee the viewer is necessarily receiving the correct program. There could be a mismatch of the audio content from one program being delivered with the video content from another.

Because MPEG-2 allows for multiple audio and video channels, a real-world “air check” is the most common-sense test to ensure that viewers are receiving the correct language and video. It’s possible to use a set-top box with a TV set to do the air check, but it’s preferable to use dedicated MPEG test gear that allows PSI table checks. It’s also handy if the test set includes a built-in decoder with picture and audio displays.

QoE
So, all the bits and bytes appear to be organized and in place. How do you evaluate the quality of an MPEG-2 stream? Most use the concept of QoE. Some engineers call QoE Perceived Quality of Service (PQoS), because QoE is the quality of service as it is actually perceived by the viewer. In this tutorial, we’ll call the measurement of viewer satisfaction QoE.

QoE methodology for the evaluation of audio and video content provides broadcasters with a variety of choices, covering low, medium or high levels of quality. The QoE evaluation allows operators to pre-determine a specific level of viewer satisfaction and then use it to minimize storage and network resources by allocating only the resources necessary to maintain that particular QoE level.

The most basic recognized method to measure video content QoE is known as referenceless analysis. Essentially, referenceless analysis is what everyone does subconsciously when they watch TV. Using this method of analysis, QoE is not measured by comparing the original video to what is delivered. Instead, the images are visually inspected for artifacts such as blockiness, blurred or jerky video, frame-by-frame if possible. The referenceless analysis approach is based on the theory that viewers don’t know the quality of the original content.

These days, I wouldn’t be so certain. Bigger, brighter, undistorted plasma, LCD and LED screens make artifacts more difficult for even the most casual viewers to ignore. Funny thing about the new non-CRT screens: They don’t “Lie like a Trinitron.” That’s the good news and the bad news for engineers and others in the production and delivery chain.

More scientific evaluations of QoE consist of objective and subjective evaluation procedures, each taking place after encoding. Subjective quality evaluation requires more eyeballs, and the process grows more time-consuming with each additional viewer’s opinion.

Objective evaluation methods are based on and make use of multiple scientific metrics. Objective QoE evaluation methodology can provide results quicker, but it requires some physical resources and dedicated test gear.

One objective method of monitoring QoE is to use devices such as the one shown in our image. This device is an Ethernet video quality and service assurance monitoring and troubleshooting probe. Some products such as this provide analysis to the PID level, and may contain a hard drive for offline verification and inspection. Products like this are designed to monitor, analyze and possibly debug IP and MPEG transport quality issues at a problem viewer’s location, the receiving end of an STL, your home, your station’s maintenance shop or anywhere typically described as the video edge. It sure beats investigating problem locations with a portable TV and a 10ft mast.

QoS
Quality of Service is the ability to provide different priorities to different applications, users or data streams, or to guarantee a certain level of performance to a specific data stream. QoS may guarantee a required bit rate, delay, jitter, packet dropping probability and bit error rate. Quality of service guarantees are important if the network capacity has little headroom, especially for real-time MPEG-2 streaming, because it often requires a fixed bit rate and is delay-sensitive.

A network that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, often during a session establishment phase. In computer networking and other packet-switched telecommunication networks, the term “traffic engineering” refers to resource reservation controls, not the achieved service quality.

During a session, QoS may monitor the achieved level of performance, such as the data rate and delay, and dynamically control scheduling priorities in the network nodes.

QoS is sometimes used as a quality measure, with many alternative definitions, rather than referring to the ability to reserve resources. Quality of service sometimes refers to a guaranteed level of quality of service. High QoS is often confused with a high level of performance or achieved service quality, such as a high bit rate, low latency and low bit error probability. A high level of performance is, in fact, a QoE factor.

Best-Effort non QoS
A so-called Best-Effort network or service does not fully support quality of service. It is also not all that unusual in broadcast facilities. Why? Because the technical foundations of most broadcast facilities are built on best-effort overprovisioning and redundancy. Many new devices such as routers and switches support QoS. Many older devices do not. As older devices are replaced within a station’s system, it will ultimately be capable of QoS monitoring and measurements.

A generously overprovisioned best-effort system shouldn’t need to rely on QoS, just as a well designed Master Control shouldn’t need a “Technical Difficulties” graphic. At least that’s the way some IT-centric people I’ve met seem to think. We broadcast engineers know it can’t hurt to have both readily available, just in case.

In the meantime, “Best Effort” can be a good substitute for complicated QoS control mechanisms. Your goal is to provide high-quality program content over a best-effort network by over-provisioning its capacity so that it has more than sufficient headroom for expected peak traffic loads. The resulting absence of network congestion eliminates the need for QoS mechanisms.

What is most interesting about MPEG-2 monitoring and evaluation is that there are more recognized methods worthy of discussion than space allows for now. The next “Transition to Digital” tutorial will address these methods to help you ensure your station’s MPEG streams meet viewer expectations.

The author would like to thank Les Zoltan at DVEO Pro Broadcast Division for his help in the preparation of this tutorial.


Forward-thinking PEG station turns to IPTV IP-based video system

In a year when many Public/Education/Government stations face budget cuts, the PEG station in Andover, Massachusetts, has been able to expand and modernize its facilities.

Reorganized as a not-for-profit corporation in January 2008, AndoverTV is upgrading its studio facilities and recently completed a switch to an IPTV IP-based production network. The system, which the station uses to transport programming from remote locations across the town, uses Andover
