Category Archives: MPEG-2

Broadcast MPEG-4 H.264 SD Encoders Starting at $3.5K from VidOvation

VidOvation leads the way in providing turn-key solutions for IPTV, Live Television, and Video Distribution on your own network.

Today we are featuring the VidOvation TV family of MPEG-4 AVC H.264 encoders, which offers the industry's highest channel density at prices starting at $3.5K per channel for SD. This is an industry first!

Call us for an additional price break if you act before the end of August!

VidOvation TV VEN-2200 Stand-alone MPEG-4 H.264 Encoder
VidOvation TV VEN-5200 openGear MPEG-4 H.264 Encoder

In addition to providing leading technology, VidOvation can guide you through the design and implementation process of your video system to help you avoid costly mistakes and deliver the highest quality solution for your budget.

The VidOvation encoder platform is unique in the industry, supporting up to 10 channels of SD or HD per rack unit at resolutions up to 1080p60. The VidOvation TV encoder family is also one of the few encoders in its price range to offer both DVB-ASI and IP outputs as standard, and it can be configured to provide SMPTE 2022 forward error correction.


IPTV and Television on the Network | Educational Download

IPTV System

VidOvation TV – Television and Video Distribution on your Network

Download presentations on VidOvation TV systems that deliver a flexible and scalable IPTV solution for your industry. The example market segments below are Business & Enterprise, Entertainment & Hospitality, and Healthcare. The VidOvation TV platform can be fully customized to meet your exact needs while minimizing your CAPEX and OPEX and maximizing your ROI.

IPTV Presentations for Download

  • VidOvation TV Business and Enterprise IPTV Television System
  • VidOvation TV Entertainment and Hospitality IPTV Television System
  • VidOvation TV Healthcare IPTV Television System

Please click here to download your IPTV and Television on the Network presentations.


Understanding Video Traffic Interference

Here is a great discussion on network traffic and video over IP.

By Phil Hippensteel on November 28, 2012

Dear Professor Phil,
At lunch several of us were discussing our new videoconferencing deployment. A debate developed. Some of the group said UDP traffic such as VoIP would be the most likely to interfere with the videoconferencing traffic because it gets high priority, just like the video. Others argued that traffic from TCP data applications would be more of a problem because of their so-called "bursty" nature. Who is correct?
Terrance, Canton, OH

Terrance,

Both will cause problems, but for very different reasons. The group arguing that VoIP will cause problems is making some assumptions. First, to interfere with the video, the voice traffic must be on the same VLAN as the video traffic. Often it is not. Second, if the VoIP and video traffic are on the same physical LAN and have the same priority settings, the VoIP must be using enough of the bandwidth to constrain what is available to the videoconferencing devices. With good design, this should never happen. So, to summarize this side of the argument, VoIP can interfere with videoconferencing, but only if the network design is inadequate.
On the other hand, TCP traffic such as database applications and web traffic can have unpredictable effects on videoconferencing traffic. The burstiness of TCP applications is virtually uncontrollable. It's the way the TCP protocol works. So, when TCP traffic and video share a physical network, TCP has a tendency to grab all of the bandwidth it can. In addition, the TCP algorithm groups packets into blocks whose size depends on network conditions and application design. The resulting bursty nature of the traffic increases variation in delivery relative to time, or jitter. If the jitter becomes excessive, the jitter buffers can't compensate and packets are dropped. This means it is important to separate videoconferencing traffic from data applications using VLANs or completely separate networks.
In addition to all of these facts, my experience has taught me that the group arguing that TCP traffic is more of a problem is indeed correct.
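To put a number on the jitter Professor Phil describes, here is a minimal Python sketch (my own illustration, not part of the original column) that estimates interarrival jitter from packet arrival timestamps, loosely in the spirit of the RFC 3550 smoothed estimator. The nominal packet interval and the arrival times are invented values.

    # Minimal sketch: estimate interarrival jitter from packet arrival times.
    # Assumes packets leave the sender at a constant nominal interval; the
    # timestamps below are made-up illustration values, not measured data.

    NOMINAL_INTERVAL = 0.020  # seconds between packets at the sender (assumed)
    arrival_times = [0.000, 0.021, 0.039, 0.063, 0.081, 0.105]  # hypothetical

    def smoothed_jitter(arrivals, nominal):
        """Running jitter estimate using the 1/16 gain familiar from RFC 3550."""
        jitter = 0.0
        for prev, curr in zip(arrivals, arrivals[1:]):
            deviation = abs((curr - prev) - nominal)  # |actual - expected| spacing
            jitter += (deviation - jitter) / 16.0     # exponential smoothing
        return jitter

    print(f"smoothed jitter: {smoothed_jitter(arrival_times, NOMINAL_INTERVAL) * 1000:.3f} ms")

When bursty TCP traffic shares the link, the spread of those interarrival deviations grows, and once it exceeds what the receiver's jitter buffer can absorb, packets are effectively lost.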

Phil Hippensteel, Ph.D., has spent more than forty years in higher education and now teaches for Penn State Harrisburg.


Visionary Solutions Details its IPTV Encoders

VidOvation is an authorized value-added reseller for Visionary Solutions Inc. (VSI). VSI provides encoding products for edge acquisition and distribution applications using IP technology. Anywhere that content originates, and anywhere content goes, Visionary Solutions is there to help you leverage the convenience and flexibility of IP, whether for acquisition, backhaul, or distribution.

Our solutions are deployed worldwide, powering all types of applications:

  • Enterprise and Institutional Pro A/V
  • Broadcast TV
  • High-end Video Surveillance
  • Government and Military
  • Digital Signage

…and many more.


MPEG-2 basic training, part 3

Original content from Transition to Digital Newsletter, November 16, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-part-3

Broadcast engineering requires a unique set of skills and talents. Some audio engineers claim the ability to hear the difference between tiny nuances such as different kinds of speaker wire. They are known as those with golden ears. Their video engineering counterparts can spot and obsess over a single deviant pixel during a Super Bowl touchdown pass or a "Leave it to Beaver" rerun in real time. They are known as eagle eyes or video experts.

Not all audio and video engineers are blessed with super-senses. Nor do we all have the talent to focus our brain's undivided processing power to discover and discern vague, cryptic and sometimes immeasurable sound or image anomalies with our bare eyes or ears on the fly, myself included. Sometimes, the message can overpower the media. Fortunately for us, and thanks to the Internet and digital video, more objective quality and measurement standards and tools have been developed.

One of those standards is Perceptual Evaluation of Video Quality (PEVQ). It is an end-to-end (E2E) measurement algorithm standard that grades picture quality of a video presentation by a five-point mean opinion score (MOS), one being bad and five being excellent.

PEVQ can be used to analyze visible artifacts caused by digital video encoding/decoding or transcoding processes, RF- or IP-based transmission systems and viewer devices like set-top boxes. PEVQ is suited for next-generation networking and mobile services, including SD and HD IPTV, streaming video, mobile TV, video conferencing and video messaging.

The development of PEVQ began with still images. Evaluation models were later expanded to include motion video. PEVQ can be used to assess degradations of a decoded video stream from the network, such as that received by a TV set-top box, in comparison to the original reference picture as broadcast from the studio. This evaluation model is referred to as end-to-end (E2E) quality testing.

E2E exactly replicates how so-called average viewers would evaluate the video quality based on subjective comparison, so it addresses Quality-of-Experience (QoE) testing. PEVQ is based on modeling human visual behaviors. It is a full-reference algorithm that analyzes the picture pixel-by-pixel after a temporal alignment of corresponding frames of reference and test signal.

Besides an overall quality Mean Opinion Score figure of merit, abnormalities in the video signal are quantified by several key performance indicators (KPI), such as peak signal-to-noise ratios (PSNR), distortion indicators and lip-sync delay.
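PEVQ itself is a standardized full-reference algorithm whose internals go well beyond a simple difference measure, but the PSNR indicator mentioned above is easy to demonstrate. The following Python sketch is my own minimal illustration, assuming 8-bit grayscale frames held in NumPy arrays; a real PEVQ measurement adds temporal alignment and perceptual modeling on top of this kind of pixel-by-pixel comparison.

    # Bare-bones full-reference PSNR between a reference frame and a decoded frame.
    # Assumes 8-bit grayscale frames as NumPy arrays of the same shape; the random
    # "impairment" below is only for illustration.
    import numpy as np

    def psnr(reference: np.ndarray, degraded: np.ndarray) -> float:
        """Peak signal-to-noise ratio in dB for 8-bit frames."""
        mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10((255.0 ** 2) / mse)

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(480, 720), dtype=np.uint8)   # pretend SD frame
    noise = rng.normal(0, 3, size=ref.shape)                      # mild impairment
    deg = np.clip(ref.astype(np.float64) + noise, 0, 255).astype(np.uint8)

    print(f"PSNR: {psnr(ref, deg):.2f} dB")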

PEVQ references
Video quality test algorithms can be divided into three categories based on the reference data made available to them.

A Full Reference (FR) algorithm has access to and makes use of the original reference sequence for a comparative difference analysis. It compares each pixel of the reference sequence to each corresponding pixel of the received sequence. FR measurements deliver the highest accuracy and repeatability but are processing intensive.

A Reduced Reference (RR) algorithm uses a reduced bandwidth side channel between the sender and the receiver, which is not capable of transmitting the full reference signal. Instead, parameters are extracted at the sending side, which help predict the quality at the receiving end. RR measurements are less accurate than FR and represent a working compromise if bandwidth for the reference signal is limited.

A No Reference (NR) algorithm only uses the degraded signal for the quality estimation and has no information about the original reference sequence. NR algorithms are low-accuracy estimates only, because the original quality of the source reference is unknown. A common variant at the upper end of NR algorithms analyzes the stream at the packet level, but not the decoded video at the pixel level. The measurement is consequently limited to a transport stream analysis.

You can find more information on PEVQ at http://www.pevq.org/.

Another widely used MOS algorithm is VQmon. This algorithm was recently updated to VQmon for Streaming Video. It performs real-time analysis of video streamed using the key Adobe, Apple and Microsoft streaming protocols, analyzes video quality and buffering performance, and reports detailed performance and QoE metrics. It uses a packet/frame-based zero-reference approach, with fast performance that enables real-time analysis of the impact that loss of I, B and P frames has on the content, both encrypted and unencrypted.

More information about VQmon is available at http://www.telchemy.com/index.php.

The 411 on MDI
The Media Delivery Index (MDI) measurement is specifically designed to monitor networks that are sensitive to arrival time and packet loss such as MPEG-2 video streams, and is described by the Internet Engineering Task Force document RFC 4445. It measures key video network performance metrics, including jitter, nominal flow rate deviations and instant data loss events for a particular stream.

MDI provides information to detect virtually all network-related impairments for streaming video, and it enables the measurement of jitter on fixed and variable bit-rate IP streams. MDI is typically shown as the ratio of the Delay Factor (DF) to the Media Loss Rate (MLR), i.e. DF:MLR.

DF is the number of milliseconds of streaming data that buffers must handle to eliminate jitter, something like a time-base corrector once did for baseband video. It is determined by first calculating the MDI virtual buffer depth of each packet as it arrives. In video streams, this value is sometimes called the Instantaneous Flow Rate (IFR). When calculating DF, it is known as DELTA.

To determine DF, DELTA is monitored to identify maximum and minimum virtual depths over time. Usually one or two seconds is enough time. The difference between maximum and minimum DELTA divided by the stream rate reveals the DF. In video streams, the difference is sometimes called the Instantaneous Flow Rate Deviation (IFRD). DF values less than 50ms are usually considered acceptable. An excellent white paper with much more detail on MDI is available from Agilent at http://cp.literature.agilent.com/litweb/pdf/5989-5088EN.pdf.

Figure 1. The Delay Factor (DF) dictates the buffer size needed to eliminate jitter.

Using the formula in Figure 1, let's say a 3Mb/s MPEG video stream observed over a one-second interval fills a virtual buffer to a maximum of 3.005Mb and a minimum of 2.995Mb. The difference is 10Kb; divided by the stream rate, it reveals a DF of 3.333 milliseconds, which is the buffering requirement. Thus, to avoid packet loss in the presence of the known jitter, the receiver's buffer must hold 15Kb, which at a 3Mb/s rate injects 5 milliseconds of delay. A device with an MDI rating of 4:0.003, for example, would indicate that the device has a 4 millisecond DF and an MLR of 0.003 media packets per second.
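To make the buffer arithmetic concrete, here is a small Python sketch of the DF calculation. It is my own simplified illustration, not the RFC 4445 reference method: a virtual buffer fills as packets arrive and drains at the nominal stream rate, and DF is the spread between maximum and minimum depth divided by the rate. The packet size and arrival pattern are invented to roughly mimic a 3Mb/s stream.

    # Illustrative Delay Factor (DF) calculation for a constant-rate stream.
    # The virtual buffer fills with each arriving packet and drains at the nominal
    # stream rate; DF = (max depth - min depth) / stream rate. All inputs are
    # invented for illustration.

    STREAM_RATE = 3_000_000      # bits per second (3Mb/s, as in the example above)
    PACKET_BITS = 1_316 * 8      # 7 TS packets per IP packet (a common, assumed size)

    # Hypothetical arrival times over one second; every 50th packet is 0.2ms late.
    arrivals = [i * (PACKET_BITS / STREAM_RATE) + (0.0002 if i % 50 == 0 else 0.0)
                for i in range(int(STREAM_RATE / PACKET_BITS))]

    def delay_factor(arrival_times, packet_bits, rate):
        depth = 0.0
        max_depth = float("-inf")
        min_depth = float("inf")
        prev = arrival_times[0]
        for t in arrival_times:
            depth -= (t - prev) * rate         # drain at the nominal rate since last packet
            depth += packet_bits               # packet arrives, buffer fills
            max_depth = max(max_depth, depth)
            min_depth = min(min_depth, depth)
            prev = t
        return (max_depth - min_depth) / rate  # seconds of buffering needed

    print(f"DF ≈ {delay_factor(arrivals, PACKET_BITS, STREAM_RATE) * 1000:.3f} ms")

With the invented 0.2ms arrival wobble, the sketch reports a DF of about 0.2 milliseconds, comfortably under the 50ms guideline mentioned above.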

Figure 2. The Media Loss Rate (MLR) is used in the Media Delivery Index (MDI).

The MLR, shown in Figure 2, is computed by dividing the number of lost or out-of-order media packets by the observed time in seconds. Out-of-order packets are crucial because many devices don't reorder packets before handing them to the decoder. The best-case MLR is zero. The minimum acceptable MLR for HDTV is generally considered to be less than 0.0005. An MLR greater than zero adds to the time viewing devices need to lock onto the stream, which slows channel surfing and can introduce various ongoing anomalies once locked.
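Since the Figure 2 image survives here only as its caption, note that the relationship itself is simply lost plus out-of-order packets divided by the measurement interval. The Python sketch below is my own simplified illustration (the continuity logic is naive and the sample sequence is invented); it counts a gap as loss when it appears and a late packet as out of order when it finally arrives.

    # Minimal Media Loss Rate (MLR) sketch: (lost + out-of-order packets) per second.
    # Assumes each media packet carries a monotonically increasing sequence number;
    # the received sequence and the one-second interval below are invented.

    def media_loss_rate(sequence_numbers, interval_seconds):
        lost = 0
        out_of_order = 0
        expected = sequence_numbers[0]
        for seq in sequence_numbers:
            if seq == expected:
                expected += 1
            elif seq > expected:             # gap: packets never arrived in order
                lost += seq - expected
                expected = seq + 1
            else:                            # arrived after a later packet
                out_of_order += 1
        return (lost + out_of_order) / interval_seconds

    received = [0, 1, 2, 3, 5, 6, 8, 7, 9]   # packet 4 missing; packet 7 arrives late
    print(f"MLR = {media_loss_rate(received, 1.0):.3f} packets per second")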

Watch that jitter
Just as too much coffee can make you jittery, heavy traffic can make a network jittery, and jitter is a major source of video-related IP problems. Proactively monitoring jitter can help you avert impending QoE issues before they occur.

One way to overload an MPEG-2 stream is with excessive bursts. Packet bursts can cause a network-level or a set-top box buffer to overflow or underrun, resulting in lost packets or empty buffers, which cause macroblocking or black/freeze-frame conditions, respectively. An overload of metadata such as video content PIDs can contribute to this problem.

Figure 3. The S-meter was the first commonly used metric to objectively read and report signal strength at an RF receive site. Photo courtesy of WA0EGI.

Probing a streaming media network at various nodes and under different load conditions makes it possible to isolate and identify devices or bottlenecks that introduce significant jitter or packet loss to the transport stream. Deviations from nominal jitter or data loss benchmarks are indicative of an imminent or ongoing fault condition.

QoE is one of many subjective measurements used to determine how well a broadcaster’s signal, whether on-air, online or on-demand, satisfies the viewer’s perception of the sights and sounds as they are reproduced at his or her location. I can’t help but find some humor in the idea that the ones-and-zeros of a digital video stream can be rated on a gray scale of 1-5 for quality.
Experienced broadcast engineers know the so-called quality of a digital image begins well before the light enters the lens, and with apologies to our friends in the broadcast camera lens business, the image is pre-distorted to some degree within the optical system before the photons hit the image sensors.

QoE or RST?
A scale of 1-5 is what ham radio operators have used for 100 years in the readability part of the Readability, Strength and Tone (RST) code system. While signal strength (S) could be objectively measured with an S-meter such as the one shown in Figure 3, readability (R) was purely subjective, and tone (T) could be subjective, objective or both. Engineers and hams know that as S and/or T diminish, R follows, but that minimum acceptable RST values depend almost entirely on the minimum R figure the viewer or listener is willing to accept. In analog times, the minimum acceptable R figure often varied with the value of the message.

Digital technology and transport remove the viewer or listener's subjective reception opinion from the loop. Digital video and audio are either as perfect as the originator intended or practically useless. We don't need a committee to tell us that. It seems to me the digital cliff falls just south of a 4x5x8 RST. Your opinion may vary.
