Tag Archives: video webcasting

Learn Live Video Streaming & Webcasting with Multiple Cameras

Are you interested in Live Webcasting with Multiple Cameras and want to learn more from VidOvation, one of the leading vendors in multi-camera video streaming and webcasting?

Do you want to learn how to prevent costly mistakes in designing and implementing a Webcasting system?

Do you want to know how VidOvation can design a broadcast-quality Webcasting and Multi-Camera Production Switcher System for you at 1/3 the cost of competing systems?

If so, please register below.

Learn more about:

  • How to do a Broadcast Quality Webcast
  • Let’s Start with the Cameras
  • Switchers for Multi-Camera Webcasts
  • Audio Sources for the Webcast
  • Getting the Stream(s) on the Web
  • Stream Live Video via LAN, Wi-Fi, Satellite and Bonded Cellular
  • Hosting the Live Streams
  • Multiple Live Camera Sources
  • Archiving Content
  • Adding Slides and Graphics to my Webcast

Please click to register and download your FREE guide, Live Webcasting & Video Streaming Made Easy with the VidOstream Family.

 

Posted in Applications - Industries, Broadcast, Education, Government and Military, Professional AV - Pro AV, Sports, Video Streaming - Webcasting

VidOvation Expands its Strategy to Bring Affordable, Broadcast Quality Audio and Video to the Corporate and Government Markets

Moving Video Forward

Press Contact:
Buzz Walker
Cognitive Impact
714.447.4993
buzz@cognitiveimpact.com
www.cognitiveimpact.com
or pr@vidovation.com

For Immediate Release

VidOvation Expands its Strategy to Bring Affordable, Broadcast Quality Audio and Video to the Corporate and Government Markets

Simplified Fiber Optic Transport Systems, IPTV, and Multi-camera Video Streaming over Wi-Fi, Wired, and Cellular Networks will be Showcased at NAB 2013

Irvine, CA, March 14, 2013 – VidOvation, a leading technology provider of video and data communication systems to the broadcast television and sports markets, announced today the expansion of its corporate strategy to bring its innovative broadcast expertise and technology to corporate AV and government customers at a system and implementation price point that will encourage early adoption from technology leaders and followers alike.

VidOvation’s expanded strategy will focus on three new technologies: 1) IPTV and Video-over-IP systems implemented in an open architecture to provide high-end features, quality, and performance at a price point comparable to low-end systems; 2) Webcasting and Video Streaming technology providing streaming and encoding for up to 4 cameras over Wi-Fi, wired, and cellular networks at 1/3 the implementation cost of competitive systems; and 3) Fiber optic transmission systems that are the price-performance leader and can connect to SD/HD-SDI, broadband, or RF networks. Continue reading

Posted in Editorial Coverage, NAB Show, News, VidOvation Video Report and Newsletter

Things to Consider When Building an IP Video Application

Creating Your Content
The core of an IP Video solution is encoding. Encoders come in all shapes and sizes and with varying degrees of reliability, functionality, and scalability. Some encoders are re-purposed computers with capture cards, while others are purpose-built network appliances with built-in serving technology. Determining the best compression format must also be addressed; however, your general requirements and budget will likely make the choice for you.

Managing and Securing Your Content

IP Video is a powerful tool, capable of communicating to anyone, anywhere. That doesn’t mean you want your message in the hands of everyone. Being able to manage and secure your IP video solution is crucial to creating an effective viewing experience while protecting your content from prying eyes. Continue reading

Posted in Applications - Industries, Broadcast, Government and Military, H.264, IPTV, JPEG2000, MPEG-2 Basic Training, Professional AV - Pro AV, Sports, Video Networking - Enterprise IPTV, Video Streaming - Webcasting

NAB 2012 Preview from VidOvation

2012 NAB Preview

VidOvation

Booth SU11012

192 Technology Drive, Suite V
Irvine, CA 92618  USA
www.vidovation.com

VidOvation Contact:
Nicole Hollinger
Tel: +1 949.777.5435
pr@vidovation.com

VidOvation at the NAB Show:

VidOvation has decades of experience in video communications systems: video over IP, IPTV, MPEG-4, H.264, JPEG2000, Dirac, SMPTE VC-2, wireless video (Wi-Fi, 5.8GHz, and 60GHz RF), video over CAT5/CAT6 UTP, extenders, splitters, converters, routing switchers, production, distribution, video streaming, webcasting, and digital asset management and storage. At the NAB Show, VidOvation will demonstrate an array of its award-winning solutions; each designed with customer requirements in mind and built to the demanding specifications of today’s media operations. Continue reading

Posted in NAB Show, Video Streaming - Webcasting, Wireless Video

MPEG-2 basic training, part 3

Original content from Transition to Digital Newsletter, November 16, 2011, Ned Soseman – http://broadcastengineering.com/infrastructure/mpeg-2-basic-training-part-3

Broadcast engineering requires a unique set of skills and talents. Some audio engineers claim the ability to hear the difference between tiny nuances such as different kinds of speaker wire. They are known as those with golden ears. Their video engineering counterparts can spot and obsess over a single deviant pixel during a Super Bowl touchdown pass or a “Leave it to Beaver” rerun in real time. They are known as eagle eyes or video experts.

Not all audio and video engineers are blessed with super-senses. Nor do we all have the talent to focus our brain’s undivided processing power to discover and discern vague, cryptic and sometimes immeasurable sound or image anomalies with our bare eyes or ears on the fly, myself included. Sometimes, the message can overpower the media. Fortunately for us, and thanks to the Internet and digital video, more objective quality measurement standards and tools have been developed.

One of those standards is Perceptual Evaluation of Video Quality (PEVQ). It is an end-to-end (E2E) measurement algorithm standard that grades picture quality of a video presentation by a five-point mean opinion score (MOS), one being bad and five being excellent.

PEVQ can be used to analyze visible artifacts caused by digital video encoding/decoding or transcoding processes, RF- or IP-based transmission systems and viewer devices like set-top boxes. PEVQ is suited for next-generation networking and mobile services and includes SD and HD IPTV, streaming video, mobile TV, video conferencing and video messaging.

The development of PEVQ began with still images. Evaluation models were later expanded to include motion video. PEVQ can be used to assess degradations of a decoded video stream from the network, such as that received by a TV set-top box, in comparison to the original reference picture as broadcast from the studio. This evaluation model is referred to as end-to-end (E2E) quality testing.

E2E exactly replicates how so-called average viewers would evaluate the video quality based on subjective comparison, so it addresses Quality-of-Experience (QoE) testing. PEVQ is based on modeling human visual behaviors. It is a full-reference algorithm that analyzes the picture pixel-by-pixel after a temporal alignment of corresponding frames of reference and test signal.

Besides an overall quality Mean Opinion Score figure of merit, abnormalities in the video signal are quantified by several key performance indicators (KPI), such as peak signal-to-noise ratios (PSNR), distortion indicators and lip-sync delay.
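One of those KPIs, PSNR, is easy to illustrate. Below is a minimal full-reference sketch in Python; the `psnr` helper name and the flat-list frame representation are assumptions for illustration, not part of any PEVQ implementation:

```python
import math

def psnr(reference, received, max_value=255):
    """Full-reference peak signal-to-noise ratio between a reference
    frame and a received frame, each a flat sequence of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no measurable distortion
    return 10 * math.log10(max_value ** 2 / mse)
```

A higher PSNR means the received frame is closer to the reference; identical frames yield an infinite ratio because the mean squared error is zero.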

PEVQ references
Depending on the data made available to the algorithm, video quality test algorithms can be divided into three categories based on available reference data.

A Full Reference (FR) algorithm has access to and makes use of the original reference sequence for a comparative difference analysis. It compares each pixel of the reference sequence to each corresponding pixel of the received sequence. FR measurements deliver the highest accuracy and repeatability but are processing intensive.

A Reduced Reference (RR) algorithm uses a reduced bandwidth side channel between the sender and the receiver, which is not capable of transmitting the full reference signal. Instead, parameters are extracted at the sending side, which help predict the quality at the receiving end. RR measurements are less accurate than FR and represent a working compromise if bandwidth for the reference signal is limited.

A No Reference (NR) algorithm only uses the degraded signal for the quality estimation and has no information of the original reference sequence. NR algorithms are low accuracy estimates only, because the original quality of the source reference is unknown. A common variant at the upper end of NR algorithms analyzes the stream at the packet level, but not the decoded video at the pixel level. The measurement is consequently limited to a transport stream analysis.

Figure 1. The Delay Factor (DF) dictates buffer size needed to eliminate jitter.

You can find more information on PEVQ at http://www.pevq.org/.

Another widely used MOS algorithm is VQmon. This algorithm was recently updated to VQmon for Streaming Video. It performs real-time analysis of video streamed using the key Adobe, Apple and Microsoft streaming protocols, analyzes video quality and buffering performance and reports detailed performance and QoE metrics. It uses packet/frame-based zero reference, with fast performance that enables real-time analysis on the impact that loss of I, B and P frames has on the content, both encrypted and unencrypted.

More information about VQmon is available at http://www.telchemy.com/index.php.

The 411 on MDI
The Media Delivery Index (MDI) measurement is specifically designed to monitor networks that are sensitive to arrival time and packet loss such as MPEG-2 video streams, and is described by the Internet Engineering Task Force document RFC 4445. It measures key video network performance metrics, including jitter, nominal flow rate deviations and instant data loss events for a particular stream.

MDI provides information to detect virtually all network-related impairments for streaming video, and it enables the measurement of jitter on fixed and variable bit-rate IP streams. MDI is typically shown as the ratio of the Delay Factor (DF) to the Media Loss Rate (MLR), i.e. DF:MLR.

DF is the number of milliseconds of streaming data that buffers must handle to eliminate jitter, something like a time-base corrector once did for baseband video. It is determined by first calculating the MDI virtual buffer depth of each packet as it arrives. In video streams, this value is sometimes called the Instantaneous Flow Rate (IFR). When calculating DF, it is known as DELTA.

To determine DF, DELTA is monitored to identify maximum and minimum virtual depths over time. Usually one or two seconds is enough time. The difference between maximum and minimum DELTA divided by the stream rate reveals the DF. In video streams, the difference is sometimes called the Instantaneous Flow Rate Deviation (IFRD). DF values less than 50ms are usually considered acceptable. An excellent white paper with much more detail on MDI is available from Agilent at http://cp.literature.agilent.com/litweb/pdf/5989-5088EN.pdf.

Using the formula in Figure 1, let’s say a 3Mb/s MPEG video stream observed over a one-second interval feeds a maximum of 3.005Mb into a virtual buffer and a minimum of 2.995Mb. The difference between the two is 10Kb, which divided by the stream rate reveals the DF: 10Kb divided by 3Mb/s is 3.333 milliseconds. Thus, to avoid packet loss in the presence of the known jitter, the receiver’s buffer must hold at least 10Kb, which at a 3Mb/s rate injects roughly 3.3 milliseconds of delay. A device with an MDI rating of 4:0.003, for example, would indicate that the device has a 4 millisecond DF and an MLR of 0.003 media packets per second.
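The DF arithmetic can be sketched in a few lines of Python. This is a hypothetical helper for illustration only; it assumes virtual buffer depths are given in bits and the stream rate in bits per second:

```python
def delay_factor(virtual_depths_bits, stream_rate_bps):
    """Delay Factor (DF) in milliseconds: the difference between the
    maximum and minimum virtual buffer depth (DELTA) observed over the
    window, divided by the stream rate."""
    delta = max(virtual_depths_bits) - min(virtual_depths_bits)
    return delta / stream_rate_bps * 1000.0

# Worked example: a 3Mb/s stream whose virtual buffer depth ranges
# from 2.995Mb to 3.005Mb over a one-second observation window.
df_ms = delay_factor([2_995_000, 3_005_000], 3_000_000)  # ≈ 3.333 ms
```

Any DF under the usual 50ms ceiling would be considered acceptable, so this stream's jitter is well within bounds.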

The MLR formula in Figure 2 is computed by dividing the number of lost or out-of-order media packets by observed time in seconds. Out-of-order packets are crucial because many devices don’t reorder packets before handing them to the decoder. The best-case MLR is zero. The minimum acceptable MLR for HDTV is generally considered to be less than 0.0005. An MLR greater than zero adds time for viewing devices to lock into the higher MLR, which slows channel surfing and can introduce various ongoing anomalies when locked in.

Figure 2. The Media Loss Rate (MLR) is used in the Media Delivery Index (MDI).
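The MLR computation just described is simple enough to sketch directly (hypothetical helper name; per RFC 4445, lost and out-of-order media packets are counted together):

```python
def media_loss_rate(lost, out_of_order, interval_seconds):
    """Media Loss Rate (MLR): lost plus out-of-order media packets per
    second over the observation interval (RFC 4445)."""
    return (lost + out_of_order) / interval_seconds
```

Against the HDTV guideline above, a stream would need to stay below an MLR of 0.0005, i.e. fewer than one lost or out-of-order packet per 2,000 seconds on average.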

Watch that jitter
Just as too much coffee can make you jittery, heavy traffic can make a network jittery, and jitter is a major source of video-related IP problems. Proactively monitoring jitter can help you avert impending QoE issues before viewers notice them.

One way to overload an MPEG-2 stream is with excessive bursts. Packet bursts can cause a network-level or set-top box buffer to overflow or under-run, resulting in lost packets or empty buffers, which cause macroblocking or black/freeze-frame conditions, respectively. An overload of metadata such as video content PIDs can contribute to this problem.

Figure 3. The S-meter was the first commonly used metric to objectively read and report signal strength at an RF receive site. Photo courtesy of WA0EGI.

Probing a streaming media network at various nodes and under different load conditions makes it possible to isolate and identify devices or bottlenecks that introduce significant jitter or packet loss to the transport stream. Deviations from nominal jitter or data loss benchmarks are indicative of an imminent or ongoing fault condition.

QoE is one of many subjective measurements used to determine how well a broadcaster’s signal, whether on-air, online or on-demand, satisfies the viewer’s perception of the sights and sounds as they are reproduced at his or her location. I can’t help but find some humor in the idea that the ones-and-zeros of a digital video stream can be rated on a gray scale of 1-5 for quality.

Experienced broadcast engineers know the so-called quality of a digital image begins well before the light enters the lens, and with apologies to our friends in the broadcast camera lens business, the image is pre-distorted to some degree within the optical system before the photons hit the image sensors.

QoE or RST?
A scale of 1-5 is what ham radio operators have used for 100 years in the readability part of the Readability, Strength and Tone (RST) code system. While signal strength (S) could be objectively measured with an S-meter such as the one shown in Figure 3, readability (R) was purely subjective, and tone (T) could be subjective, objective or both. Engineers and hams know that as S and/or T diminish, R follows, but that minimum acceptable RST values depend almost entirely on the minimum R figure the viewer or listener is willing to accept. In analog times, the minimum acceptable R figure often varied with the value of the message.

Digital technology and transport remove the viewer or listener’s subjective reception opinion from the loop. Digital video and audio are either as perfect as the originator intended or practically useless. We don’t need a committee to tell us that. It seems to me the digital cliff falls just south of a 4x5x8 RST. Your opinion may vary.

Posted in Applications - Industries, Broadcast, Education, Government and Military, H.264, MPEG-2, MPEG-2 Basic Training, Professional AV - Pro AV, Sports, Video Networking - Enterprise IPTV, Video Streaming - Webcasting